How the software market will evolve alongside Sky computing
By Assaf Araki and Ben Lorica.
Cloud computing is one of the fastest-growing areas of the enterprise IT market. A recent study estimates that the global market for cloud computing will grow from $371.4 billion in 2021 to $832.1 billion in 2025. Yet we are still very much in the early stages of migration to the cloud: data from Morgan Stanley’s survey of CIOs suggests that only 23% of workloads run in the cloud today.
While cloud computing has certainly improved access to computing resources and specialized software, as we note below, there are signs that a lack of standardization is impeding innovation. In addition, cloud providers increasingly promote managed software services that are available only on their platforms. The differences in offerings and costs across providers have not been lost on users of cloud platforms. In fact, as companies move more of their compute workloads to the cloud, a recent WSJ article noted that more of them are opting for multi-cloud strategies to lower costs and access the best services.
Sky Computing
There are nascent efforts to swing the pendulum back toward more standardization. A widely read paper by UC Berkeley Professors Ion Stoica and Scott Shenker describes technical challenges that must be overcome to arrive at “Sky computing,” a notion of utility computing that stems from a 1961 paper by John McCarthy. Sky computing represents a more commoditized version of cloud computing where cloud platforms are more like a utility such as a telephone service:
No matter what provider you use, you can reach anyone, and switching providers is relatively easy; you can even keep your number when switching.
Stoica and Shenker note that while cloud computing has completely transformed IT, in many areas it still falls short of the “utility computing” service McCarthy described 60 years ago. Cloud computing is a fragmented market, with the top three providers (AWS, Azure, and Google Cloud Platform, or GCP) garnering only about 61% of total cloud computing revenue. More importantly, the different cloud platforms provide their own programming interfaces, their own dizzying arrays of proprietary software services, and even their own proprietary hardware (e.g., TPUs on GCP). Fragmentation is exacerbated by data gravity: once data is stored with one cloud provider, it becomes very expensive to move it out.
Given companies’ growing propensity to opt for multi-cloud strategies, this fragmentation can impede innovation. Combining best-of-breed hardware and software solutions can be difficult, if not impossible. Take the case of new hardware accelerators for deep learning: users of AWS cannot use TPUs, specialized accelerators available only on Google Cloud. Similarly, public cloud users cannot use specialized hardware (e.g., Cerebras) unless it becomes available through their cloud provider.
Sky computing envisions cloud computing platforms that are closer to the “utility computing” vision described by McCarthy. In this post, we describe the cloud computing models that will become more prevalent on the way toward realizing Sky computing.
Evolution of cloud computing

Cloud platforms enable companies to modernize software across two key dimensions:
- Technology Level: Companies are moving toward “service stacks”: software that relies on services owned and managed by disparate teams. This dimension describes the level of software abstraction available for the disaggregation of monolithic software and for building systems that rely on microservices. Progress in this dimension also eases the transition toward using managed software services.
- Management Level: This describes the level of management needed for key building blocks (hardware, software, and operating systems). Low means all the building blocks are self-managed by users, and High means the building blocks are managed by the cloud service provider (CSP) or the application provider.

To illustrate what software could become as the vision toward Sky computing is realized, we describe how computing models have impacted GrayDB, a hypothetical database company. We assume that GrayDB was originally released toward the end of the last century as an analytical distributed database aimed at the data warehouse market. To ease implementation, GrayDB offered customers a reference architecture that included hardware elements such as server configuration, compute, and storage, coupled with software elements and other software prerequisites.
With the emergence of public cloud service providers a decade later, GrayDB’s customers asked that its software be made available on public cloud infrastructure. Cloud enablement refers to the process of taking software originally built for static, on-premises environments and making it run on virtual machines (VMs) in the public cloud. GrayDB adjusted its reference architecture to run on top of public cloud virtual machines.
Public cloud providers continued to innovate and climb the customer value stack beyond infrastructure-as-a-service (IaaS), offering various managed software services, such as databases, analytics, business applications, and machine learning. Five years ago, as containers became ubiquitous and as systems for automating the deployment, scaling, and management of containerized applications gained traction, GrayDB embraced these technologies and became cloud native. The company adopted containers (Docker), orchestration (Kubernetes), and serverless technologies to run GrayDB at scale in modern, dynamic environments such as public, private, and hybrid clouds.

Most recently, the company began offering GrayDB as a Service (GaaS) on all major cloud providers. GaaS is centrally hosted software that runs on hardware controlled by the service provider, with access provided on a subscription basis. Customers who want to use a SaaS product that operates on all major cloud service providers must still first choose a CSP. Enabling an existing SaaS product on a new CSP requires many engineers and can take many months, depending on the complexity of the product.
Looking to the future, GrayDB sees a world where users will be able to use SaaS without needing to commit to a CSP upfront. As we noted in our previous post on multi-cloud native applications, this is akin to buying an application once and being able to use it on both a Mac and a PC. As a multi-cloud native product, GrayDB abstracts away cloud providers while providing the best service to the customer at the optimal price. In the context of the Sky computing vision, this hypothetical example shows how a more commoditized version of cloud computing will make it easier for companies to build multi-cloud native software.
Closing Thoughts
We close by speculating on how the enterprise software market will evolve alongside Sky computing.
- Software will increasingly be able to leverage multiple CSPs: In a previous post, we noted that the vast majority of companies are already multi-cloud. The rise of microservices, containers, and Kubernetes enables the software community to move away from monolithic software. This desire for greater agility pertains not only to hardware and services but also to cloud service providers. As the technology for building multi-cloud native applications matures, and as the vision for Sky computing comes to fruition, companies will be able to use a variety of CSPs seamlessly.
- Stateless apps will be the first to become multi-cloud native: Stateless applications handle transactions that can be understood in isolation—a single request accompanied by a single response. For example, when a search query is interrupted, it is easy to resubmit a new query. Stateless applications were the first to move to cloud computing platforms and serverless computing offerings. We expect stateless applications will migrate more quickly to both multi-cloud native and Sky computing settings.
- Stateful applications will continue to be SaaS first: Stateful applications are used in situations where previous transactions impact current processes. Examples include email, online banking, DBMS, and other systems. In recent years, most new data management, BI, ETL, and AI solutions (Snowflake, Databricks, DataRobot, and more) have been SaaS-first products, and we expect this trend to continue.
- We’ll see an increase in specialization among cloud service providers: There are many parameters to consider when choosing a cloud provider. Some parameters are related to cost, others pertain to availability and throughput, while others might focus on latency and location of CSPs’ data centers. Alongside these factors is the realization that different workloads and different verticals might represent large enough opportunities to warrant their own specialized cloud providers. Early examples include Axoni (financial services), Vast and Genesis Cloud (GPU cloud platforms), POD (HPC cloud platform), and more.
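The stateless/stateful distinction drawn above can be sketched in a few lines. The function and class names are hypothetical, chosen only to mirror the search and banking examples from the text:

```python
def stateless_search(query: str) -> list[str]:
    # Stateless: each call is self-contained, one request and one
    # response. If a call is interrupted, the client simply resubmits
    # the same query; no server-side context is lost.
    corpus = ["sky computing", "cloud computing", "utility computing"]
    return [doc for doc in corpus if query in doc]

class StatefulAccount:
    """Stateful: prior transactions affect later ones, as in online
    banking, where the balance carries state between requests."""

    def __init__(self, balance: float = 0.0):
        self.balance = balance

    def deposit(self, amount: float) -> float:
        self.balance += amount
        return self.balance
```

Because a stateless handler keeps no context between calls, it can be rescheduled onto any provider at any time, which is why we expect such workloads to migrate to multi-cloud and Sky computing settings first.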
Related content: Other posts by Assaf Araki and Ben Lorica.
- Get Ready For Confidential Computing
- Data Management Trends You Need to Know
- Taking Low-Code and No-Code Development to the Next Level
- What is DataOps?
- The Growing Importance of Metadata Management Systems
- AI and Automation meet BI
- Demystifying AI Infrastructure
- Software 2.0 takes shape
Assaf Araki is an investment manager at Intel Capital. His contributions to this post are his personal opinion and do not represent the opinion of the Intel Corporation. Intel Capital is an investor in Anyscale and Axoni. #IamIntel
Ben Lorica is co-chair of the Ray Summit, chair of the NLP Summit, and principal at Gradient Flow. He is an advisor to Anyscale and Databricks.