Four Cloud Trends To Watch In 2018


As cloud technologies become an ever-more-critical part of the IT landscape, advancement in the DevOps and infrastructure space doesn’t look likely to slow down in 2018. We have outlined four trends that we expect to continue disrupting the ever-changing cloud ecosystem this year. 

Google and Microsoft Close Gap with Amazon

Over the years, Amazon Web Services has been the undisputed leader in cloud services, dominating market share and setting the pace of innovation. This year we expect Google and Microsoft to close the gap and begin to position themselves as true competitors to AWS.

The catalog of fully managed AWS services keeps growing, now spanning multiple machine-learning PaaS offerings, a hosted graph database, "serverless" relational databases, and on-demand deep-learning-enabled video analysis. Meanwhile, Azure and Google Cloud have quietly been catching up in the market for core cloud services (storage and compute), which remain the primary decision drivers for large enterprises.

Google has recently announced the lofty goal of building its cloud-services revenue to match what the company makes in advertising sales by 2020, an ambition which should leave Amazon wary given the Alphabet family’s ability to vertically integrate with its other robust brands.

At Mobomo, we have also noticed a pattern of customers wanting to build out parallel infrastructure in Azure or Google Cloud, or even downsize their AWS footprint, to meet increasingly complex cloud-posture requirements.

Multi-Cloud is the New Hybrid Cloud

As market share in the public cloud space becomes less monopolized, “multi-cloud” will become the buzzword in 2018 that “hybrid cloud” was in years past.

More and more, the three major providers will compete on cost in mostly commoditized markets like on-demand compute and short-term storage. That puts a premium on the ability to parcel out workloads among the major public cloud providers.

Open-source tools like Terraform take a lowest-common-denominator approach, letting architects define cloud-agnostic infrastructure-as-code and deploy resources across multiple providers according to whatever metrics matter most (resource cost per unit of time, network latency to the target, desired redundancy level, SLA requirements, and so on).

The clear benefit of these open-source tools is avoiding the provider lock-in that comes with defining architecture in cloud-specific offerings like Amazon's CloudFormation.

If another public cloud offers better metrics for a particular workload, transferring those resources becomes a simple matter of lift-and-shift rather than a laborious re-architecture process.
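
To make that metric-driven placement idea concrete, here is a minimal Python sketch (not Terraform itself) of the kind of decision such tooling might automate. The provider names, hourly prices, and latency figures are invented placeholders; in practice these numbers would come from provider pricing APIs and network probes, and the result would feed a Terraform deployment rather than hand-rolled code.

# Hypothetical per-provider metrics for a single workload.
PROVIDERS = {
    "aws":   {"cost_per_hour": 0.096, "latency_ms": 42, "meets_sla": True},
    "azure": {"cost_per_hour": 0.091, "latency_ms": 55, "meets_sla": True},
    "gcp":   {"cost_per_hour": 0.089, "latency_ms": 61, "meets_sla": False},
}

def choose_provider(providers, max_latency_ms=60):
    """Pick the cheapest provider that satisfies latency and SLA constraints."""
    eligible = {
        name: m for name, m in providers.items()
        if m["meets_sla"] and m["latency_ms"] <= max_latency_ms
    }
    if not eligible:
        raise RuntimeError("No provider satisfies the placement constraints")
    return min(eligible, key=lambda name: eligible[name]["cost_per_hour"])

if __name__ == "__main__":
    print(choose_provider(PROVIDERS))  # "azure" with the sample numbers above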

One toolset that will continue to grow rapidly is microservice and container orchestration, especially technologies built on the Kubernetes ecosystem.

With Amazon's 2017 announcement of Fargate, a serverless engine for running ECS containers, and competing container offerings from Azure and Google Cloud, there is no doubt that containerizing workloads will be the most straightforward path to a true multi-cloud architecture.

Look for the microservice/container space to become even hotter in 2018, and for cloud consultancies to build practices around the Kubernetes ecosystem to enable true multi-cloud cost arbitrage.
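
As a rough illustration of that portability, the sketch below uses the official Kubernetes Python client to ask the same question of two clusters registered under different kubeconfig contexts, one per cloud. The context names ("aws-prod" and "gcp-prod") are hypothetical entries in a local kubeconfig, not anything a provider creates for you.

# Issue the same Kubernetes API call against clusters in two different clouds.
from kubernetes import client, config

CONTEXTS = ["aws-prod", "gcp-prod"]  # hypothetical kubeconfig contexts

for ctx in CONTEXTS:
    api_client = config.new_client_from_config(context=ctx)
    core = client.CoreV1Api(api_client=api_client)
    nodes = core.list_node().items
    print(f"{ctx}: {len(nodes)} nodes available")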

Internet-of-Everything Brings Compute to the Edge

The story of cloud to date has been about decoupling compute power from physical hardware, enabling on-demand workloads to access arbitrary amounts of processing capacity.

Yet this model has retained the classic client-server architecture inherent in the previous generation’s paradigm: cloud compute exists in the cloud, and packets must make the full round-trip to cloud provider data centers in order for inputs to be transformed to outputs.

The year ahead looks set to finally disrupt this last redoubt of traditional IT thinking. With the rise of ubiquitous Internet-of-Things devices and cloud-aware edge runtimes such as AWS Greengrass (not to mention extremely latency-sensitive applications like self-driving cars, which need to communicate in real time with other physically adjacent devices without an Internet round trip), compute capacity will move much closer to the edge in 2018.

Devices will intelligently determine which portions of compute workloads to process locally or offload to the cloud, based on factors like network availability and latency, output priority, compute market price, and application-level metrics.
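
A minimal Python sketch of that decision logic might look like the following. The thresholds and the local_inference/cloud_inference helpers are hypothetical stand-ins, not any particular vendor's API; a real device would plug in its own connectivity checks, latency probes, and pricing signals.

# A hedged sketch of edge-versus-cloud workload placement.
def local_inference(frame):
    # Stub: run a small on-device model (details omitted).
    return {"source": "edge", "result": None}

def cloud_inference(frame):
    # Stub: call a cloud inference endpoint (details omitted).
    return {"source": "cloud", "result": None}

def should_offload(network_up, round_trip_ms, priority, cloud_cost, budget):
    if not network_up:
        return False                  # no connectivity: must run locally
    if priority == "realtime" and round_trip_ms > 50:
        return False                  # latency budget blown: keep it on-device
    return cloud_cost <= budget       # otherwise offload if the price is right

def handle_frame(frame, ctx):
    if should_offload(ctx["network_up"], ctx["round_trip_ms"],
                      ctx["priority"], ctx["cloud_cost"], ctx["budget"]):
        return cloud_inference(frame)
    return local_inference(frame)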

Machine Learning Advances Insights-as-a-Service

As more deep learning compute moves to edge devices that push ever-increasing amounts of data into cloud storage, AI will find itself at a crossroads in 2018, with organizations of all sizes clamoring to implement machine learning algorithms to draw insights from larger and larger datasets.

At the same time, ML is moving further and further from the "metal", as seen in 2017's rapid advance from deep-learning IaaS solutions (such as hosted Apache MXNet) toward fully managed ML PaaS services like Amazon SageMaker.
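
For a sense of what "fully managed" means in practice, here is a hedged boto3 sketch that submits a SageMaker training job: you hand over a container image and S3 paths, and the service provisions and tears down the compute. The job name, bucket, IAM role, and image URI below are placeholders to replace with your own.

import boto3

sagemaker = boto3.client("sagemaker", region_name="us-east-1")

# Placeholder values throughout; SageMaker also ships built-in algorithm images.
sagemaker.create_training_job(
    TrainingJobName="demo-training-job-2018",
    RoleArn="arn:aws:iam::123456789012:role/SageMakerExecutionRole",
    AlgorithmSpecification={
        "TrainingImage": "123456789012.dkr.ecr.us-east-1.amazonaws.com/my-algo:latest",
        "TrainingInputMode": "File",
    },
    InputDataConfig=[{
        "ChannelName": "train",
        "DataSource": {"S3DataSource": {
            "S3DataType": "S3Prefix",
            "S3Uri": "s3://my-bucket/train/",
            "S3DataDistributionType": "FullyReplicated",
        }},
    }],
    OutputDataConfig={"S3OutputPath": "s3://my-bucket/output/"},
    ResourceConfig={"InstanceType": "ml.m4.xlarge", "InstanceCount": 1,
                    "VolumeSizeInGB": 50},
    StoppingCondition={"MaxRuntimeInSeconds": 3600},
)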

That means 2018 is the year machine learning takes another great leap toward the business intelligence user, becoming a key vertical in the turnkey SaaS market. More accurately, it enables a new Insights-as-a-Service (iNaaS) solutions space, in which cloud analytics platforms compete to combine multiple structured and unstructured data streams (from sensors and IoT devices as well as traditional application metrics and logs) and extract actionable conclusions for organizations.

The shift from machine learning PaaS to iNaaS will unlock artificial intelligence solutions for businesses at any scale, and all without the undifferentiated heavy lifting of building Big Data infrastructure and algorithmic compute platforms.

What is your cloud adoption strategy for 2018? Are you thinking about migrating to the cloud? Take our cloud readiness assessment to see how you compare in the market or speak with one of our cloud engineers to determine how these trends will impact your business objectives!