Hybrid Cloud Cost Risks

Public cloud providers can offer enormous economies of scale and automation, so they can run infrastructure at a good price. However, when you scale up performance, your hourly fees rise, which results in higher operational expenses.

By adopting the cloud, you no longer need to invest in hardware, software and other IT infrastructure; instead, you opt for a pay-as-you-need service fee, shifting from CAPEX spending on IT hardware and software to an OPEX third-party leasing model. However, merely moving spending from capitalization with write-offs over a period to monthly service costs does not necessarily mean that you are reducing your spending. The more complex the hybrid cloud environment, the bigger the risk that OPEX costs explode.

Spending predictability has become more complex and challenging. Costs need to be monitored and controlled across the whole hybrid environment, per location and globally consolidated. The focus is on monitoring bandwidth, networking and load capacity; traffic and usage are the cost drivers.
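As a minimal sketch of such consolidated monitoring, assuming cost records exported from each provider's billing system as simple dictionaries (the field names and figures here are hypothetical, not any real billing schema), the per-location and global roll-up could look like this:

```python
from collections import defaultdict

# Hypothetical billing records as they might be exported from each
# provider's cost reporting; field names are illustrative only.
records = [
    {"provider": "aws",   "location": "eu-central-1", "service": "egress",  "cost_usd": 420.0},
    {"provider": "aws",   "location": "us-east-1",    "service": "compute", "cost_usd": 1310.5},
    {"provider": "azure", "location": "westeurope",   "service": "storage", "cost_usd": 275.25},
]

def consolidate(records):
    """Roll up costs per location and globally consolidated."""
    per_location = defaultdict(float)
    for r in records:
        per_location[r["location"]] += r["cost_usd"]
    return dict(per_location), sum(per_location.values())

per_location, total = consolidate(records)
print(per_location)  # cost per location
print(total)         # globally consolidated cost
```

The same roll-up can be keyed by business unit or service instead of location, which is what makes the heavy spenders visible later on.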

The vendor’s pricing schemes need to be translated, and the cost of services such as moving data into and out of the cloud needs to be fully understood before the Service Level Agreement is signed with the vendor. Prices vary depending on where the data is processed and stored (location-based pricing). As companies expand their cloud usage and leverage their locations’ infrastructure, additional products and solutions accumulate, which makes it more difficult to keep costs under control.

With consolidated cloud cost monitoring, companies can identify which parts of the organization are the heavy spenders on cloud resources and services, and can initiate the necessary cost adjustments.

But not only that: companies need to make sure that the services they subscribe to are actually used. The Service Level Agreement clearly lists what the customer is using and paying for, and companies have to verify that they really use those services. Servers managed on premises can easily be switched off when not used; in the cloud, however, you no longer manage servers yourself.

For example, the IT department may deploy a virtual machine for a specific business requirement; once it has gone to production, the data stays in the virtual machine and is no longer used, yet the customer keeps paying for the services of that unused virtual machine. Companies need to update their policies to shut down unused infrastructure in the hybrid cloud.
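A shutdown policy along these lines can be sketched as follows. The inventory structure and the 30-day idle threshold are assumptions for illustration; in practice the data would come from the provider's management tooling:

```python
from datetime import datetime, timedelta, timezone

# Hypothetical VM inventory; field names are illustrative, not a
# specific provider's API.
vms = [
    {"name": "build-agent-01", "last_used": datetime.now(timezone.utc) - timedelta(days=45)},
    {"name": "erp-prod",       "last_used": datetime.now(timezone.utc) - timedelta(hours=2)},
]

IDLE_THRESHOLD = timedelta(days=30)  # policy: flag anything idle for a month

def find_idle(vms, now=None):
    """Return names of VMs that exceeded the idle threshold."""
    now = now or datetime.now(timezone.utc)
    return [vm["name"] for vm in vms if now - vm["last_used"] > IDLE_THRESHOLD]

print(find_idle(vms))  # candidates for shutdown or deprovisioning
```

The point is not the code itself but that the policy is automated: flagged resources feed a review, and confirmed idle ones are deprovisioned instead of billed indefinitely.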

Cost risks appear when workloads moving to or from the hybrid cloud do not take advantage of flexibility and elasticity. It is often cheaper to run stable, static workloads on premises instead of in the cloud. In particular, withdrawing data from the cloud back to on-premises applications can become expensive, and so can moving data between different countries and locations.
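A back-of-the-envelope egress estimate makes the point. The per-GB rates below are made up for illustration; real rates vary by provider, region and monthly volume tier:

```python
# Hypothetical per-GB egress rates in USD; real provider rates differ
# and are usually tiered by monthly volume.
EGRESS_RATE_USD_PER_GB = {
    "same-region": 0.00,
    "cross-region": 0.02,
    "to-internet": 0.09,
}

def egress_cost(gb, route):
    """Estimate the data transfer cost for a given route."""
    return gb * EGRESS_RATE_USD_PER_GB[route]

# Pulling 50 TB back to an on-premises application over the internet:
print(round(egress_cost(50_000, "to-internet"), 2))
```

Even with modest per-GB rates, repeatedly withdrawing terabytes from the cloud adds up quickly, which is exactly why stable data-heavy workloads are often cheaper on premises.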

Companies also need to compare “what-if” CAPEX scenarios against OPEX business cases. This means defining an end-of-life date for the service, calculating the capital investment, maintenance and upgrade costs of the CAPEX case, and then determining when cumulative OPEX spending exceeds the CAPEX case. Based on this comparison, you may end or replace the service, or move it back to your own on-premises data centers.
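The break-even comparison can be sketched with a simple cumulative-cost model; all figures below are hypothetical:

```python
def breakeven_month(capex, onprem_monthly, cloud_monthly, horizon_months=120):
    """Return the first month at which cumulative cloud OPEX exceeds the
    cumulative on-premises cost (upfront CAPEX plus its running costs),
    or None if it never does within the horizon."""
    for month in range(1, horizon_months + 1):
        onprem = capex + onprem_monthly * month  # investment + maintenance/upgrades
        cloud = cloud_monthly * month            # pay-as-you-go service fee
        if cloud > onprem:
            return month
    return None

# Hypothetical figures: 120k upfront hardware with 1k/month maintenance,
# versus a 4k/month cloud service fee.
print(breakeven_month(120_000, 1_000, 4_000))  # month 41
```

A real comparison would also discount future cash flows and include upgrade cycles, but even this toy model shows when the OPEX line crosses the CAPEX case.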

Cloud technologies change at an enormous pace. Companies face a cost risk if they keep doing what they are doing without checking for better and more cost-effective solutions. Cloud providers innovate frequently, and companies should review their use of the cloud on a regular basis to check whether new features or cloud services could decrease service costs.

The number of cloud providers is also a cost factor. As easy as it is to onboard, for example, a new SaaS vendor, it becomes correspondingly more complex and expensive to handle and integrate multiple vendors. The number of vendors should be kept small; however, this should not prevent companies from adding a new vendor when it solves a particular problem or business requirement.

Choosing a private cloud, where you use dedicated cloud infrastructure, costs more than using a public cloud. Whether you manage it yourself or hire a cloud service provider, and whether you host it in your own data center or off premises, you are still responsible for operating and maintaining the data center, hardware, software, security and compliance. Companies should ensure that cloud costs are transparent, knowing which services are used, by whom and why, so that cost surprises are avoided. Believing that everything becomes cheaper by moving to the cloud is an illusion.

Hybrid Cloud Risks

As hybrid clouds become more common, the complexity and related risks increase significantly.

Using just a lift-and-shift approach is not enough to exploit the advantages of the hybrid cloud. It takes some homework beforehand to understand and design a cloud migration. At the beginning, it should be clarified why the new technologies are being used, what requirements result from them, and who builds up or provides the necessary skills.

An extended infrastructure using on premises and cloud applications requires proper mapping and needs to be controlled to ensure business continuity combined with accurate and reliable IT systems.

Companies aim to find the best way to combine on-premises applications, public cloud services, private cloud services and services from multi-cloud structures. It is often not easy to adapt and change quickly enough to remain agile in the digital business transformation, since business-critical and security-sensitive applications mostly remain on premises. The same applies to services regulated by law; not all services can be obtained from the cloud.

In order to adapt to business needs and choose wisely which cloud model fits best, an overall hybrid cloud operating model is required. IT departments face the risk that infrastructure and applications diverge and the holistic approach is lost. For example, the infrastructure sponsors may favor Microsoft’s Azure while the application sponsors favor AWS because of its artificial intelligence or machine learning capabilities.

Shadow IT Risks

An organization may end up with a multi-cloud strategy by accident, via shadow IT: technology adopted by business units independently of the IT department, which may subsequently be “reined in” for oversight by the CIO.

In the past, individual subsidiaries or regions often chose cloud services on their own initiative to transform their business more quickly. This resulted in a shadow IT that diverged from the corporate IT strategy, with a mess of different, unaligned SaaS products.

The extent of shadow IT revealed by McAfee’s 2019 Cloud Adoption and Risk Report is startling: 1,400 IT professionals in 11 countries were asked to estimate the total number of cloud services in use in their organization and came up with an average of 31. The actual average figure was 1,935:

Estimated versus actual levels of cloud service usage. Source: McAfee

At this point, companies recognize the need for a sourcing strategy. It forces you to understand why you want to go to the cloud.

No one should move a well-established on premises service to the cloud just because the cloud seems cool and modern.

There is no value if the services remain the same. A service from the cloud should deliver new and transformative results which benefit the customer.

Security Risks

Choosing a public cloud means less control over data security. The risks mainly concern data transfer, data leaks and data privacy. You depend on your vendor for the country in which, and the regulations under which, your data is managed. All data travels over the internet between the company’s network and the cloud service provider’s network, so secured channels must be ensured.

The risk analysis depends on the workloads, contractual controls with the public cloud service provider, architectural controls in the software and application environments, and technical controls regarding the IT infrastructure. Merely migrating to the cloud does not make companies more vulnerable; it depends on the right architecture and how the cloud security tools are integrated.

Customers are responsible for the data they upload to the cloud. Cloud service providers take care of the cloud but not of what is inside it, because customers configure the applications and define the access control and data-sharing rules.

Companies should pay attention to the following questions before going to the public cloud:

  • Can the public cloud service provider meet the company’s information security needs?
  • Does the contract with the cloud service provider, including its terms and conditions, ensure the company’s security standards?
  • Does the cloud service provider guarantee international governance standards such as the ISO 27000 series of security and privacy standards, especially ISO 27001 for information security management, ISO 27017 for cloud services security and ISO 27018 for cloud services related to data protection, such as privacy and GDPR compliance?
  • Does the cloud service provider follow architecture standards such as ISO 17789 for the cloud computing reference architecture, ISO 18384 for SOA reference architecture, ISO 24760 for ID management architecture and ISO 29101 for privacy architecture framework?
  • To what extent is the cloud service provider liable for data leaks?
  • How does the cloud service provider handle information security for its own staff and data centers?
  • How well is the software configured through which the data goes to the cloud, especially when workloads are running on both private and public clouds?
  • What cloud workload protection solution does the vendor offer? What are the further protections apart from endpoint protection platforms?
  • Where do we lack the skills to understand the new environment and architecture of the hybrid cloud?
  • Which API controls exist to monitor and regulate the data flow between on premises servers and the public cloud?
  • What kind of Hardware Security Modules (HSM) are in place to ensure trustworthiness and integrity of data in the hybrid cloud?
  • What Trusted Execution Environments (TEE) does the cloud service provider offer to ensure a secure and trusted runtime environment for applications? This relates both to cyberattacks and Identity Access Management (IAM) questions.

The biggest risk arises during the migration to the cloud. However, as functions and services change over time, the risk analysis is a repetitive task, and the international standards provide minimum audit cycle recommendations.

A high-risk threat for cloud workloads is misconfiguration. Incorrectly set up Identity and Access Management (IAM) systems and weak data transfer protocols increase cloud workload vulnerability. As the code of cloud applications changes, the permissions change too, and this can lead to further misconfigurations. Outside attackers focus on accessing data through phishing emails and by targeting cloud workloads, as they are exposed to the internet. Companies need to set up mechanisms, or check with their vendor, to detect malicious code in their productive hybrid cloud environment. Cloud workload protection is critical, and the threat model against cloud workloads is new and different from protecting on-premises environments.
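As an illustration of how such checks can be automated, here is a minimal sketch that flags overly permissive entries in IAM-style policy statements. The policy structure is a simplified assumption for illustration, not any specific provider's format:

```python
# Simplified IAM-style policy statements; real policy documents
# (e.g. AWS IAM JSON) are far richer, this is only a sketch.
policies = [
    {"name": "app-role",   "action": "storage:GetObject", "principal": "app-service"},
    {"name": "legacy-all", "action": "*",                 "principal": "*"},
]

def find_risky(policies):
    """Flag statements granting wildcard actions or open principals."""
    return [p["name"] for p in policies
            if p["action"] == "*" or p["principal"] == "*"]

print(find_risky(policies))  # candidates for review before deployment
```

Running checks like this in the deployment pipeline catches permission drift every time the application code, and with it the required permissions, changes.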

Companies should ensure that the security tools offered are compatible with their operating systems and other applications from the same vendor. For example, if a company uses Microsoft’s Azure cloud solution, Microsoft offers security tools such as Azure Active Directory, Identity and Access Management (IAM) tools, encryption and network isolation services. If the customer does not run the latest Windows operating system, they cannot use the full system management capabilities offered.

Nowadays, cloud service providers respond to these risks with such high security standards and measures that customers would spend far more money and time developing those skills and standards in house.

It can therefore even be seen as a benefit when the in-house measures fall short of the standard the vendor provides.

Risks and Benefits of the Hybrid Cloud

In a hybrid cloud environment, companies started with a combination of public and private clouds that operated as separate entities but were integrated. Now, companies are expanding their hybrid cloud by integrating more and more on-premises applications.

This means that in a hybrid cloud, a customer combines services from a private cloud and a public cloud, connected through an effective WAN, and sends and shares data between those different cloud services. The company depends on the availability of a public cloud platform from a trusted third-party cloud service provider, whereas the private cloud is either on the company’s premises or connected through a hosted private cloud service provider.

A hybrid cloud can also include on-premises data centers: the customer can still use on-premises servers and manage workloads across the different environments, such as the public cloud, private cloud and on-premises servers.

Originally, the hybrid cloud concept was to extend capacity from a private cloud to a public cloud without using on-premises servers. Today, it is a mixture of all three: private cloud, public cloud and on-premises applications. This allows companies to be more flexible in deploying workloads, whether that means using on premises for compliance or sensitive data, or achieving higher scale for high-volume, low-sensitivity data in the public cloud.
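The placement logic described above can be sketched as a simple rule set; the attributes and ordering below are illustrative assumptions, not a prescribed framework:

```python
def place_workload(regulated, sensitive):
    """Toy placement rules distilled from the text: regulated data stays
    on premises, sensitive data goes to the private cloud, and everything
    else can scale out in the public cloud."""
    if regulated:
        return "on-premises"
    if sensitive:
        return "private-cloud"
    return "public-cloud"

print(place_workload(regulated=False, sensitive=True))  # private-cloud
```

Real placement decisions of course weigh latency, data volume, egress costs and vendor capabilities as well, but an explicit rule set like this keeps the decisions consistent across teams.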

Depending on their IT strategies, companies tend to go for vendors offering solutions across different technology platforms, such as Anthos from Google, OpenShift (Red Hat) from IBM and VMware Cloud from VMware. With this approach, companies use tools to manage different environments by extending services down to their locations, managing their clouds across different vendors.

The other approach is to go for vendors who more closely reflect what customers already have in their data centers, which also allows customers to keep control of those data centers. Examples are AWS Outposts from AWS, Azure Stack Hub from Microsoft and Cloud@Customer from Oracle, as in the following example.

For example, a company uses a private cloud environment (i.e. Oracle’s Dedicated Region Cloud@Customer) for a part of its data and outsources other data to a public cloud (e.g. Microsoft Azure, AWS, Google or another public cloud service provider).

The aim is to create real business value by combining services and data from different cloud models through a consolidated, unified, automated and well-managed IT environment. Companies take advantage of system interoperability and expect a hybrid cloud to scale out and quickly provision new resources while being able to move workloads and data between the private and public cloud environments. Connecting a company’s on-premises private cloud services with an external vendor’s public cloud services into a single infrastructure gives the company flexibility in running its applications and workloads.

The reasons for using the hybrid cloud are often historical: companies started moving to the cloud and ended up in a hybrid environment. They also range from running analytics and individual workloads in the cloud to running whole applications there. Another reason can be the end of life of data centers, which requires a business transformation to the hybrid cloud. Some companies transfer only some workloads to the hybrid cloud, whereas others take full advantage of frameworks with calculation models, artificial intelligence and machine learning, often provided by cloud service providers or dedicated vendors.

Hybrid cloud adoption also depends on whether the company is regulated, as finance, insurance and pharmaceutical companies are, or whether it is already a digital-native company like the ones from Silicon Valley. On the other side, you will find traditional companies and government institutions, which are more cautious towards the cloud.

In the past, public clouds and private clouds were easily defined by location and ownership.

Today, this became more complex:

  • Public clouds traditionally ran off-premises, but public cloud providers are now running cloud services on their clients’ on premises data centers.
  • Private clouds traditionally ran on premises, but organizations are now building private clouds on rented, vendor owned data centers located off premises.
  • Business wants to obtain services faster and easier instead of investing into hardware, building skills and having administration staff to take care of operations.

The fact that companies can get a fully managed service from the cloud is a strong argument for companies for whom IT is only a tool for their core business. IT has moved to a consumption-based model instead of investment in your own infrastructure: you can buy and use hybrid cloud services as you need them.

For example, a company runs basic, non-sensitive workloads on the public cloud and keeps business-critical and security-sensitive applications and data in the private cloud or on its on-premises legacy systems, both safely behind the company’s firewall, with data flowing to the private cloud and back. The company can leverage its SaaS applications and move data between its private cloud or data center resources. Business processes are designed as services so that they can connect different private and public cloud environments as though they were a single environment.

Hybrid clouds are becoming increasingly complex infrastructures, and in the meantime companies are challenged to keep operations stable and efficient. The question remains how companies decide which parts of their processes to manage, purchase and use. To strike the right balance, companies need to define the future state of responsiveness and flexibility for their business, customers, partners and end users.

The third cloud wave: The Multi Cloud dilemma

The third cloud wave

While cloud computing is used in most companies, the cloud is becoming ever more important for achieving digital transformation.

As we go through different waves in industries, there are rising and falling technology cycles which obviously also apply to the cloud technology.

Companies are now updating their processes and upskilling their teams to maintain the necessary control as cloud technologies reach new waves.

During the first wave, companies focused on moving their systems to the cloud in IT-driven initiatives, often without business involvement. The focus was on cost savings and agility. When companies learnt that there is no single cloud strategy, they started to differentiate towards vendors who are strong in specific services and offerings rather than depending on one vendor.

As a consequence, in the second wave, multi cloud strategies became more popular. Because cloud providers became differentiated, companies started to choose the best cloud service for their specific IT systems and applications.

This kind of general-merchandise approach resulted in costs increasingly getting out of control, and CIOs had difficulties justifying the total return on their cloud technology investments.

The Multi Cloud dilemma

The multi-cloud strategy reduced vendor dependencies; however, application portability and integration still remain a challenge.

Companies therefore started to withdraw from the various multi-cloud solutions, especially in the public cloud, and to move more towards an on-premises, data center and private cloud approach.

Workloads for the private cloud are still critical, and companies modernize their private cloud infrastructure in parallel with public cloud initiatives because they need to understand how many and what kinds of workloads remain in the private cloud. Most private clouds are hosted, and the approaches revolve around virtualization, orchestration and the control plane.

One of the major drivers of the third wave is the return to cost control. On the one hand, companies want to maximize the return on their application investments. On the other hand, as they go through the digital transformation, the costs of transforming those applications become much higher because of the volume, and the returns do not meet expectations, despite the promised economies of scale of public clouds. Those economies are not achieved when applications are simply shifted to the cloud on a traditional on-premises architecture. A container approach, for example, can be seen as an answer for better cloud adoption.

So the question comes back to redesigning the appropriate architecture, one which allows companies to achieve better cost savings and to use modern cloud technologies to get the maximum from the cloud service providers.

Examples described in my previous blog series about Rising Cloud Technologies are:

  • Distributed Cloud architecture
  • API-Centric SaaS
  • Cloudlets
  • Blockchain PaaS
  • Cloud-Native architecture
  • Containers
  • Site Reliability Engineering
  • Edge Computing
  • Service Mesh
  • Microservices

Through the next couple of years, legacy applications migrated to the public cloud infrastructure as a service (IaaS) will require optimization to become more cost effective.

The focus is on small, discrete functions and on scaling only those business functions which are in demand, instead of scaling all functions as in previous waves. The homework, though, is to understand the current architecture, identify silos and redundant applications, and replace them with, for example, microservices, cloud-native architectures and other concepts from the rising cloud technologies to achieve reliability while reducing complexity and costs.

Coming from a traditional IT infrastructure with the burden of existing silo architectures, data is unstructured, available in various file formats and distributed across different storage systems with different hierarchies. This makes it difficult to come up with a unified data model and avoid inconsistent data in the cloud. This situation is often found in regulated industries such as financial services, where companies depend on highly complex, old legacy systems which cannot simply be “lifted and shifted” to the cloud.

Companies getting the maximum from the cloud will challenge their multi-cloud environment by investing only in the cloud services which are best for the business. The leading cloud service providers will expand their portfolios by serving a subset of their services for low-latency application requirements.

The cloud in your own data center

Most regulated companies such as banks, government institutions and pharmaceutical companies still run their IT systems on their own premises instead of in a public cloud infrastructure, for security reasons, to keep their data in their own data centers.

These companies miss out on the advantages of cloud technologies such as embedded machine learning, artificial intelligence and autonomous databases, which reduce costs and security risks by eliminating user errors through automation. The cloud does not work the same way as a traditional data center does.

Cloud service providers are now coming up with a new business model to give those companies the benefits of the cloud, such as pay-as-you-go and pay-per-use, rapid elasticity and the latest patches, with an infrastructure run by the vendor but physically located in the customer’s data center.

Vendors put their own cloud hardware and software in the customer’s data center, for example a cloud-based autonomous database. This has the advantage that system and database administrators and developers can focus on innovation instead of time-consuming maintenance, which would expose them to higher security risks of data breaches and failures.

Users pay only for what they use, and the infrastructure sits in the customer’s data center, behind their firewall. The data does not travel between a public or private cloud over the outside internet to reach the user; it all stays in house.

The same business model can be leveraged for a public cloud inside an enterprise data center. Companies can then use a wider range of cloud services in their own data center, including new technologies in the cloud service providers’ portfolios such as machine learning, artificial intelligence, the Internet of Things and blockchain.

Conclusion

Companies using in-house cloud services from an external vendor have the advantage that the hardware, software and data are in their own data center while the vendor manages the infrastructure, patching, updates, security and technology upgrades through a remote connection. Using, for example, autonomous services provided by the vendor, the customer can benefit from machine learning instead of using third-party tools and trying to integrate them into their systems. And if performance can be increased by running the infrastructure in house, the impact on cost savings is also beneficial.

However, the vendor must not have access to sensitive data. The vendor’s role is the same as if the customer were using a public cloud but now this cloud is physically inside the enterprise and the data does not travel on the outside internet.

It is important to understand how the cloud differs from traditional data centers. Companies need to pay attention to leveraging the skills of their IT data center staff and to learning new things, as the cloud requires a different approach, cost model and infrastructure management than building or replicating a data center in the cloud.

The focus should be on creating real value by transforming how you operate today and by benefiting from data analytics, automation, machine learning and artificial intelligence, becoming more agile and efficient. Whether this is achieved with a public, private, hybrid or multi cloud does not matter. If a company wants to survive in the future, it needs to transform its people and its culture.