Rising Cloud Technologies: Service Mesh

New technologies help companies transform into digital organizations. Identifying emerging cloud technologies and understanding their impact on the existing cloud landscape can make that transformation more successful.

While some companies do not have a formal cloud strategy in place, most are using at least one cloud technology such as SaaS, IaaS or PaaS – whether in a private, public or hybrid cloud.

Other companies follow a multi-cloud strategy because it allows them to select different cloud services from different providers – some are better suited to certain tasks than others. For example, some cloud platforms specialize in large data transfers or offer integrated machine learning capabilities.

Hybrid and multi-cloud are the most popular cloud models today. Having seen the first benefits of cost savings and increased efficiency, companies now focus more on agility, speed and time to market to enable digital business success.

The new cloud capabilities increase the deployment options. Companies want the benefits of the cloud in all of their IT systems, and with the growing offering of cloud service providers, customers can now decide on the technology, services, providers, locations, form factors and level of control.

Since the digitalization journey raises new considerations and expectations, companies are now looking into technical areas to improve their cloud landscape such as the distributed cloud, API-Centric SaaS, Cloudlets, Blockchain PaaS, Cloud Native, Site Reliability Engineering, Containers, Edge Computing and Service Mesh.

Service Mesh

A service mesh controls how different parts of an application share data with each other. Unlike other communication management systems, a service mesh is a configurable and dedicated infrastructure layer directly integrated into the application. It can be used to document how well (or poorly) the various components of an application interact. In this way, communication is optimized and failures can be minimized, even as the applications grow.

Each part of an application, a “service”, in turn relies on other services to provide the user with the desired function. For example, if you buy a product via an e-commerce application, you want to know whether the product is in stock. The service that communicates with the company’s inventory database needs to communicate with the product website, which in turn needs to communicate with the user’s online shopping cart. To increase business value, this retailer may eventually develop a service that recommends products to the user within the application. This new service communicates with a database of product tags for its recommendations, but also with the same inventory database that the product website accesses. So we are dealing with a large number of reusable moving parts.

Modern applications are often unbundled in this way, as a network of services, each performing a specific business function. To perform its function, a service may need to request data from other services. But what happens if some of these services are overloaded with requests, such as our retailer’s inventory database? This is where a service mesh comes in: an infrastructure layer that routes requests from one service to another and optimizes the interaction of all the moving parts.
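To make the routing idea concrete, here is a minimal sketch (all names hypothetical, not taken from any real service mesh product) of a router that sends each request to the least-busy instance of a service – the kind of load-aware decision a real mesh makes transparently at the proxy layer:

```python
from dataclasses import dataclass


@dataclass
class Instance:
    """One running copy of a service."""
    name: str
    in_flight: int = 0  # requests currently being processed


class MeshRouter:
    """Toy router: picks the least-loaded instance of a service."""

    def __init__(self):
        self.services = {}  # service name -> list of Instance

    def register(self, service, instance):
        self.services.setdefault(service, []).append(instance)

    def route(self, service):
        # "Least request" load balancing: prefer the least-busy instance.
        return min(self.services[service], key=lambda i: i.in_flight)


router = MeshRouter()
router.register("inventory", Instance("inventory-1", in_flight=5))
router.register("inventory", Instance("inventory-2", in_flight=1))
print(router.route("inventory").name)  # -> inventory-2
```

In a real deployment this decision happens in the sidecar proxies, not in application code; the sketch only illustrates the routing logic itself.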

The difference between a service mesh and microservices

With a microservice architecture, developers can change the services of an application without having to redeploy the whole application. In contrast to application development in other architectures, individual microservices are built by small teams that can freely choose their tools and programming languages. Microservices are developed largely independently of one another, communicate with one another and can fail individually without bringing down the entire application.

The basis of microservices is inter-service communication. Communication logic can be programmed into each service without a service mesh, but a service mesh becomes more and more useful as the complexity of communication increases. In cloud-native applications built on a microservice architecture, a service mesh can combine a large number of separate services into one functional application.

Sidecar Proxies

In a service mesh, requests between microservices are transmitted via proxies in a separate infrastructure layer. The individual proxies that make up a service mesh are therefore sometimes called “sidecars”, because they run alongside each service rather than within it. Together these sidecar proxies, which are decoupled from each service, form a mesh network.

From a technical point of view, a sidecar proxy is assigned to each microservice, and all communication is conducted through it. Sidecar proxies use standardized protocols and interfaces to exchange information, and they can be used to control, manage and monitor communication. This additional infrastructure layer offers numerous advantages: the microservices interact securely and reliably, and by monitoring the communication, the service mesh detects problems in service-to-service communication and reacts automatically.
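The sidecar idea can be sketched in a few lines: a wrapper intercepts every call to a service and records metrics, while the service itself contains no communication or monitoring logic at all. The class and field names below are hypothetical, chosen only for illustration:

```python
import time


class Sidecar:
    """Toy sidecar: intercepts calls to a service and records metrics."""

    def __init__(self, service_name, handler):
        self.service_name = service_name
        self.handler = handler  # the wrapped service function
        self.metrics = {"requests": 0, "errors": 0, "total_seconds": 0.0}

    def call(self, request):
        start = time.perf_counter()
        self.metrics["requests"] += 1
        try:
            return self.handler(request)
        except Exception:
            self.metrics["errors"] += 1
            raise
        finally:
            self.metrics["total_seconds"] += time.perf_counter() - start


# The service itself stays free of any networking/monitoring concerns:
inventory = Sidecar("inventory", lambda sku: {"sku": sku, "in_stock": True})
result = inventory.call("ABC-123")
print(inventory.metrics["requests"])  # -> 1
```

Real sidecar proxies such as Envoy do this at the network level for every inbound and outbound request, but the principle is the same: observation and control live next to the service, not inside it.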

Without a service mesh, all microservices must be programmed with inter-service communication logic, which distracts developers from business objectives. It also makes communication errors harder to diagnose, because the logic for inter-service communication is hidden inside each individual service.

Each newly added service, or each new instance of an existing service running in a container, makes the communication environment of an application more complicated and adds another potential point of failure. In a complex microservice architecture, it can become almost impossible to diagnose the root cause of problems without a service mesh.

This is because a service mesh captures all aspects of inter-service communication as performance metrics. Over time, the data made visible by the service mesh can be fed back into the rules for inter-service communication, improving the efficiency and reliability of service requests.

For example, when a service fails, the mesh can collect data on how long it took to successfully retry it. Based on the collected downtime data, rules can then be written that determine the optimal waiting time before a new service call is made, ensuring that the system is not overloaded by unnecessary retries.
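A common form such a rule takes is retry with exponential backoff: wait a little longer before each new attempt so that a recovering service is not hammered with immediate retries. A minimal sketch (function and parameter names are hypothetical):

```python
import time


def call_with_retries(fn, max_attempts=3, base_delay=0.1):
    """Call fn(), retrying on connection errors with exponential backoff."""
    for attempt in range(max_attempts):
        try:
            return fn()
        except ConnectionError:
            if attempt == max_attempts - 1:
                raise  # give up after the last attempt
            # Wait base_delay, 2*base_delay, 4*base_delay, ... between tries.
            time.sleep(base_delay * 2 ** attempt)


# A flaky service that fails twice, then succeeds:
attempts = []
def flaky():
    attempts.append(1)
    if len(attempts) < 3:
        raise ConnectionError("inventory unavailable")
    return "ok"

print(call_with_retries(flaky, base_delay=0.01))  # -> ok
```

In a service mesh, this policy lives in the proxy configuration rather than in application code, and the observed downtime data can inform the choice of `max_attempts` and delay.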

Well-known service mesh products are Istio, Linkerd, Tetrate, Kuma, Consul, Maesh and in-house products from cloud providers, such as App Mesh from AWS.

Advantages of a service mesh

By creating an additional infrastructure layer through which all microservice communication is routed, a service mesh offers numerous advantages. All aspects of service-to-service communication can be captured, controlled and managed, which increases the efficiency, security and reliability of that communication. In addition, services can be scaled more easily and quickly because their business functionality is decoupled from the communication logic.

  • Developers can fully concentrate on programming the microservices without having to worry about how the services are connected.
  • The request logic lives in a visible infrastructure layer parallel to the services, making problems easier to detect and diagnose; the service mesh detects dysfunctional services and automatically redirects requests.
  • The microservice architecture becomes more stable and fault tolerant because the service mesh redirects requests away from non-functional services in time.
  • The sidecar proxies authenticate the services and encrypt and decrypt the transmitted data, adding security to the service mesh.
  • Microservices can be seamlessly integrated into the service mesh regardless of the platform and provider used.
  • Traffic and load control are possible regardless of the respective cloud or IT environment.
  • KPIs show possibilities for optimizing communication in the runtime environment.
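The automatic redirection away from failing services mentioned above is typically implemented as a circuit breaker: after a number of consecutive failures, traffic to an instance is cut off and sent to a healthy replica instead. A toy sketch of that behavior (class and instance names are hypothetical):

```python
class CircuitBreaker:
    """Trips after `threshold` consecutive failures."""

    def __init__(self, threshold=3):
        self.failures = 0
        self.threshold = threshold

    @property
    def open(self):
        # An "open" circuit means: stop sending traffic to this instance.
        return self.failures >= self.threshold

    def record(self, ok):
        self.failures = 0 if ok else self.failures + 1


def route(primary, fallback, breaker):
    """Send traffic to the fallback while the primary's circuit is open."""
    return fallback if breaker.open else primary


breaker = CircuitBreaker(threshold=3)
for _ in range(3):
    breaker.record(ok=False)  # three consecutive failures trip the breaker

print(route("inventory-1", "inventory-2", breaker))  # -> inventory-2
```

Service mesh products implement this (plus half-open probing to detect recovery) in the sidecar proxies, so no service has to carry this logic itself.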

Disadvantages of a service mesh

A service mesh must be understood conceptually in order to decide whether it is worthwhile for an application and which technology is most suitable. The development team then faces the complex task of configuring the service mesh, which involves technical as well as functional effort. The control plane components and the additional sidecar proxies deployed alongside each container require additional CPU and memory resources, which in turn increase the cost of operating the cluster. The actual additional resource requirements depend on the number of requests as well as on the service mesh product and its configuration; Istio, for example, needs more resources than Linkerd.

Another disadvantage of a service mesh is that the sidecar proxies can impact performance compared to direct communication between the services. Latency can increase due to the processing of data in the proxies, which can affect the end-user experience. The higher latency is caused by the additional proxy hops for each request: instead of a direct call between containers, two proxies – on the sender and on the receiver side – are now involved. The delay depends on the specific microservice system and the service mesh configuration, and should therefore be tested before the service mesh is deployed to production.

Conclusion

A service mesh enables central control of monitoring, resilience, routing and security, which are implemented decentrally in the sidecars. It fits well into a microservice architecture and can replace API gateways and many libraries. From a vendor perspective, Istio is the most popular service mesh product; its strengths lie in environments such as Kubernetes, and it also allows single virtual machines or containers to be integrated. Kubernetes is an open source system for automating the deployment, scaling and management of containerized applications, originally designed by Google and donated to the Cloud Native Computing Foundation.

The costs of using a service mesh are, apart from the expense and skills required to introduce any new technology, the increased resource consumption and the higher latency.

If companies are using microservices, they should consider using a service mesh, since it improves the stability, extensibility, transparency and security of their applications.
