4 Strategies for Incrementally Migrating from VMs to Kubernetes using an API Gateway

  • October 5, 2019

An increasing number of organizations are migrating from a datacenter composed of virtual machines (VMs) to a “next-generation” cloud-native platform built around container technologies such as Docker and Kubernetes. Due to the inherent complexity of this move, however, the migration doesn’t happen overnight. Instead, an organization will typically run a hybrid, multi-infrastructure and multi-platform environment in which applications span both VMs and containers.

Beginning a migration at the edge of the system, using functionality provided by a cloud-native API gateway, and working inward toward the application opens up several strategies for minimizing pain and risk. A recently published article series on the Datawire Ambassador blog presented four strategies for planning and implementing such a migration:

  • deploying a multi-platform service discovery system capable of routing effectively within a highly dynamic environment;
  • adapting your continuous delivery pipeline to take advantage of best practices and avoid pitfalls caused by network complexity;
  • using traffic shifting to facilitate an incremental and safe migration (see the sketch after this list); and
  • securing your infrastructure with encryption and network segmentation for all traffic, from end user to service.
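The traffic-shifting strategy can be implemented at the edge with weighted routes. The following is a minimal sketch assuming Ambassador’s Mapping resource, with placeholder service names: a canary Mapping sends 10% of requests for a hypothetical /orders/ API to a new in-cluster service, while the unweighted Mapping continues to receive the remainder.

```yaml
# Existing route: most traffic still flows to the VM-backed service,
# exposed here via an externally resolvable hostname (placeholder).
apiVersion: getambassador.io/v2
kind: Mapping
metadata:
  name: orders-vm
spec:
  prefix: /orders/
  service: orders-vm.legacy.example.com
---
# Canary route: shift 10% of /orders/ traffic to the new Kubernetes
# service; raise the weight incrementally as confidence grows.
apiVersion: getambassador.io/v2
kind: Mapping
metadata:
  name: orders-k8s-canary
spec:
  prefix: /orders/
  service: orders.default.svc.cluster.local
  weight: 10
```

Because the weight lives in gateway configuration rather than application code, rolling back is simply a matter of reducing or removing the canary weight.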

During a migration to the cloud and containers, it is common to see a combination of existing applications being decomposed into services and new systems being designed using the microservices architecture style. Business functionality is often provided via an API that is powered by the collaboration of one or more services, and these components therefore need to be able to locate and communicate with each other.
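As a concrete illustration of the service discovery strategy, the sketch below assumes Ambassador’s ConsulResolver resource, which lets the gateway route to endpoints registered in Consul, a registry that can span both VMs and Kubernetes. The resolver name, Consul address, and the “catalog” service name are illustrative assumptions, not details from the original articles.

```yaml
# Point Ambassador at a Consul registry (address is a placeholder).
apiVersion: getambassador.io/v2
kind: ConsulResolver
metadata:
  name: consul-dc1
spec:
  address: consul-server.default.svc.cluster.local:8500
  datacenter: dc1
---
# Route /catalog/ to whatever endpoints Consul currently advertises
# for the "catalog" service, whether they run on VMs or in pods.
apiVersion: getambassador.io/v2
kind: Mapping
metadata:
  name: catalog
spec:
  prefix: /catalog/
  service: catalog
  resolver: consul-dc1
  load_balancer:
    policy: round_robin
```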

Source: getambassador.io


Related Posts

Rate Limiting at the Edge

I’m sure many of you have heard of the “Death Star Security” model, in which the perimeter is hardened while little attention is paid to the inner core. While this approach is generally considered bad form in the current cloud-native landscape, there are still many things that need to be implemented at the edge in order to provide both operational and business-logic support. One of these is rate limiting. Modern applications and APIs can experience bursts of traffic over a short time period, for both good and bad reasons, and this needs to be managed well if your business model relies on the successful completion of requests by paying customers.
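As a hedged sketch of what edge rate limiting can look like, the configuration below assumes Ambassador’s RateLimitService resource, which delegates rate-limit decisions to an external gRPC service. The backend service name and the request label are placeholders, and the exact label syntax varies between Ambassador versions.

```yaml
# Register an external rate limit service; Ambassador consults it for
# every request that carries rate-limit labels.
apiVersion: getambassador.io/v2
kind: RateLimitService
metadata:
  name: ratelimit
spec:
  service: "ratelimit-backend:5000"
---
# Attach a label group to a route; the external service applies its
# configured limit (e.g. requests per minute) to matching descriptors.
apiVersion: getambassador.io/v2
kind: Mapping
metadata:
  name: api-backend
spec:
  prefix: /api/
  service: api-backend
  labels:
    ambassador:
      - request_label_group:
          - api-request   # generic-key descriptor (assumed syntax)
```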

Read More
Server Name Indication (SNI) Support Now in Ambassador

We’ve discussed many interesting use cases for SNI support within the edge proxy/gateway with both open source and commercially supported users of Ambassador. In a nutshell (and with thanks to Wikipedia), SNI is an extension to the TLS protocol that allows a client to indicate which hostname it is attempting to connect to at the start of the TLS handshaking process. This allows the server to present multiple certificates on the same IP address and TCP port number, which in turn enables the serving of multiple secure websites or API services without requiring all of those sites to use the same certificate.
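As a brief illustration, SNI support in Ambassador is configured through TLSContext resources that associate hostnames with certificates; the hostnames and secret names below are placeholders.

```yaml
# Two TLSContexts serving different hostnames from the same IP address
# and port; Ambassador presents the certificate matching the SNI
# hostname sent by the client.
apiVersion: getambassador.io/v2
kind: TLSContext
metadata:
  name: example-com-tls
spec:
  hosts:
    - example.com
  secret: example-com-cert
---
apiVersion: getambassador.io/v2
kind: TLSContext
metadata:
  name: api-example-org-tls
spec:
  hosts:
    - api.example.org
  secret: api-example-org-cert
```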

Read More