BLOG: KUBEEDGE, A KUBERNETES NATIVE EDGE COMPUTING FRAMEWORK
KubeEdge becomes the first Kubernetes Native Edge Computing Platform with both Edge and Cloud components open sourced! Open source edge computing is going through its most dynamic phase of development in the industry. So many open source platforms, so many consolidations and so many initiatives for standardization! This shows the strong drive to build better platforms to bring cloud computing to the edges to meet ever increasing demand. KubeEdge, which was announced last year, now brings great news for cloud native computing! It provides a complete edge computing solution based on Kubernetes with separate cloud and edge core modules.
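As a rough sketch of what that cloud/edge split means in practice (not an official KubeEdge workflow): because edge nodes register with the Kubernetes API through the cloud core, an edge workload can be targeted like any other node-affine workload, for example with a node label. The label, namespace, image, and client-go usage below are illustrative assumptions.

    package main

    import (
    	"context"
    	"fmt"

    	appsv1 "k8s.io/api/apps/v1"
    	corev1 "k8s.io/api/core/v1"
    	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    	"k8s.io/client-go/kubernetes"
    	"k8s.io/client-go/tools/clientcmd"
    )

    func int32Ptr(i int32) *int32 { return &i }

    func main() {
    	// Build a client from the local kubeconfig (cloud side).
    	config, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
    	if err != nil {
    		panic(err)
    	}
    	clientset, err := kubernetes.NewForConfig(config)
    	if err != nil {
    		panic(err)
    	}

    	labels := map[string]string{"app": "edge-demo"}
    	deploy := &appsv1.Deployment{
    		ObjectMeta: metav1.ObjectMeta{Name: "edge-demo", Namespace: "default"},
    		Spec: appsv1.DeploymentSpec{
    			Replicas: int32Ptr(1),
    			Selector: &metav1.LabelSelector{MatchLabels: labels},
    			Template: corev1.PodTemplateSpec{
    				ObjectMeta: metav1.ObjectMeta{Labels: labels},
    				Spec: corev1.PodSpec{
    					// Hypothetical label: pin the pod to nodes that joined from the edge.
    					NodeSelector: map[string]string{"node-role.kubernetes.io/edge": ""},
    					Containers: []corev1.Container{{
    						Name:  "web",
    						Image: "nginx:alpine",
    					}},
    				},
    			},
    		},
    	}

    	_, err = clientset.AppsV1().Deployments("default").Create(context.TODO(), deploy, metav1.CreateOptions{})
    	if err != nil {
    		panic(err)
    	}
    	fmt.Println("deployment created; the edge component is expected to run it on the matching edge node")
    }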
Read more
DEEP DIVE INTO CILIUM MULTI-CLUSTER
Let’s review some of the use cases for connecting multiple Kubernetes clusters before we dive into the implementation details. High availability is the most obvious use case for most: operating Kubernetes clusters in multiple regions or availability zones and running replicas of the same services in each cluster. Upon failure, requests can fail over to other clusters. The failure scenario covered by this use case is not primarily the complete unavailability of an entire region or failure domain. A more likely scenario is temporary unavailability of resources, or a misconfiguration, leading to an inability to run or scale particular services in one cluster.
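As a concrete, if simplified, illustration of that failover use case, here is a plain client-side sketch (not how Cilium’s multi-cluster support actually wires it up; the endpoint URLs are placeholders): try the local cluster’s endpoint first and fall back to a remote cluster when it is unhealthy.

    package main

    import (
    	"fmt"
    	"net/http"
    	"time"
    )

    // Endpoints for the same service exposed in two clusters; placeholder URLs.
    var endpoints = []string{
    	"http://svc.cluster-a.example.internal/healthz",
    	"http://svc.cluster-b.example.internal/healthz",
    }

    // pick returns the first endpoint that answers its health check, modelling
    // the "fail over to another cluster" behaviour described above.
    func pick(client *http.Client) (string, error) {
    	for _, ep := range endpoints {
    		resp, err := client.Get(ep)
    		if err != nil {
    			continue // cluster unreachable, try the next one
    		}
    		resp.Body.Close()
    		if resp.StatusCode == http.StatusOK {
    			return ep, nil
    		}
    	}
    	return "", fmt.Errorf("no healthy cluster endpoint")
    }

    func main() {
    	client := &http.Client{Timeout: 2 * time.Second}
    	ep, err := pick(client)
    	if err != nil {
    		panic(err)
    	}
    	fmt.Println("routing requests to", ep)
    }

The point of multi-cluster service connectivity is to push this kind of decision below the application, so individual clients do not have to carry logic like this themselves.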
Read more
HOW IBM WATSON OVERPROMISED AND UNDERDELIVERED ON AI HEALTH CARE
In 2014, IBM opened swanky new headquarters for its artificial intelligence division, known as IBM Watson. Inside the glassy tower in lower Manhattan, IBMers can bring prospective clients and visiting journalists into the “immersion room,” which resembles a miniature planetarium. There, in the darkened space, visitors sit on swiveling stools while fancy graphics flash around the curved screens covering the walls.
Read more
INSIDE KUBERNETES RBAC
Kubernetes is a Container Orchestration Engine designed to host containerized applications on a set of nodes, commonly referred to as a cluster. Using a systems modeling approach, this series aims to advance the understanding of Kubernetes and its underlying concepts. The Kubernetes API is an HTTP API that provides Create/Read/Update/Delete access to query and modify the Kubernetes Object Store. Kubernetes supports multiple authentication and authorization strategies to control access to the API. This post provides a concise, detailed model of Kubernetes’ Role-based Access Control (RBAC), but may not be suitable as introductory material. The model is supported by partial specifications in TLA+.
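For readers who prefer code next to the formal model, here is a minimal client-go sketch of the two objects RBAC revolves around, a Role and a RoleBinding (the namespace, role name, and service account are made up for illustration):

    package main

    import (
    	"context"

    	rbacv1 "k8s.io/api/rbac/v1"
    	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    	"k8s.io/client-go/kubernetes"
    	"k8s.io/client-go/tools/clientcmd"
    )

    func main() {
    	config, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
    	if err != nil {
    		panic(err)
    	}
    	client, err := kubernetes.NewForConfig(config)
    	if err != nil {
    		panic(err)
    	}
    	ctx := context.TODO()

    	// A Role granting read-only access to pods in the "demo" namespace.
    	role := &rbacv1.Role{
    		ObjectMeta: metav1.ObjectMeta{Name: "pod-reader", Namespace: "demo"},
    		Rules: []rbacv1.PolicyRule{{
    			APIGroups: []string{""}, // "" is the core API group
    			Resources: []string{"pods"},
    			Verbs:     []string{"get", "list", "watch"},
    		}},
    	}
    	if _, err := client.RbacV1().Roles("demo").Create(ctx, role, metav1.CreateOptions{}); err != nil {
    		panic(err)
    	}

    	// A RoleBinding connecting that Role to a service account.
    	binding := &rbacv1.RoleBinding{
    		ObjectMeta: metav1.ObjectMeta{Name: "pod-reader-binding", Namespace: "demo"},
    		Subjects: []rbacv1.Subject{{
    			Kind:      "ServiceAccount",
    			Name:      "reporting-bot",
    			Namespace: "demo",
    		}},
    		RoleRef: rbacv1.RoleRef{
    			APIGroup: "rbac.authorization.k8s.io",
    			Kind:     "Role",
    			Name:     "pod-reader",
    		},
    	}
    	if _, err := client.RbacV1().RoleBindings("demo").Create(ctx, binding, metav1.CreateOptions{}); err != nil {
    		panic(err)
    	}
    }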
Read more
BACK TO TRAEFIK 2.0
Back in 2015, a revolution was under way: we were moving from manual, handcrafted infrastructures to container-based, industrial, and human-free platforms. In those dark ages of orchestration, edge traffic was remarkably difficult to manage. On one side, we had traditional reverse proxies built for static infrastructures; on the other, we were building dynamic clusters meant to deploy and manage thousands of microservices. The idea of having a simple and automatic edge router, all in one binary, was appealing, but also idealistic. The foundation of Traefik was laid that year, paving the way to building a project with strong values: simplicity of configuration, modern features, and openness to the community.
Read more
OPEN SOURCING PELOTON, UBER’S UNIFIED RESOURCE SCHEDULER
First introduced by Uber in November 2018, Peloton, a unified resource scheduler, manages resources across distinct workloads, combining separate compute clusters. Peloton is designed for web-scale companies like Uber with millions of containers and tens of thousands of nodes. Peloton features advanced resource management capabilities such as elastic resource sharing, hierarchical max-min fairness, resource overcommits, and workload preemption. As a cloud-agnostic system, Peloton can be run in on-premise data centers or in the cloud. At Uber, Peloton is a critical piece of infrastructure powering our compute clusters. It is currently running many kinds of batch workloads in production, and we are planning to migrate stateless services workloads to it as well.
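To give a feel for the max-min fairness Peloton builds on, here is a deliberately simplified, single-level sketch of the allocation rule (Peloton’s hierarchical, elastic version is far more involved; the numbers are made up):

    package main

    import (
    	"fmt"
    	"sort"
    )

    // maxMinFair allocates capacity across demands: everyone gets an equal share,
    // and whatever a small demand leaves on the table is redistributed to the rest.
    func maxMinFair(capacity float64, demands []float64) []float64 {
    	alloc := make([]float64, len(demands))
    	idx := make([]int, len(demands))
    	for i := range idx {
    		idx[i] = i
    	}
    	// Satisfy the smallest demands first.
    	sort.Slice(idx, func(a, b int) bool { return demands[idx[a]] < demands[idx[b]] })

    	remaining := capacity
    	for pos, i := range idx {
    		share := remaining / float64(len(idx)-pos) // equal split of what is left
    		if demands[i] < share {
    			share = demands[i] // demand fully satisfied; surplus flows to others
    		}
    		alloc[i] = share
    		remaining -= share
    	}
    	return alloc
    }

    func main() {
    	// Four workloads competing for 10 units of CPU.
    	fmt.Println(maxMinFair(10, []float64{1, 2, 4, 8})) // [1 2 3.5 3.5]
    }

For a capacity of 10 and demands of 1, 2, 4, and 8, the allocation comes out to 1, 2, 3.5, and 3.5: small demands are met in full and the leftover capacity is split evenly among the rest.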
Read more
USING MACHINE LEARNING TO ENSURE THE CAPACITY SAFETY OF INDIVIDUAL MICROSERVICES
Reliability engineering teams at Uber build the tools, libraries, and infrastructure that enable engineers to operate our thousands of microservices reliably at scale. At its essence, reliability engineering boils down to actively preventing outages that affect the mean time between failures (MTBF). As Uber’s global mobility platform grows, our global scale and complex network of microservice call patterns have made capacity requirements for individual services difficult to predict.
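As a toy illustration of the prediction problem (emphatically not Uber’s approach, which the article covers; the numbers, the per-instance throughput, and the headroom factor below are invented), one could fit a simple trend to observed peak traffic and translate the forecast into an instance count:

    package main

    import (
    	"fmt"
    	"math"
    )

    // fitLine returns slope and intercept of a least-squares line y = a*x + b.
    func fitLine(ys []float64) (a, b float64) {
    	n := float64(len(ys))
    	var sumX, sumY, sumXY, sumXX float64
    	for i, y := range ys {
    		x := float64(i)
    		sumX += x
    		sumY += y
    		sumXY += x * y
    		sumXX += x * x
    	}
    	a = (n*sumXY - sumX*sumY) / (n*sumXX - sumX*sumX)
    	b = (sumY - a*sumX) / n
    	return a, b
    }

    func main() {
    	// Weekly peak requests-per-second observed for one service (made-up numbers).
    	peaks := []float64{1200, 1260, 1340, 1400, 1475, 1530}
    	a, b := fitLine(peaks)

    	// Extrapolate 4 weeks ahead and convert to an instance count,
    	// assuming each instance safely handles ~200 RPS (also made up).
    	forecast := a*float64(len(peaks)+4-1) + b
    	const rpsPerInstance = 200.0
    	instances := math.Ceil(forecast * 1.2 / rpsPerInstance) // 20% headroom
    	fmt.Printf("forecast peak: %.0f RPS, suggested capacity: %.0f instances\n", forecast, instances)
    }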
Read more
HOW A KUBERNETES BUG WON’T LET YOU EXPOSE A SERVICE OVER TCP AND UDP ON THE SAME PORT
Long story short, I wasted hours of my life because of an unfixed Kubernetes bug from 2016 that wouldn’t let me expose a service over both UDP and TCP on the same port. May this article come up in your Google search and save you hours of suffering.
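If the limitation in question is the long-standing restriction on mixing protocols in a single Service of type LoadBalancer (my assumption about which bug is meant), the usual workaround is to split the TCP and UDP ports into two Services that share a selector. A client-go sketch, with placeholder names and selector:

    package main

    import (
    	"context"

    	corev1 "k8s.io/api/core/v1"
    	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    	"k8s.io/client-go/kubernetes"
    	"k8s.io/client-go/tools/clientcmd"
    )

    // service builds a single-protocol Service; splitting TCP and UDP into two
    // Services is the usual way around the mixed-protocol limitation.
    func service(name string, proto corev1.Protocol) *corev1.Service {
    	return &corev1.Service{
    		ObjectMeta: metav1.ObjectMeta{Name: name, Namespace: "default"},
    		Spec: corev1.ServiceSpec{
    			Type:     corev1.ServiceTypeLoadBalancer,
    			Selector: map[string]string{"app": "dns"}, // placeholder selector
    			Ports: []corev1.ServicePort{{
    				Name:     name,
    				Protocol: proto,
    				Port:     53,
    			}},
    		},
    	}
    }

    func main() {
    	config, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
    	if err != nil {
    		panic(err)
    	}
    	client, err := kubernetes.NewForConfig(config)
    	if err != nil {
    		panic(err)
    	}
    	ctx := context.TODO()

    	// One Service per protocol, both on port 53.
    	for _, svc := range []*corev1.Service{
    		service("dns-tcp", corev1.ProtocolTCP),
    		service("dns-udp", corev1.ProtocolUDP),
    	} {
    		if _, err := client.CoreV1().Services("default").Create(ctx, svc, metav1.CreateOptions{}); err != nil {
    			panic(err)
    		}
    	}
    }

Depending on the load-balancer implementation, getting both Services onto the same external IP may require additional provider-specific configuration.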
Read more
AMBASSADOR AND THE CLOUD NATIVE ECOSYSTEM—PART 1: MONITORING
In a Cloud Native world, microservices run in ephemeral containers that are regularly deployed to multiple availability zones, regions, and even multiple clouds. As these cloud native applications become more complex, our supporting solutions, like monitoring, have also had to become more complex. Today, more traditional monitoring responsibilities are being automated, and monitoring has become less human-centric. In the first part of this series, we’ve summarized some of the most popular monitoring solutions that work with Ambassador. Prometheus can be used for real-time monitoring of Ambassador instances. We’re fans of the Prometheus Operator, which automatically creates and manages Prometheus monitoring instances.
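As a minimal illustration of what those metrics look like before Prometheus gets involved (the endpoint URL and the envoy_ metric prefix are assumptions about your setup; with the Prometheus Operator, a ServiceMonitor would normally do the scraping for you):

    package main

    import (
    	"bufio"
    	"fmt"
    	"net/http"
    	"strings"
    	"time"
    )

    func main() {
    	// Assumed metrics endpoint; adjust to wherever your Ambassador/Envoy
    	// instance exposes Prometheus-format stats.
    	const metricsURL = "http://localhost:8877/metrics"

    	client := &http.Client{Timeout: 3 * time.Second}
    	resp, err := client.Get(metricsURL)
    	if err != nil {
    		panic(err)
    	}
    	defer resp.Body.Close()

    	// Print only the Envoy counters/gauges, skipping comments and other series.
    	scanner := bufio.NewScanner(resp.Body)
    	for scanner.Scan() {
    		line := scanner.Text()
    		if strings.HasPrefix(line, "envoy_") { // assumed metric prefix
    			fmt.Println(line)
    		}
    	}
    	if err := scanner.Err(); err != nil {
    		panic(err)
    	}
    }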
Read more
ENVOY AND THE “PROGRAMMABLE EDGE”: THE CHANGING ROLE OF EDGE PROXIES AND DEVELOPER EXPERIENCE
At the inaugural EnvoyCon, which ran alongside KubeCon in Seattle last December, several large organisations, including eBay, Pinterest, and Groupon, discussed how they have recently begun using Envoy as an edge proxy. Moving away from hardware-based load balancers and other edge appliances towards the software-based “programmable edge” provided by Envoy clearly has many benefits, particularly in regard to dynamism and automation. However, one of the core challenges presented was the need to create an effective control plane that integrates well with the existing engineering workflow or developer experience.
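The shape of such a control plane is roughly a reconcile loop: watch a source of truth for service endpoints and push the resulting routing state to the proxy fleet. The sketch below is generic and hypothetical; a real implementation would translate each snapshot into Envoy’s xDS resources (for example via the go-control-plane library) rather than printing it.

    package main

    import (
    	"fmt"
    	"time"
    )

    // Endpoint is one backend instance for a service.
    type Endpoint struct {
    	Address string
    	Port    int
    }

    // Snapshot is the desired routing state at a point in time: service name -> endpoints.
    type Snapshot map[string][]Endpoint

    // watchRegistry stands in for a service-registry watch (Kubernetes, Consul, ...).
    // Here it just emits a couple of hard-coded snapshots.
    func watchRegistry(out chan<- Snapshot) {
    	out <- Snapshot{"checkout": {{Address: "10.0.0.5", Port: 8080}}}
    	time.Sleep(1 * time.Second)
    	out <- Snapshot{"checkout": {
    		{Address: "10.0.0.5", Port: 8080},
    		{Address: "10.0.0.9", Port: 8080},
    	}}
    	close(out)
    }

    // pushToProxies is where a real control plane would build Envoy xDS resources
    // (clusters, endpoints, routes) and serve them to the fleet; here we only print the intent.
    func pushToProxies(s Snapshot) {
    	for svc, eps := range s {
    		fmt.Printf("updating edge proxies: %s -> %v\n", svc, eps)
    	}
    }

    func main() {
    	updates := make(chan Snapshot)
    	go watchRegistry(updates)
    	for snap := range updates {
    		pushToProxies(snap)
    	}
    }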
Read more