VERIFYING SERVICE MESH TLS IN KUBERNETES, USING KSNIFF AND WIRESHARK
Alongside Nic Jackson from HashiCorp, I have recently presented at several conferences and webinars about the need for transport-level encryption that spans end-to-end, or “user to service”, within modern applications. TLS encryption (and termination) for traffic from a user’s browser to the application edge has been a long-standing feature of API gateways, CDNs and edge proxies, but only recently has service mesh technology made implementing TLS for service-to-service traffic a realistic approach for most of us. Many service mesh implementations promise low-touch TLS, allowing operators to enable it with a single config option or a few lines in a YAML file.
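Besides capturing packets with ksniff and inspecting them in Wireshark, a quick way to sanity-check that a service port is actually speaking TLS is a small in-cluster probe with Python's standard ssl module. This is a minimal sketch, not the article's method: the service name and port are hypothetical, and a mesh enforcing mutual TLS may reject the probe at the handshake (because it presents no workload certificate) rather than ever talk plaintext.

```python
# Minimal sketch: connect to a service port from inside the cluster and
# report whether it negotiates TLS, plus the protocol version and cipher.
# HOST/PORT are hypothetical; the sidecar's certificate is usually signed
# by the mesh's own CA, so verification is disabled for this diagnostic
# probe only.
import socket
import ssl

HOST = "payments.default.svc.cluster.local"  # hypothetical service name
PORT = 8443                                  # hypothetical port

context = ssl.create_default_context()
context.check_hostname = False   # must be disabled before verify_mode
context.verify_mode = ssl.CERT_NONE

with socket.create_connection((HOST, PORT), timeout=5) as sock:
    with context.wrap_socket(sock, server_hostname=HOST) as tls:
        print("negotiated:", tls.version())  # e.g. 'TLSv1.3'
        print("cipher:", tls.cipher())       # (name, protocol, secret bits)
```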
Read more

FINNISH COMPANY MAKES FOOD FROM THIN AIR
The impact of the beef — and for that matter, poultry, pork, and fish — industries on our planet is widely recognized as one of the main drivers behind climate change, pollution, habitat loss, and antibiotic-resistant illness. From the cutting down of rainforests for cattle-grazing land, to runoff from factory farming of livestock and plants, to the disruption of the marine food chain, to the overuse of antibiotics in food animals, it’s been disastrous. The advent of a promising source of protein derived from two of the most renewable things we have, CO₂ and sunlight, gets us out of the planet-destruction business at the same time as it offers the promise of a stable, long-term solution to one of the world’s most fundamental nutritional needs.
Read more

KUBERNETES POD AUTOSCALER USING CUSTOM METRICS
In this post we are going to demonstrate how to deploy a Kubernetes autoscaler using a third-party metrics provider. You will learn how to expose any custom metric directly through the Kubernetes API by implementing an extension API service. Dynamic scaling is not a new concept by any means, but implementing your own scaler is a rather complex and delicate task. That’s why the Kubernetes Horizontal Pod Autoscaler (HPA) is such a powerful mechanism: it can help you dynamically adapt your service in a way that is reliable, predictable and easy to configure.
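For context, the HPA consumes custom metrics by polling an aggregated API registered under custom.metrics.k8s.io and reading back a MetricValueList. The sketch below, which is not the post's adapter, only shows the shape of that response from a toy HTTP handler; the metric name and pod name are hypothetical, and a real adapter would implement the full API (discovery, selectors, TLS) rather than a single hard-coded GET.

```python
# Illustrative sketch of the response shape an HPA expects from the custom
# metrics API (custom.metrics.k8s.io/v1beta1). Real adapters are registered
# as an aggregated APIService; "queue_depth" and "queue-worker-0" are
# placeholders.
import json
from datetime import datetime, timezone
from http.server import BaseHTTPRequestHandler, HTTPServer

class CustomMetricsHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        body = {
            "kind": "MetricValueList",
            "apiVersion": "custom.metrics.k8s.io/v1beta1",
            "metadata": {},
            "items": [{
                "describedObject": {
                    "kind": "Pod",
                    "namespace": "default",
                    "name": "queue-worker-0",   # hypothetical pod
                    "apiVersion": "v1",
                },
                "metricName": "queue_depth",    # hypothetical metric
                "timestamp": datetime.now(timezone.utc).isoformat(),
                "value": "42",                  # quantities are strings
            }],
        }
        payload = json.dumps(body).encode()
        self.send_response(200)
        self.send_header("Content-Type", "application/json")
        self.send_header("Content-Length", str(len(payload)))
        self.end_headers()
        self.wfile.write(payload)

if __name__ == "__main__":
    # serve the sample response locally for inspection
    HTTPServer(("", 8080), CustomMetricsHandler).serve_forever()
```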
Read more

TEACHING COMPUTERS TO ANSWER COMPLEX QUESTIONS
Computerized question-answering systems usually take one of two approaches. Either they do a text search and try to infer the semantic relationships between entities named in the text, or they explore a hand-curated knowledge graph, a data structure that directly encodes relationships among entities. With complex questions, however — such as “Which Nolan films won an Oscar but missed a Golden Globe?” — both of these approaches run into difficulties.
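To see why explicit relations help with compositional questions, here is a toy sketch (not the system described in the article) where the example question reduces to two lookups and a set difference over a hand-built graph. The facts are illustrative placeholders, not a real film database.

```python
# Toy knowledge graph: complex questions compose naturally over explicit
# relations. "Which Nolan films won an Oscar but missed a Golden Globe?"
# becomes an intersection and a set difference. Facts are placeholders.
directed_by = {
    "Christopher Nolan": {"Inception", "Dunkirk", "Interstellar"},
}
won_award = {
    "Oscar": {"Inception", "Dunkirk", "Interstellar"},
    "Golden Globe": {"Inception"},
}

nolan_films = directed_by["Christopher Nolan"]
answer = (nolan_films & won_award["Oscar"]) - won_award["Golden Globe"]
print(sorted(answer))
```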
Read more

FIRST PROGRAMMABLE MEMRISTOR COMPUTER
Michigan team builds memristors atop standard CMOS logic to demo a system that can do a variety of edge computing AI tasks. Hoping to speed AI and neuromorphic computing and cut down on power consumption, startups, scientists, and established chip companies have all been looking to do more computing in memory rather than in a processor’s computing core. Memristors and other nonvolatile memories seem to lend themselves to the task particularly well. However, most demonstrations of in-memory computing have been in standalone accelerator chips that are either built for a particular type of AI problem or need the off-chip resources of a separate processor in order to operate.
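The appeal of a memristor crossbar is that the multiply-accumulate happens where the data lives: the weight matrix is stored as device conductances, the input vector is applied as row voltages, and Ohm's and Kirchhoff's laws sum the resulting column currents. The numerical model below is only an illustration of that idea, with arbitrary values, not a model of the Michigan chip.

```python
# Illustrative model of in-memory computing on a memristor crossbar:
# column currents I = V @ G, where G holds the stored weights as
# conductances and V is the applied input vector. Values are arbitrary.
import numpy as np

rng = np.random.default_rng(0)
G = rng.uniform(1e-6, 1e-4, size=(4, 3))  # conductances (siemens), 4x3 crossbar
V = np.array([0.1, 0.2, 0.0, 0.3])        # row voltages (volts)

I = V @ G                                  # column currents (amperes)
print(I)
```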
Read more

EVOLUTION OF NETFLIX CONDUCTOR
Conductor is a workflow orchestration engine developed and open-sourced by Netflix. If you’re new to Conductor, this earlier blogpost and the documentation should help you get started and acclimatized to Conductor. In the two years since its inception, Conductor has seen wide adoption and has become instrumental in running numerous core workflows at Netflix. Many of the Netflix Content and Studio Engineering services rely on Conductor for efficient processing of their business flows. The Netflix Media Database (NMDB) is one such example. In this blog, we would like to present the latest updates to Conductor, address some of the frequently asked questions and thank the community for their contributions.
Read more

THE TRAITS OF SERVERLESS ARCHITECTURE
Whenever a new technology emerges, the first priority for a technologist is to understand the implications of adopting it. Serverless architecture is a case in point. Unfortunately, much of the current literature around serverless architecture focuses solely on its benefits. Many of the articles, and the examples they use, come from cloud providers, so they unsurprisingly talk up the positives. This write-up attempts to give a better understanding of the traits of serverless architecture. I’ve deliberately chosen the word trait, and not characteristic, because these are the elements of serverless architecture that you can’t change.
Read more

SUPERCHARGING DATA DELIVERY: THE NEW LEAGUE PATCHER
For the past 8 years, League has been using a patching system called RADS (Riot Application Distribution System) to deliver updates. RADS is a custom patching solution based on binary deltas that we built with League in mind. While RADS has served us well, we felt we had an opportunity to improve some key areas of the patching experience. We knew we could deliver updates much more quickly and more reliably by using a fundamentally different approach to patching, so we set out to build a brand new patcher based on content-defined chunking. To compare our old and new patching solutions under the same conditions, we’ve been rolling out the new patcher incrementally over the past several months. This has allowed us to validate our assumptions about the effectiveness of a content-defined chunking approach.
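The key property of content-defined chunking is that chunk boundaries come from the bytes themselves, so an insertion early in a file only disturbs the chunks around the edit and everything downstream keeps the same, cacheable chunk IDs. The sketch below is a generic illustration of that idea, not Riot's patcher: window size, mask, and minimum chunk size are arbitrary, and a production chunker would use an O(1) rolling hash rather than re-hashing a window at every byte.

```python
# Sketch of content-defined chunking: cut wherever a fingerprint of the
# trailing window matches a mask, then show that a small insertion leaves
# most chunk hashes unchanged. Parameters are illustrative.
import hashlib
import os

WINDOW = 48               # bytes fingerprinted at each position
MASK = (1 << 13) - 1      # cut roughly every 8 KiB past the minimum
MIN_CHUNK = 2048          # avoid pathologically small chunks

def chunks(data: bytes):
    start = 0
    for i in range(len(data)):
        if i - start < MIN_CHUNK:
            continue
        # fingerprint of the trailing window decides whether this is a cut
        # point; real chunkers roll this hash in O(1) per byte
        fp = int.from_bytes(hashlib.sha256(data[i - WINDOW:i]).digest()[:4], "big")
        if (fp & MASK) == MASK:
            yield data[start:i]
            start = i
    if start < len(data):
        yield data[start:]

if __name__ == "__main__":
    original = os.urandom(200_000)
    edited = original[:100] + b"PATCH" + original[100:]   # small insertion
    ids_old = {hashlib.sha256(c).hexdigest() for c in chunks(original)}
    ids_new = {hashlib.sha256(c).hexdigest() for c in chunks(edited)}
    print(len(ids_new), "chunks;", len(ids_old & ids_new), "unchanged after the edit")
```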
Read more

EGG: A TOOLKIT FOR LANGUAGE EMERGENCE SIMULATIONS WITH NEURAL NETWORKS
EGG is a new toolkit that allows researchers and developers to quickly create game simulations in which two neural network agents devise their own discrete communication system in order to solve a task together. For example, in one of the implemented games, one agent sees a handwritten digit and has to invent a communication code to tell the other agent which number it represents. A lively area of machine learning (ML) research, language emergence would benefit from a more interdisciplinary approach.
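To make the game setup concrete, here is a minimal signaling-game sketch in plain NumPy, not the EGG API: a sender sees one of a few "concepts" and emits a single discrete symbol, a receiver sees only the symbol and must guess the concept, and both are rewarded when the guess is right, so a shared code has to emerge. The sizes, learning rate, and episode count are arbitrary illustrative choices; EGG itself builds such games from neural network agents.

```python
# Minimal one-symbol signaling game trained with REINFORCE over two
# tabular categorical policies (sender and receiver). Illustrative only.
import numpy as np

rng = np.random.default_rng(0)
N_CONCEPTS, VOCAB = 5, 5
sender = np.zeros((N_CONCEPTS, VOCAB))     # sender policy logits
receiver = np.zeros((VOCAB, N_CONCEPTS))   # receiver policy logits
lr, baseline, wins = 0.5, 0.0, []

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

for episode in range(20_000):
    concept = rng.integers(N_CONCEPTS)

    p_msg = softmax(sender[concept])
    msg = rng.choice(VOCAB, p=p_msg)            # sender picks a symbol

    p_guess = softmax(receiver[msg])
    guess = rng.choice(N_CONCEPTS, p=p_guess)   # receiver decodes it

    reward = 1.0 if guess == concept else 0.0
    advantage = reward - baseline
    baseline += 0.01 * (reward - baseline)      # running-mean baseline

    # REINFORCE: nudge logits toward actions that beat the baseline
    sender[concept] += lr * advantage * (np.eye(VOCAB)[msg] - p_msg)
    receiver[msg] += lr * advantage * (np.eye(N_CONCEPTS)[guess] - p_guess)
    wins.append(reward)

print("accuracy over last 1000 games:", np.mean(wins[-1000:]))
```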
Read more

ANNOUNCING HASHICORP VAULT 1.2
We are excited to announce the public availability of HashiCorp Vault 1.2. Vault is a tool that provides secrets management, data encryption, and identity management for any infrastructure and application. Vault 1.2 is focused on supporting new architectures for automated credential and cryptographic key management at a global, highly distributed scale. This release introduces new mechanisms for users and applications to manage sensitive data such as cryptographic keys and database accounts, and exposes new interfaces that improve Vault’s ability to automate secrets management, encryption as a service, and privileged access management. Highlights include:

- KMIP Server Secret Engine (Vault Enterprise only): allows Vault to serve as a KMIP server for automating secrets management and encryption-as-a-service workflows with enterprise systems.
- Integrated Storage (tech preview): manages Vault’s secure storage of persistent data without an external storage backend, supporting High Availability and Replication.
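As a taste of the basic secrets-management workflow, the sketch below uses the third-party hvac Python client against a local dev-mode Vault (vault server -dev). The address, token, and secret path are placeholders, and the KMIP secret engine and Integrated Storage mentioned above are server-side features that this snippet does not exercise.

```python
# Minimal sketch: write and read a KV v2 secret via the hvac client.
# URL, token, and path are placeholders for a local dev-mode Vault.
import hvac

client = hvac.Client(url="http://127.0.0.1:8200", token="dev-only-token")
assert client.is_authenticated()

# write a secret into the KV v2 engine mounted at the default "secret/" path
client.secrets.kv.v2.create_or_update_secret(
    path="myapp/database",
    secret={"username": "app", "password": "example-password"},
)

# read it back; KV v2 nests the payload under data -> data
read = client.secrets.kv.v2.read_secret_version(path="myapp/database")
print(read["data"]["data"]["username"])
```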
Read more