Simplifying Kubernetes with Automation – The New Stack

VALENCIA — Managing the cloud virtual machines (VMs) on which your containers run. Running data-intensive workloads. Scaling services in response to traffic spikes without increasing your organization’s cloud spend. Kubernetes (K8s) seems easy at first, but it brings challenges that grow in complexity as you go.

The cloud-native ecosystem is filling with tools aimed at easing these challenges for developers, data scientists and ops engineers. Increasingly, automation is the secret sauce that helps teams and their businesses work faster, safer and more productively.

In this special On the Road edition of The New Stack Makers podcast, recorded at KubeCon + CloudNativeCon EU, we unpacked some of the ways automation helps simplify Kubernetes. We were joined by a trio of guests from Spot by NetApp: Jean-Yves “JY” Stephan, senior product manager for Ocean for Apache Spark, along with Gilad Shahar and Yarin Pinyan, product manager and product architect, respectively.

Simplify Kubernetes with Automation

Also available on Apple Podcasts, Google Podcasts, Overcast, PlayerFM, Pocket Casts, Spotify, Stitcher, TuneIn

Until recently, Stephan noted, Apache Spark, the open source unified analytics engine for large-scale data processing, couldn’t be deployed on K8s. “So all these regular software engineers were getting cool technology with Kubernetes, cloud-native solutions,” he said. “And the big data engineers, they were stuck with technologies from 10 years ago.” Ocean for Apache Spark, he said, enables Spark to run on Kubernetes: “It’s much more developer-friendly, it’s much more flexible, and it can also be more cost-effective.”
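To make the Spark-on-Kubernetes idea concrete, here is a minimal sketch (not from the podcast) of how a Spark job is typically submitted to a Kubernetes cluster using `spark-submit`’s `k8s://` master URL. The API server address, container image, and application path below are placeholders.

```python
# Illustrative sketch: assembling a spark-submit command that targets
# Kubernetes. The driver and executors run as pods in the cluster.

def build_spark_submit(api_server, image, app_jar, executors=3):
    """Return a spark-submit command list targeting a Kubernetes cluster."""
    return [
        "spark-submit",
        "--master", f"k8s://{api_server}",   # schedule on Kubernetes
        "--deploy-mode", "cluster",          # the driver runs in a pod
        "--conf", f"spark.executor.instances={executors}",
        "--conf", f"spark.kubernetes.container.image={image}",
        app_jar,
    ]

cmd = build_spark_submit("https://10.0.0.1:6443",
                         "my-org/spark:3.4",
                         "local:///opt/app.jar")
print(" ".join(cmd))
```

In a real pipeline this command would be executed against a live cluster; here it is only constructed, to show the shape of the configuration involved.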

The company’s Ocean CD, which is expected to be generally available in August, aims to address another Kubernetes problem, Pinyan said: canary deployments.

“Previously, if you were running regular VMs, without Kubernetes, it was pretty easy to do canary deployments, because you had to scale one VM, then see if the new version was working fine, and then gradually scale the others,” he said. “In Kubernetes, it’s quite complex, because you have to manage many pods and deployments.”
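The gradual rollout Pinyan describes can be sketched as a simple loop: shift a growing share of replicas to the new version, gating each step on a health check. This is an illustrative simulation only, not Ocean CD’s implementation; the step percentages and the `healthy` callback stand in for whatever metrics (error rate, latency) a real pipeline would evaluate.

```python
# Hedged sketch of canary-rollout logic, simulated as replica counts.

def canary_rollout(total_replicas, steps, healthy):
    """Shift replicas to the canary step by step, rolling back on failure."""
    canary = 0
    for step in steps:                            # e.g. 10% -> 50% -> 100%
        canary = round(total_replicas * step / 100)
        if not healthy():                         # gate each step on a check
            return ("rolled_back", 0)             # revert to the stable version
    return ("promoted", canary)

status, replicas = canary_rollout(10, [10, 50, 100], healthy=lambda: True)
print(status, replicas)  # promoted 10
```

The complexity Pinyan points to is that, on Kubernetes, each of these steps means reconciling multiple pods and Deployment objects rather than scaling a single VM.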

At companies where DevOps and SRE team members are likely serving a multitude of developers, it’s critical to automate as much work as possible for those developers, Shahar said. For example, Spot’s tools allow users to “split the configuration into pieces,” he said, putting developers in charge of the portion of the configuration they know best for their use case.
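The “split the configuration into pieces” pattern can be pictured as layered settings: DevOps owns the defaults, and each application team overrides only the pieces it knows best. The function and field names below are illustrative, not Spot’s API.

```python
# Illustrative sketch: DevOps-owned defaults merged with per-app overrides.

def render_config(devops_defaults, app_overrides):
    """Developers supply only what they know best; the rest comes pre-made."""
    merged = dict(devops_defaults)   # start from the shared baseline
    merged.update(app_overrides)     # developer choices win for their keys
    return merged

cfg = render_config({"cpu": "500m", "replicas": 2, "logging": "json"},
                    {"replicas": 5})
print(cfg)
```

This mirrors the division of labor Shahar describes: fix things once centrally, and let each developer tune only the knobs their application needs.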

“We try to design our solutions in a way that enables DevOps [team] to fix things once and basically provide pre-made solutions to developers,” he said. “Because the developer, at the end of the day, knows better than anyone what their application will need.”

