infrastructure-as-code iac kubernetes crossplane

Getting my feet wet with Crossplane

In the early days of IT, we manually configured servers: each one was a precious snowflake, lovingly maintained and documented. But as infrastructure grew, this approach couldn’t scale. Chef and Puppet popularized the idea of Infrastructure-as-Code: engineers would define the desired state of the machines in text files stored in Git, hence the name. A central node would read these files to build a registry; then, a local agent on each machine would check the desired state at regular intervals
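
A minimal sketch of that agent model, in Java for illustration only; the state file path and the read/apply helpers are hypothetical:

```java
import java.nio.file.Files;
import java.nio.file.Path;

// Naive configuration-management agent: read the desired state,
// compare it with the actual state, and converge if they differ.
public class Agent {

    public static void main(String[] args) throws Exception {
        while (true) {
            String desired = Files.readString(Path.of("/var/lib/agent/desired-state.yaml")); // hypothetical path
            String actual = readActualState();
            if (!desired.equals(actual)) {
                apply(desired);              // converge towards the desired state
            }
            Thread.sleep(5 * 60 * 1000);     // check again at regular intervals
        }
    }

    private static String readActualState() { /* inspect packages, files, services... */ return ""; }

    private static void apply(String desired) { /* install packages, write config files... */ }
}
```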

opentelemetry vcluster kubernetes

Even more OpenTelemetry - Kubernetes special

I have presented my OpenTelemetry demo many times, and I still do. Each time, the audience is different. To make sure no two presentations are the same, I always ask attendees which stack they are most interested in. I also regularly add new features for the same reason. I was a victim of the IT crisis last summer, and my company fired me, so I no longer work on Apache APISIX. They say that the Chinese ideogram for crisis contains the ideogram for opportunity. I used this opportunity to join LoftLabs. Lo

vcluster devops kubernetes sizing

One giant Kubernetes cluster for everything

The ideal size of your Kubernetes clusters is a day-0 question that demands a definite answer. One giant cluster sits at one end of the spectrum and many small clusters at the other, with every combination in between. This decision will impact your organization for years to come. Worse, if you later decide to change your topology, you’re in for a time-wasting and expensive ride. In this post, I want to list each approach’s pros and cons. Then, I’ll settle the discussion once and for all

wasm webassembly kubernetes wasmedge

WebAssembly on Kubernetes

As with many innovative technologies, different people have different viewpoints on where WebAssembly fits in the technology landscape. WebAssembly (also called Wasm) is certainly the subject of much hype right now. But what is it? Is it the JavaScript killer? Is it a new programming language for the web? Is it (as we like to say) the next wave of cloud compute? We’ve heard it called many things: a better eBPF, an alternative to RISC-V, a competitor to Java (or Flash), a performance booster

devops testing integration testing kubernetes vcluster

Pull request testing on Kubernetes: vCluster for isolation and costs control

This week’s post is the third and final in my series about running tests on Kubernetes for each pull request. In the first post, I described the app and how to test locally using Testcontainers and in a GitHub workflow. The second post focused on setting up the target environment and running end-to-end tests on Kubernetes. I concluded the latter by mentioning a significant quandary. Creating a dedicated cluster for each workflow significantly impacts the time it takes to run. On GKE, it t

devops testing kubernetes google cloud gke github actions vcluster

Pull Request testing on Kubernetes: working with GitHub Actions and GKE

I’m continuing my series on running the test suite for each Pull Request on Kubernetes. In the previous post, I laid the groundwork for our learning journey: I developed a basic JVM-based CRUD app, tested it locally using Testcontainers, and tested it in a GitHub workflow with a GitHub service container. This week, I will up the ante and run the end-to-end test in the target Kubernetes environment. For this, I’ve identified gaps that I’ll address in this blog post: Create

kubernetes crd vcluster

A solution to the problem of cluster-wide CRDs

I’m an average Reddit user, scrolling much more than reading or interacting. Sometimes, however, a post rings a giant red bell. When I stumbled upon "If you could add one feature to K8s, what would it be?", I knew the content would be worth it. The most voted answer is: namespace-scoped CRDs. A short intro to CRDs: Kubernetes comes packed with existing objects, such as Pod, Service, DaemonSet, etc., but you can create your own; the latter are called Custom Resource Definitions. Most o
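
As a hedged illustration with the fabric8 Java client (the Widget kind and its group are made up): the CRDs themselves are registered cluster-wide, while their instances can live in a namespace, which is exactly the tension the post explores.

```java
import io.fabric8.kubernetes.client.KubernetesClient;
import io.fabric8.kubernetes.client.KubernetesClientBuilder;

public class CrdTour {
    public static void main(String[] args) {
        try (KubernetesClient client = new KubernetesClientBuilder().build()) {
            // CRDs are cluster-wide objects...
            client.apiextensions().v1().customResourceDefinitions().list().getItems()
                  .forEach(crd -> System.out.println(crd.getMetadata().getName()));
            // ...while their instances may be namespaced ('Widget' is a hypothetical CRD)
            client.genericKubernetesResources("blog.example.com/v1", "Widget")
                  .inNamespace("team-a").list().getItems()
                  .forEach(w -> System.out.println(w.getMetadata().getName()));
        }
    }
}
```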

container kubernetes debug devops

Fearless Distroless

With the rise of Docker came a new focus for engineers: optimizing the build to reach the smallest possible image size. A couple of options are available. Multi-stage builds: a Dockerfile can consist of multiple stages, each based on a different Docker base image. Each stage can copy files from any of the previous ones. Only the last stage receives a tag; the others are left untagged. This approach separates one or more build stages from a run stage. On the JVM, it means that the first s

devops kubernetes

Introduction to Kubernetes extensibility

Kubernetes offers a lot of benefits: an enormous ecosystem with plenty of actors, self-healing capabilities, etc. There’s no free lunch, though. It also comes with downsides, chief among them its complexity and operating costs. However, the more I work with Kubernetes, the more I think its most significant asset is extensibility. If you need something that the platform doesn’t provide by default, there’s an option to develop it yourself and integrate it. In this post, I’

kubernetes gateway api apache apisix

A quick glance at the Kubernetes Gateway API

In one of my recent blog posts, I described several ways to access Kubernetes pods. One can access a pod through its IP, but pods are naturally transient. The nominal way is to configure a Service: its IP is stable, and Kubernetes’ job is to keep the mapping between a Service and its underlying pods up-to-date. Different kinds of Services are available: internal-only, NodePort, which allows access from outside the cluster, and LoadBalancer, which relies on a third-party component - in general, a
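
For illustration, a minimal fabric8 sketch that declares such a Service; the name, labels, and ports are hypothetical, and the selector is what Kubernetes uses to keep the Service-to-pods mapping current:

```java
import java.util.Map;

import io.fabric8.kubernetes.api.model.IntOrString;
import io.fabric8.kubernetes.api.model.Service;
import io.fabric8.kubernetes.api.model.ServiceBuilder;
import io.fabric8.kubernetes.client.KubernetesClient;
import io.fabric8.kubernetes.client.KubernetesClientBuilder;

public class ExposeApp {
    public static void main(String[] args) {
        Service service = new ServiceBuilder()
            .withNewMetadata().withName("my-app").endMetadata()
            .withNewSpec()
                .withSelector(Map.of("app", "my-app"))                           // matches the pods' labels
                .addNewPort().withPort(80).withTargetPort(new IntOrString(8080)).endPort()
                .withType("ClusterIP")                                           // internal-only; NodePort/LoadBalancer expose it further
            .endSpec()
            .build();
        try (KubernetesClient client = new KubernetesClientBuilder().build()) {
            client.services().inNamespace("default").resource(service).create();
        }
    }
}
```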

kubernetes security good practices

Learning by auditing Kubernetes manifests

Last year, I spoke at the National DevOps Conference, which took place at the British Museum. I had already visited the museum before, but speaking there was a fantastic experience. Besides, we had the museum all to ourselves for a couple of hours. If you’ve ever visited the place, you know what I mean. Anyway, I also attended a talk about Checkov: Checkov scans cloud infrastructure configurations to find misconfigurations before they’re deployed. Checkov uses a common command

kubernetes apache web server demo

Port management in your local Kubernetes cluster

Most of my talks contain a demo. A fair share of these demos require multiple 'infrastructure' dependencies: a database (or more), Elasticsearch, you name it. To ease my setup and avoid cluttering my machine, I use either Docker Compose or Kubernetes locally on my Mac. Both rely on Docker Desktop. To expose a cluster Service on my host, I use nodePort; hence, I set a dedicated node port for each service. I need to remember each of them for each demo. Worse, services might be (are) declared across
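
As a hedged side note, not from the post itself: a small fabric8 snippet that lists every NodePort currently claimed in the local cluster, which is the bookkeeping I’d otherwise have to do from memory.

```java
import io.fabric8.kubernetes.client.KubernetesClient;
import io.fabric8.kubernetes.client.KubernetesClientBuilder;

public class ListNodePorts {
    public static void main(String[] args) {
        try (KubernetesClient client = new KubernetesClientBuilder().build()) {
            // Walk all Services in all namespaces and print the node ports they reserve
            client.services().inAnyNamespace().list().getItems().stream()
                  .filter(svc -> "NodePort".equals(svc.getSpec().getType()))
                  .forEach(svc -> svc.getSpec().getPorts().forEach(port ->
                      System.out.printf("%s/%s -> %d%n",
                          svc.getMetadata().getNamespace(),
                          svc.getMetadata().getName(),
                          port.getNodePort())));
        }
    }
}
```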

rust controller kubernetes

A Rust controller for Kubernetes

This is the 6th post in the Start Rust focus series. To teach myself Kubernetes in general and controllers in particular, I previously developed one in Java. This week, I decided to do the same in Rust, following the same steps. The guiding principle is the creation of a Kubernetes controller that watches pods’ lifecycle and 'binds' a sidecar to them: when the main pod is scheduled, the controller schedules the sidecar; when the main pod is deleted, the controller deletes the sidecar as well.

kubernetes minikube kind

Goodbye minikube

I’ve been using minikube as my local cluster since I started learning Kubernetes. But I’ve decided to let it go in favor of kind. Here’s the story. A couple of weeks ago, I gave my talk on Zero Downtime on Kubernetes. As with most of my presentations, the talk includes a demo. While rehearsing in the morning, the demo worked, albeit slowly. Two days before that, I had another demo that also used Kubernetes, and it was already slow. But I didn’t take the hint. Duri

kubernetes controller java graalvm

Your own Kubernetes controller - Improving and deploying

In the first post of this series, we described the concept behind a Kubernetes controller. In short, it’s just a plain control loop that reconciles the desired state of the cluster with its current state. In the second post, we implemented a sidecar controller in Java. This third and last post focuses on where to deploy this Java controller and how to improve it to be on par with a Go one. Running outside the cluster or inside? As mentioned in the first post, there’s no re
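
One detail worth noting on the "outside or inside" question, as a general fabric8 behavior rather than a claim about the post’s implementation: the client auto-configures itself from the local kubeconfig when running outside the cluster and from the mounted service account when running inside, so the same code runs in both places.

```java
import io.fabric8.kubernetes.client.KubernetesClient;
import io.fabric8.kubernetes.client.KubernetesClientBuilder;

public class WhereAmI {
    public static void main(String[] args) {
        // Outside the cluster, the client reads ~/.kube/config;
        // inside, it falls back to the pod's service account token and CA bundle.
        try (KubernetesClient client = new KubernetesClientBuilder().build()) {
            System.out.println("API server: " + client.getMasterUrl());
        }
    }
}
```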

kubernetes controller java fabric8 sidecar

Your own Kubernetes controller - Developing in Java

In the previous post, we laid out the foundations to create our own custom Kubernetes controller. We detailed what a controller is and that its only requirement is to communicate via HTTP/JSON. In this post, we are finally going to start developing it. The technology stack could be Python, NodeJS, or Ruby; because this blog is named 'A Java Geek', it’s only natural to choose Java. As a use case, we will implement the sidecar pattern: every time a pod gets scheduled, a sidecar pod w
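
A rough sketch of that pattern with the fabric8 client, assuming a fixed namespace and a placeholder busybox sidecar image; the actual post goes much further:

```java
import io.fabric8.kubernetes.api.model.Pod;
import io.fabric8.kubernetes.api.model.PodBuilder;
import io.fabric8.kubernetes.client.KubernetesClient;
import io.fabric8.kubernetes.client.KubernetesClientBuilder;
import io.fabric8.kubernetes.client.Watcher;
import io.fabric8.kubernetes.client.WatcherException;

public class SidecarController {
    public static void main(String[] args) throws InterruptedException {
        KubernetesClient client = new KubernetesClientBuilder().build();
        client.pods().inNamespace("default").watch(new Watcher<Pod>() {
            @Override
            public void eventReceived(Action action, Pod pod) {
                String name = pod.getMetadata().getName();
                if (name.endsWith("-sidecar")) {
                    return;                                        // don't react to our own sidecars
                }
                String sidecarName = name + "-sidecar";
                switch (action) {
                    case ADDED -> client.pods().inNamespace("default").resource(new PodBuilder()
                            .withNewMetadata().withName(sidecarName).endMetadata()
                            .withNewSpec()
                                .addNewContainer().withName("sidecar").withImage("busybox").endContainer()
                            .endSpec()
                            .build()).create();
                    case DELETED -> client.pods().inNamespace("default").withName(sidecarName).delete();
                    default -> { /* MODIFIED, ERROR: ignored in this sketch */ }
                }
            }

            @Override
            public void onClose(WatcherException cause) {
            }
        });
        Thread.currentThread().join();                             // keep the watch alive
    }
}
```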

kubernetes controller operator

Your own Kubernetes controller - Laying out the work

It’s hard nowadays to ignore Kubernetes. It has become the ubiquitous platform of choice to deploy containerized applications. In a few years, Kubernetes has entrenched itself deeply in the DevOps landscape under the tutelage of the Cloud Native Computing Foundation. One could speculate about the reasons. IMHO, one very compelling argument is that it allows users to be independent of the API of a single cloud provider. If you’ve been living under the monopoly of Microsoft on the deskt

kubernetes container initialization configuration

The versatility of Kubernetes' initContainer

There are a lot of different ways to configure containers running on Kubernetes: environment variables, config maps, volumes shared across multiple pods, arguments passed to scheduled pods, etc. Each of those alternatives fits a specific context, with specific requirements. For example, none of them allows you to clone a Git repository before the container starts. It would be possible to design that feature into the image itself. Yet, that would introduce coupling and defeat the Single Responsibility principle
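
To illustrate that last use case, a hedged fabric8 sketch where the image, repository URL, and paths are all hypothetical: an init container clones a repository into a volume shared with the main container, which only starts once the clone has succeeded.

```java
import io.fabric8.kubernetes.api.model.Pod;
import io.fabric8.kubernetes.api.model.PodBuilder;
import io.fabric8.kubernetes.client.utils.Serialization;

public class GitClonePod {
    public static void main(String[] args) {
        Pod pod = new PodBuilder()
            .withNewMetadata().withName("git-clone-demo").endMetadata()
            .withNewSpec()
                .addNewInitContainer()
                    .withName("clone")
                    .withImage("alpine/git")                                                // hypothetical cloning image
                    .withArgs("clone", "https://github.com/example/repo.git", "/work")      // hypothetical repository
                    .addNewVolumeMount().withName("work").withMountPath("/work").endVolumeMount()
                .endInitContainer()
                .addNewContainer()
                    .withName("app")
                    .withImage("nginx")                                                     // the main container sees the cloned repo
                    .addNewVolumeMount().withName("work").withMountPath("/work").endVolumeMount()
                .endContainer()
                .addNewVolume().withName("work").withNewEmptyDir().endEmptyDir().endVolume()
            .endSpec()
            .build();
        System.out.println(Serialization.asYaml(pod));               // print the manifest instead of applying it
    }
}
```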