
Kubernetes: A Rolling Deployment...



This is the last module of our beginner’s tutorial! My how time flies.

So, what’s the last piece of knowledge?

Rolling updates, which allow Deployment updates to take place with no downtime.

How?



By incrementally replacing Pod instances with new ones, scheduled on Nodes with available resources.

Earlier, we scaled our application to run multiple instances; that’s what lets us update it without affecting availability.

The default maximum number of Pods that can be unavailable during an update is one (1), but you can set that to another number or even a percentage of the desired Pods.
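If you want to turn those knobs yourself, the strategy lives on the Deployment spec. A minimal sketch, assuming the kubernetes-bootcamp Deployment from this tutorial and purely illustrative values:

kubectl patch deployment kubernetes-bootcamp -p '{"spec":{"strategy":{"rollingUpdate":{"maxUnavailable":1,"maxSurge":"25%"}}}}'

Here maxUnavailable caps how many Pods can be down at once, and maxSurge caps how many extra Pods can be created above the desired count during the update.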

Kubernetes tracks each Deployment update as a new revision, so an update can be reverted if it doesn’t work out so nicely.

What if said Deployment is exposed publicly? Then the Service will load balance traffic only to the Pods that are available (aka, not being updated at that exact moment).

Rolling updates let you promote an application from one environment to another via container image updates, roll back to a previous version, and keep up the continuous integration and delivery of applications - no downtime!

Deployments and Pods - Still 4/4!
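For the record, this is roughly how I checked (assuming the Deployment is still named kubernetes-bootcamp and scaled to four replicas); the exact output will vary:

kubectl get deployments
kubectl get pods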



kubectl describe pods gives us a lot of information, but somewhere in that infodump it tells us that our Pods are running version 1 of the image - we want to update them.
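If you only care about the image lines in that infodump, a quick grep trims it down; a sketch:

kubectl describe pods | grep Image: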

Changed the layout!
Seems like we updated the Pods to a new image published under jocatalin. See what happens next!
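The exact command isn’t in the text here, but in the bootcamp tutorial the update is done with kubectl set image, pointing the Deployment at the v2 image from the jocatalin repository - which is why that name shows up in the Pod details:

kubectl set image deployments/kubernetes-bootcamp kubernetes-bootcamp=jocatalin/kubernetes-bootcamp:v2

After that, kubectl get pods shows the old Pods terminating while the new ones spin up.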









Let’s make sure the update is working:

Enjoy stats such as:

  • Name (kubernetes-bootcamp)
  • The node port (<unset> 31139/TCP)
  • The IP and port (10.101.170.210 and <unset> 8080/TCP).
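Those stats most likely come from describing the Service; something like:

kubectl describe services/kubernetes-bootcamp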


Make an environment variable called NODE_PORT with the port assigned.
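In the bootcamp tutorial this is done with a go-template query against the Service; a sketch, assuming the Service is still called kubernetes-bootcamp:

export NODE_PORT=$(kubectl get services/kubernetes-bootcamp -o go-template='{{(index .spec.ports 0).nodePort}}')
echo NODE_PORT=$NODE_PORT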



And there’s our variable with the corresponding number.

curl $(minikube ip):$NODE_PORT
This hits a different Pod with every request - and all the Pods are running the latest version.

Let’s also confirm with a rollout status command:

kubectl rollout status deployments/kubernetes-bootcamp

The response we get is essentially “The deployment is out!”

Perhaps the update isn’t what we wanted, so let’s roll it back.

The version is updated to 10, so let’s see our Deployments.
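The command isn’t shown in text here, but in the bootcamp tutorial the deliberately broken update looks roughly like this (there is no v10 image to pull):

kubectl set image deployments/kubernetes-bootcamp kubernetes-bootcamp=gcr.io/google-samples/kubernetes-bootcamp:v10
kubectl get deployments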



Hmm. One Pod too many - the rolling update spins up new Pods before all the old ones are gone, and the new ones are stuck. That’s more resources than we need.

Using the describe command gives us a lot of information, but in short: there’s no image called v10 in the repository.
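To see it yourself, list the Pods (some will be stuck in an image-pull error state) and then describe them; roughly:

kubectl get pods
kubectl describe pods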



So let’s roll it back with the rollout undo command to go back to v2, and there are four Pods once again.
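A sketch of that rollback, again assuming the Deployment name from this tutorial:

kubectl rollout undo deployments/kubernetes-bootcamp
kubectl get pods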


