
Kubernetes: Are You Being Serviced?

In Pokemon, one of my favorites was Paras, a little bug with two little mushrooms on its back.

It evolved into Parasect: the body now just a shell, controlled by the parasitic mushrooms.

I was trying to make an allegory with today’s subject matter but it doesn’t quite fit.

 If a Worker Node dies, the Pods running on it die too.

A Replication Controller ensures that a specific number of Pod replicas is always up and running, creating new Pods in situations like this.

Remember that each Pod in a Cluster has a unique IP, even Pods on the same Node. So when Pods die and are replaced, how do they let everyone else know about the changes so everything keeps working?





Services are abstractions defining a logical set of Pods and a policy on how to access them, and are defined using YAML or JSON (human-readable languages).

They also allow your applications to receive traffic, because IP addresses aren’t exposed outside the cluster without them.

Exposure happens in different ways.

* ClusterIP, the default, exposes the Service on an internal IP, reachable only from within the cluster.
* NodePort exposes the Service on the same port of each Node using NAT, reachable at [<NodeIP>:<NodePort>].
* LoadBalancer creates an external load balancer in the current cloud (support varies) with a fixed external IP.
* ExternalName maps the Service to an arbitrary name (specified by [externalName]) by returning a CNAME record; no proxying is set up.
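As a sketch, the first three types map onto kubectl expose like this (the deployment name my-app is hypothetical; ExternalName can't be created with expose and needs a YAML manifest instead):

```shell
# ClusterIP (default): internal-only virtual IP
kubectl expose deployment/my-app --port 8080

# NodePort: additionally opens the same high port on every Node
kubectl expose deployment/my-app --type=NodePort --port 8080

# LoadBalancer: asks the cloud provider for an external IP
kubectl expose deployment/my-app --type=LoadBalancer --port 8080
```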

Services loosely couple Pods to their consumers, and the set of Pods is usually targeted by a LabelSelector. If you create a Service without a selector, Kubernetes won't create the corresponding Endpoints object automatically, which lets you manually map the Service to specific endpoints.

(Or you're using [type: ExternalName].)

But now there are LABELS, the bits that match a set of Pods and allow logical operations on Kubernetes objects. They can be attached to objects at creation or later on, and modified whenever.

Labels are key/value pairs attached to objects to:
- Embed version tags
- Classify an object
- Designate said objects for development, test, and production.

(Also, make a Service at the same time you make a Deployment with --expose)
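That combined creation can be sketched with kubectl run, whose --expose flag creates a matching Service alongside the workload (the name hello and image nginx are just placeholders):

```shell
# --expose creates a ClusterIP Service pointing at the new workload
kubectl run hello --image=nginx --port=80 --expose

# Both objects should now exist
kubectl get pods,services
```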

—————————————

Application: Running.

Services:



There's one named kubernetes, created by default when the cluster starts.

Make a new one and expose it to traffic (--expose).
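The exposing step looks something like this (kubernetes-bootcamp is the deployment name from the interactive tutorial; substitute yours):

```shell
# Expose the tutorial deployment on a NodePort
kubectl expose deployment/kubernetes-bootcamp --type="NodePort" --port 8080

# Confirm the new Service is listed
kubectl get services
```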

I type it in by hand (You can click the image and have the code automatically load and run, but what fun is that?) and get a “There is no need to specify a resource type as a separate argument” error.



I probably added a spare space in there somewhere.

But I try again and the service is now exposed.

But what port did we open? (Well, 8080, because that's what the code said, but look at the describe services command.)


[output of kubectl describe services]

Let's make an environment variable called NODE_PORT with a value assigned.
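A sketch of that assignment, using the go-template the official tutorial uses (again assuming the kubernetes-bootcamp service name):

```shell
# Capture the randomly assigned NodePort in a variable
export NODE_PORT=$(kubectl get services/kubernetes-bootcamp \
  -o go-template='{{(index .spec.ports 0).nodePort}}')

# A quiet success: the command prints nothing, but echo shows the value
echo $NODE_PORT
```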



Not 100% sure what happened. Although it’s the age-old computing adage - sometimes, if nothing shows up, you did a great job!

So we test it with curl:
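Assuming a minikube cluster (as in the tutorial), the test looks like:

```shell
# Hit the app from outside the cluster via the Node's IP and the NodePort
curl $(minikube ip):$NODE_PORT
```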


Hi, terminal!

==================

The second part is LABELS, which we can see with kubectl describe deployment.


We're going to query our list of Pods carrying this label with kubectl get pods -l (the -l flag is a label selector).
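The query might look like this (app=kubernetes-bootcamp is the label the tutorial's Deployment applies automatically; substitute whatever label yours uses):

```shell
# List Pods carrying the Deployment's label
kubectl get pods -l app=kubernetes-bootcamp

# The same selector works for Services
kubectl get services -l app=kubernetes-bootcamp
```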

(We also added it to existing services)

And we store the Pod name in an environment variable. Remember the command?


No.
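For the record, it looks something like this (the go-template is the one from the official tutorial; it prints every Pod name, one per line):

```shell
# Store the Pod name in an environment variable
export POD_NAME=$(kubectl get pods -o go-template \
  --template '{{range .items}}{{.metadata.name}}{{"\n"}}{{end}}')
echo "Name of the Pod: $POD_NAME"
```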

Apply a new label with the command + object type + object name and new label:


kubectl label pod $POD_NAME app=v1

Check it (there’s a lot of information here), and see the pods.


Okay, let's delete it now!
It's a simple command, and we can delete by label with it, too. Let's check what happened.
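A sketch of that deletion, assuming the tutorial's label:

```shell
# Delete the Service by label selector
kubectl delete service -l app=kubernetes-bootcamp

# Confirm it's gone
kubectl get services
```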



The route also isn't exposed anymore, giving off a (7) Failed to connect to 172.17.0.53 port 30810: Connection refused error message.

Luckily, the application is still running!
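One way to confirm that, as the tutorial does, is to curl from inside the Pod itself (assuming $POD_NAME is still set and the container image ships curl):

```shell
# The app still answers from inside the cluster,
# even though the Service and its NodePort route are gone
kubectl exec -ti $POD_NAME -- curl localhost:8080
```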

