Kubernetes provides different resources for deploying applications. We will look at each of them, the differences between them, and how we can persist data using each one.

Key takeaways:

  • What the controller concept in Kubernetes is
  • How Deployments, StatefulSets & DaemonSets work
  • What the differences between them are
  • How they differ when persisting data

Watch the presentation below given by Ali Kahoot, DevOps Engineer & Trainer at Tarabut Gateway.

Ali Kahoot:

I am working in the MENA region on Open Banking, heading the DevOps team. We are setting up the complete infrastructure, and the startup is growing rapidly. I'm basically the DevOps guy over there.

Bart Farrell:

Good stuff. How long have you been working there for? 

Ali Kahoot:

I switched in January. Before that, I was working at Instigator as a lead. So I switched to Tarabut Gateway in January.

Bart Farrell:

And you studied Computer Science?

Ali Kahoot:

I just completed my bachelor’s in Computer Science. 

Bart Farrell:

And you’ve also worked as a trainer. So on the training side, when did you start working as a trainer?

Ali Kahoot:

It's been almost two years now; I started in June 2019. We run DevOps sessions that cover Kubernetes, Docker containers, Git, Ansible, and all of those tools. It's an eight-week program, and we cover all of that there. That is what I do part time on Saturdays. So whenever an iteration starts, I do training, beginning with the history of DevOps.

Bart Farrell:

Hey everyone, we have Ali Kahoot with us, who’s an amazing engineer and very active in the community. Can you just tell us a little bit about yourself before we get started?

Ali Kahoot:

My name is Ali Kahoot. I'm a DevOps engineer and a part-time DevOps trainer. I have been working in this field for almost six years. I previously worked at Instigator as a DevOps lead, where I basically set up the complete team, and we used to provide Kubernetes offerings there. I am a computer scientist by background, and what I believe helps me is that I know what issues a developer would face; that's why I believe I can help developers coming into DevOps learn the tools and technologies and anticipate the issues they will run into. Other than that, I am currently working as a DevOps engineer at Tarabut Gateway. It's a Bahraini startup working on Open Banking in the MENA region. I am currently the DevOps guy there; we have set up the complete infrastructure on Kubernetes, and all the pipelines are on GitHub Actions.

Bart Farrell:

That was a good summary. This is meet-up number 49, and we have meet-up number 50 this week; we see lots of different perspectives. Regarding sectors, I would say the financial sector, along with telecommunications and healthcare, does seem to be one of the strongest examples. What is it about that sector that you think makes it a good use case for technologies such as Kubernetes?

Ali Kahoot:

One of the most important things would be security, because in the financial sector we have quite sensitive data that we need to secure. Another thing is microservices-based architecture: we are working on different sorts of small applications built on microservices principles, and we are trying to move towards cloud-native applications. Kubernetes covers all of that. We had moved to containers previously, and I had worked on Kubernetes and containers before as well, but managing containers yourself is quite hard. You need a container orchestrator like Kubernetes, or another one such as OpenShift, to offload things from the DevOps engineer. You need automation to offload things; you should not be pinging your DevOps engineer for each and every thing. Kubernetes handles most of those things itself.

Bart Farrell:

It's a really good point, and I'm really glad you mentioned that, because we're interacting more and more with younger folks. Whether younger or older, for people who are new to this world it often seems really overwhelming. Like you said, it's a question of focus: don't worry, we're going to be adding some things here, there are going to be some modifications, but you don't need to be jumping into absolutely everything. I know there's a lot of excitement, and when you look at all the memes in the Kubernetes ecosystem, a lot of folks wonder, how can I possibly learn all this? Maybe you don't have to learn exactly all of that. And to take it a little bit further, because obviously our community is focused on Data on Kubernetes: today we're going to be talking about StatefulSets, Deployments, and DaemonSets. When was your first experience? Because you've been working on this for a while; I'll link the blog post that you put out a couple of years ago, which lays this out very clearly. But when the data question came up for the first time, what did you think about it? What did it seem like?

Ali Kahoot:

Basically, we had an application that we had to convert to microservices. And we had to create the database for it as well, following microservices principles. So we did that. But at that point, we were also looking to deploy databases. So what we did was we looked at the Helm charts, and we deployed those charts. But one thing that confused me was: why aren't they using Deployments for data? We are using Deployments for our application, but when I'm deploying a Helm chart for, say, Postgres or MySQL or any other database, they're using StatefulSets. So that was one question: why do we need a StatefulSet? Why is the StatefulSet another resource provided by Kubernetes? Why can't we just use Deployments for those things? That was one thing that grabbed my attention: all the Helm charts use StatefulSets. Previously, I didn't know about them. But obviously, the first thing every developer does is deploy a Helm chart of a database whenever he needs one, and all those Helm charts deploy StatefulSets. So that was one major breakthrough in my career, I would say: I started reading about what StatefulSets are, why we need StatefulSets and not Deployments or simple pods, and why those can't be as efficient as StatefulSets. So I started to research it. And I found that there was no specific blog that covers all of this. So I started reading up, watching different videos, and building an application myself that I deployed using the three different models. Then I started to understand the differences. And then I thought it was the best opportunity to write a blog on it, because there was none at the time. So that's how that blog came into the picture.

Bart Farrell:

That was over two years ago. So we can see you’re an early adopter into this space. Our community’s been around for about a year now. We’ve interacted with lots of different folks, and that is one of the reasons why our community exists, so that there is a space where those resources can be shared, so that it’s not overwhelming. You obviously answered those questions with a fair amount of background experience, things like the capacity to build your own application, start working with it directly. But for example, we got a question right here already from Leonard and he’s asking, “As a Kubernetes newbie, I know about persistent volume claims. But the thought of pairing an SQL or Postgres database to a cluster is daunting”. Like we were saying the word overwhelming earlier. Could you describe quickly how we actually pair databases in a Kubernetes deployment? Or perhaps you’re going to get to this later in your presentation?

Ali Kahoot:

I will get to that in my presentation.

Bart Farrell:

Let's hold onto that question and jump into the presentation right now; I think it's a perfect time. If for whatever reason we don't get to that question, please remind us and we'll definitely get to it later. And just as a reminder, feel free to ask questions here in the Zoom, or if you're joining us on YouTube, or in Slack, whenever, and get those questions out there. As I already said, if we're not able to get to everything during this one-hour meetup, we can definitely continue the conversation later in Slack.

Ali Kahoot:

Welcome everyone. My name is Ali Kahoot. I'm a DevOps engineer at Tarabut Gateway; I have covered most of my introduction already. I'm also a certified specialist, but I won't repeat all of that since I've covered it. So let's jump to containers, because before moving to Kubernetes we need to know what a container is, how a container helps, and what Kubernetes does with a container. So what is a container? I will go through it step by step. Previously, if you wanted to ship a car to another country, or a barrel, or for that matter any box or anything, you had to have different sorts of shipping methods. But once shipping containers were introduced, you just had to put your car into a container, or move your oil barrel or box into a container, and you had a ship or any transport that would simply carry that container.

So you would not be worrying about what's inside that container, whether it's a car or anything else. Similarly, when Docker containers were introduced, you just had to modify your application so that it runs in containers, whether it's a back-end application, a front-end application, a queue, a database, or anything else. And on your infrastructure, you install some container runtime, for example Docker, that runs those containers. So whether it's a back-end app or a database, you just make a Docker image out of it, run that container, and make your infrastructure capable of running containers. This diagram is the basic container architecture; you must have seen it many times whenever you search for containers on Google. There is the base infrastructure, the hardware; then the host OS; then a container runtime, for example Docker or CRI-O; and then in your Docker image you install the libraries, the binaries, and your application code on top. You run those containers using the container runtime on top of the host operating system. So that is how a container works. But, for example, how can you pass an environment variable to a container? Say I have an application, for example a Java application.

How would I pass environment variables to it? How do volume overrides work? How can I persist data in a container? All of these capabilities are provided by Docker, or whichever container runtime we choose. For example, Docker gives us a way to add environment variables to our applications, mount volumes, and map a data path to a persistent volume; that is how we can persist data for an application. We have networking, for how containers communicate with each other; the command you want to run when your container starts; health checks; and resources. All of these are provided by Docker or the container runtime itself. And there are some basic commands for it. For example, you add a health check for your application in your Dockerfile, and then you run your application with a command like this:
docker run -it -e NAME="Ali Kahoot" --network dok -v dok-volume:/data/on/kubernetes --memory 1g <image> <command>
So I'm passing an environment variable NAME with the value "Ali Kahoot" to my container; the network is dok; I'm mounting a volume at the path /data/on/kubernetes; and I'm limiting resources so it can use only 1 GB of memory, plus the image I'm using and the command. All of these capabilities are covered by this one command.

So now, if you have to manage Docker yourself, that is a headache in itself. If you are managing containers individually, you will face a lot of issues, especially in production or any environment of that sort; there will be quite a lot of downtime. For that reason, Google introduced Kubernetes. I will not jump into the details of what Kubernetes is; briefly, it is a container orchestrator, which means it manages containers. Previously, if a container went down, there was nothing that would bring it back up. Kubernetes orchestrates that: it manages your containers, and whenever a container goes down, it keeps watching and tries to bring it back up. Whenever there is load, you can scale, and networking is quite easy in Kubernetes. For example, previously you had just one machine with, say, 8 GB of memory and 2 CPUs, so you could not run more containers than that machine could hold. Kubernetes provides a clustering sort of architecture where you can join different nodes together. So those are some of the benefits of Kubernetes.

So coming to pods: what is a pod? A pod is the smallest deployable unit in Kubernetes. You cannot deploy a container on Kubernetes directly; rather, you deploy a pod, which in turn runs containers. So a pod is just a wrapper around containers; it is the way to ship and run containers in Kubernetes, a way of provisioning containers. And it is declarative: you can write YAML or JSON to declare your containers and run them. One pod can have one or more containers. But in an ideal scenario, you would never run more than one application per pod. You can have a sidecar container in your pod, but not two applications running in a single pod. For example, you would never run a front-end and a back-end application in a single pod, but you could run a database plus a sidecar container that takes backups of your database. That sidecar is a helper, not a main application. So the rule of thumb is: never more than one application per pod; you can have more than one container, but the extras are helper or sidecar containers. And this is the whole architecture of a container in a pod. For example, this is a worker node in a Kubernetes cluster. Kubernetes has two kinds of nodes: master nodes and worker nodes. Whether it's a worker node or a master node, the infrastructure remains the same: the hardware, the host operating system, the container runtime such as Docker, and another tool named kubelet.

So now, in Kubernetes, you will never be running commands like docker run. Those commands are managed by the kubelet. You create a pod, and the API server schedules that pod through the kubelet, which receives the specifications of the container or pod. The kubelet then runs those pods and containers on whichever worker nodes are free, based on resources. The structure is the same: the application, libraries, and binaries go into an image that you build yourself; Kubernetes does not do that for you. Then you pass the environment variables, the volumes, the networking, the commands, and all of that to your containers using the pod definition. So that is how Kubernetes pods work and how you pass different information to your containers through the pod definition. For example, this is the same application that we ran earlier with the docker run command, now run using a Kubernetes pod definition: I have created a pod named dok-app with a container (containers are an array), I am mounting a volume on it that comes from a PVC, and I am mounting it on this path. So this is how you define a pod. Now, coming to controllers: what is the controller concept in Kubernetes? And how do controllers work?
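The pod definition walked through above might look something like this (a minimal sketch, not the exact manifest from the demo; the pod name, image, command, and PVC name are assumptions based on the talk):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: dok-app
spec:
  containers:
    - name: dok-app                        # the single application container
      image: busybox:latest
      command: ["sh", "-c", "while true; do echo 'Attendees are awesome'; sleep 10; done"]
      env:
        - name: NAME                       # same variable passed with docker run -e
          value: "Ali Kahoot"
      resources:
        limits:
          memory: "1Gi"                    # same 1 GB memory limit as the docker run example
      volumeMounts:
        - name: data
          mountPath: /data/on/kubernetes   # same mount path as the docker run example
  volumes:
    - name: data
      persistentVolumeClaim:
        claimName: dok-pvc                 # a PVC assumed to exist already
```

Applying this with kubectl apply -f pod.yaml creates the pod, and the kubelet on whichever node it is scheduled to runs the container.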

The Kubernetes architecture is based on controllers. If we don't know what a controller is, then obviously it is quite difficult to understand the whole Kubernetes architecture. In simpler terms, a controller is just like a security guard that continuously tries to sync the current state with the desired state. Whenever the current state is not equal to the desired state, it becomes unhappy and asks the Kubernetes API server to do something to fix it. Doing something means syncing the current state towards the desired state. This controller architecture is the basis of Kubernetes itself. There are a lot of controllers that I will mention later on as well, but this is the base of it. Whenever the current state becomes equal to the desired state, the controller gets happy and goes back to sitting in a loop; whenever the current state drifts out of sync with the desired state, it will again ask the Kubernetes API server to fix it. So there are different controllers in Kubernetes: the node controller, service controllers, and Ingress controllers that you deploy yourself. Then there are the controllers you can use to deploy your applications: ReplicaSets, Deployments, StatefulSets, and DaemonSets. Replication controllers have been deprecated, so you don't need those.

Bart Farrell:

I think we got a question here. Why can we not set multiple replica sets on an EKS cluster?

Ali Kahoot:

We can set multiple replica sets on an EKS cluster. Replica sets have nothing to do with EKS specifically; they are part of the base of Kubernetes, and you can create multiple replica sets. There is no restriction on that. They are quite straightforward, and we will be covering them next. So, what is the issue with pods? For example, if I simply deploy my application using individual pods, the issue is that these pods cannot be updated: if I have a new version of my image, it cannot be rolled out. They also cannot be scaled: if there is load on my application and I only have individual pods, I cannot scale them. And obviously, if a pod is deleted, it will never be re-created on its own. What a replica set does is manage pods as replicas. For example, I want three replicas of my application, so I specify three replicas in my replica-set definition, and it creates exact copies of the pod I described. A replica set simply creates replicas of your pods; the replicas are exact copies, and the pods of a replica set can be used interchangeably. So a replica set is a controller: it sees that the desired number of replicas of a pod is three and the current number is zero, so it asks the API server to increase the replicas. When the current number is one, one is still not equal to three, so it again asks the API server to add a replica. Now, what if I go and delete a pod manually? Then the current state is again two while the desired state is three.
So what the replica-set controller will do is again ask the API server to create a new pod with all of the specifications, and the API server will create that pod on any of the nodes. So this is the basic replica-set controller: it just creates replicas of your application. Once the current and desired states are equal, the replica-set controller gets happy, and it goes back to watching in its loop. So what are the shortcomings of a replica set? Why don't we use replica sets? You must have heard about Deployments, StatefulSets, and DaemonSets, but never replica sets, as an option to deploy your applications on a production setup.
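A replica set like the one described might be declared as follows (a sketch under assumed names; note that the selector must match the pod template's labels):

```yaml
apiVersion: apps/v1
kind: ReplicaSet
metadata:
  name: dok-rs
spec:
  replicas: 3                  # desired state: three identical pods
  selector:
    matchLabels:
      app: dok                 # must match the labels in the template below
  template:                    # the pod template the controller stamps out copies of
    metadata:
      labels:
        app: dok
    spec:
      containers:
        - name: dok-app
          image: busybox:latest
          command: ["sh", "-c", "while true; do echo 'Attendees are awesome'; sleep 10; done"]
```

Deleting any one of the three pods drops the current state to two, and the controller immediately asks the API server for a replacement.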

Bart Farrell:

Is a replica set the controller that controls horizontal pod auto scaling?

Ali Kahoot:

Yes, basically, if you use a horizontal pod autoscaler, it will automatically update the replica count for that specific replica set. So what are the shortcomings of replica sets? Replica sets do not handle versioning. The issues we mentioned earlier for pods were that neither versioning nor scaling was handled; a replica set handles only the scaling part, and versioning is still not handled. For example, my application is running on version v1.0 and I want to update it to v2.0. A replica set doesn't do that; we would have to delete the previous pod, and only then would the new pod be created with the newer version. So, on EKS I have created a cluster with my name, Ali Kahoot. It is a three-node cluster, so three nodes are running. Currently there are only three volumes, one for each node, and this is the counter application that I have created for it. So here is a simple pod. This is the cluster, viewed in Lens, a Kubernetes IDE, and this pod is now created. If I check its logs, it says the attendees are awesome. So all of you are awesome; the pod and Kubernetes certify that you are awesome. But remember the issues we mentioned about pods: if I delete it manually, or by mistake, it will not get recreated. There, it is terminating and will not be recreated. Now I create a replica set, dok-rs, and I specify two replicas for my application. If I run this, we can see the old pod is still in the terminating state; I don't know why it hasn't finished terminating yet. But this is the replica set. You can see that what I created was a replica set with the name dok-rs, but the pod was created as dok-rs plus a unique ID. Whenever a replica set creates a pod, it appends a random ID to the name.
So if I look at the logs again, it prints "attendees are awesome" every 10 seconds, because that is what I have asked it to do. So that is the replica set. But now, for example, I want to update my image. Previously I was using busybox:latest, but now I want to update my image to 1.33.1. I apply the change, and it shows me that the replica set has been configured. If we look over here, the replica set is configured, but the pods do not pick up that change; the change is not reflected on the pods. If we check the image, it is still busybox:latest; the pods have not been updated. So a replica set does not update the version of its pods. But if I delete a pod manually, then the new pod will have the new image. Obviously, we don't want behavior where we have to delete pods manually every time. Here you can see the new pod has the newer image, but the previous pod still has the old busybox image.

So that is why we cannot use replica sets for deploying our applications. If I go back and we talk about Deployments: what a Deployment does is manage replica sets based on configuration updates. For example, I apply a configuration; the Deployment creates a hash out of it and creates a replica set for it. When I update that configuration, for example when I update the image, it creates a new replica set and performs a rolling update. And what if that new replica set, or that new image, has some error or bug in it? The Deployment also provides support for rollback.
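A minimal Deployment for the same app might look like this (names and image are assumptions based on the demo):

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: dok-deploy
spec:
  replicas: 2
  selector:
    matchLabels:
      app: dok
  template:
    metadata:
      labels:
        app: dok
    spec:
      containers:
        - name: dok-app
          image: busybox:1.33.1   # changing this and re-applying triggers a rolling update
          command: ["sh", "-c", "while true; do echo 'Attendees are awesome'; sleep 10; done"]
```

Updating the image and re-applying creates a new replica set and scales the old one down; kubectl rollout undo deployment/dok-deploy reverts to the previous replica set if the new version is broken.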

Bart Farrell:

I'll ask a really quick question from Saudi. Is it possible to have multiple replicas, but with each replica's container having access to an environment variable that reflects the replica's ID, and having this value accessible from within the application code?

Ali Kahoot:

Basically, we can do that using the Kubernetes API. I am not sure whether we can do it natively; I don't think we can, apart from what we can get from the metadata. But we can certainly do it if we call the Kubernetes API from inside the application: whatever language your application is written in, you use the Kubernetes SDK and implement the logic to fetch the replica-set ID for the pod. So you would have to implement that yourself in code; natively, I don't think you can get more than the pod name. Okay, so moving forward. In the previous example, my desired state is the dok-app with the busybox:1.33.1 image, and the current state is that the busybox:latest image is deployed. So what the deployment controller will do now is ask the API server to create a new replica set with the busybox:1.33.1 image and scale down the previous replica set as the new replica set spins up. Once that is up, it will be green. So let me show the exact example. I will just delete the replica set that was created earlier. Now if I deploy the Deployment, we can see its name is dok-deploy, and the pod name, if you see, is dok-deploy, then a unique ID, then another unique ID. Why two unique IDs, and how do these come about? The Deployment created a replica set: I created this deployment, dok-deploy, and it created a replica set, dok-deploy plus a hash. This replica set in turn created these two pods, whose names are the replica-set name plus a unique ID for each pod. And now if I update my deployment with the 1.33.1 image, what will happen is it will create a new replica set.
So you can see over here, a new replica set is created, which is being scaled up and the previous one is being scaled down. So that’s how the versioning is handled. 

For example, there is an error in the new deployment, the new replica set. What I can do is simply scale this one back down and scale that one back up; there is a kubectl command for this, kubectl rollout undo. You can roll back to the previous version if there is an error in the new version of your application. That is why Deployments are the default method of deploying your applications in a production setup, or really throughout your application's lifetime: they provide versioning of your applications, and if the newer version has an error, you can easily roll back. You also saw that it did a rolling update: it scaled the new one up and scaled the old one down. So for example, if the new one hit an error in between, or if I had specified a wrong image name (for example, if I go over here and set an image name that obviously does not exist on Docker Hub) and I apply it, it will create a new replica set, but its current count will never reach one because it will be stuck in ImagePullBackOff. Your application is still up thanks to the previous replica set. So one of the most important benefits of a Deployment is that it provides rolling updates.

So these are your two application pods that are still up even though you have specified the wrong image name; you have basically committed a blunder on your end. But Kubernetes is fault tolerant, so it tolerates that fault when you are using a Deployment controller. Now another controller is the DaemonSet. A DaemonSet is almost similar to a replica set. The only difference from a replica set is that a DaemonSet runs one replica on each node of your cluster. For example, if you have five nodes, there will be five replicas of your application, one running on each node. That is how DaemonSets work. They also provide rolling updates, but they cannot do rollback; as we mentioned, a replica set does not have a rollback option, Deployments do, and a DaemonSet is similar to a replica set. You might be thinking: why would we deploy a DaemonSet? Why would we need replicas equal to the number of nodes? For example, I want to monitor my cluster, node by node. Say node one is consuming 100% or 80% of its memory, node two is consuming 60%, node three is consuming 70%. I should be able to see how much memory and other resources are being consumed. So what I do is deploy a DaemonSet of a monitoring exporter. The exporters run on each node and send the data about your nodes. In that case, you obviously need a DaemonSet so that one runs on every node.
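A DaemonSet for the same demo app might be sketched like this (names are assumptions; note there is no replicas field, since the controller runs one pod per node):

```yaml
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: dok-ds
spec:
  selector:
    matchLabels:
      app: dok-ds
  template:                 # one pod from this template is scheduled onto every node
    metadata:
      labels:
        app: dok-ds
    spec:
      containers:
        - name: dok-app
          image: busybox:latest
          command: ["sh", "-c", "while true; do echo 'Attendees are awesome'; sleep 10; done"]
```

On a three-node cluster this yields three pods, one per node; adding a fourth node automatically schedules a fourth pod there.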

Bart Farrell:

What are some examples of applications that we would run as a DaemonSet? Fluentd, anything else? And what are some of the more interesting apps you've run as a DaemonSet?

Ali Kahoot:

Sure. As I mentioned, Fluentd: it's a logging exporter, and it needs to run on all nodes because it scrapes all the logs from your nodes and pushes them to Elasticsearch or whatever other storage you are using. Another, as I mentioned, is monitoring exporters: you also need the basic metrics of your nodes, so you need a DaemonSet for that as well. Similarly, another DaemonSet that comes by default is kube-proxy. kube-proxy is a tool that handles the networking of your cluster, so it should run on all nodes, since it needs to manage the networking of the pods running on each node, and it manages the routing tables of your Kubernetes cluster. So kube-proxy is another one. I am not sure what this one does, but it seems that AWS runs it on its EKS clusters to monitor something or do something of that sort. So: anything that needs to run on all nodes, to gather information or to apply an update on your nodes. One other example I used on a production cluster once: if you remember, Docker Hub implemented API rate limiting on its basic tier. At that time we had a production outage, because once the API limit was reached, images could no longer be downloaded.

So what I did was I created a DaemonSet that would update the Docker config on each node, adding my credentials there so that the API rate limit would not be hit. We had a paid account, but we were not using it, because we never thought API rate limiting would cause any issues for us. So we created a DaemonSet that updated the Docker configuration on each of our nodes, and we rolled it out to production at once, because for a brief amount of time we were affected by that. So that is another example of a DaemonSet. Okay, so again, a DaemonSet is a controller, and what it does is make sure that one replica is running on each node of your Kubernetes cluster: five-node cluster, five replicas; three-node cluster, three replicas. Once all of them are up, it is happy. As for StatefulSets, they are really about persistence, so I am not covering them here; I have a complete section on persistence coming up. So, just an example of a DaemonSet: if I run the same example as a DaemonSet and go over here and choose the default namespace, this is the dok-ds DaemonSet. You can see that each pod is running on a different node: this one on 2083, this one on 2252, this one on 2122. All of these are on different nodes, and I have a three-node cluster, so three replicas, one on each node. That is another example of how DaemonSets work. It does the same thing, because I have used the same application that just prints that the attendees are awesome. So now, coming to persistence in Kubernetes and how persistence is handled. That was just a brief comparison of Deployments, replica sets, and DaemonSets; now, persistence in Kubernetes. How does that happen?
For this section, I will be using a sample counter application. What it does is read the last line from a file, increment the number, write it back to the file, sleep for a random one to three seconds, and then repeat. This is the script that I have written, and it is available at this URL; I have open sourced it. So it simply reads the file counter.txt, increments the value, writes it back to the same file, sleeps for roughly one to three seconds, and keeps doing that continuously. So what happens when we talk about persistence in Kubernetes? As I mentioned, volumes and all of those things are already part of containers; you do not need to do anything extra for that.
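The logic described above can be sketched in a few lines of shell. This is not the speaker's open-sourced script; the file path and the finite iteration count are assumptions made so the sketch terminates (the real script loops forever inside the pod):

```shell
#!/usr/bin/env bash
# Hypothetical sketch of the counter app: read last line, increment,
# append, sleep 1-3 seconds, repeat.
FILE="${COUNTER_FILE:-$(mktemp)}"   # real app would use e.g. /app/counter.txt
ITERATIONS="${ITERATIONS:-3}"       # finite for the sketch; real app loops forever
i=0
while [ "$i" -lt "$ITERATIONS" ]; do
  last=$(tail -n 1 "$FILE" 2>/dev/null)   # read the last line (empty at first)
  echo $(( ${last:-0} + 1 )) >> "$FILE"   # increment and append
  i=$(( i + 1 ))
  sleep $(( (RANDOM % 3) + 1 ))           # random 1-3 second pause
done
```

Running it appends 1, 2, 3, ... to the file, which is what makes gaps and duplicates easy to spot later when several replicas share one volume.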

Bart Farrell:

Do daemon sets support replica sets? Can I run two replicas of a daemon set on the same node?

Ali Kahoot:

No, daemon sets do not support replica sets; the replicas run only on different nodes. You cannot run two replicas of a DaemonSet on a single node; we cannot do that.

So basically what happens is, for example, you have a container and you want to mount the /app/data path on a volume; you want to persist it. For example, MySQL has a data path like /mysql/data, as do other databases. So I want to persist this data. What I will do is create a PVC (I will show you the definition of a PVC in a moment) and use that PVC as a volume in my pod, mounting that volume on this path of my container-1. For example, if I have three containers, I can mount the same volume on all three. But as I mentioned, it is not good practice to have multiple containers running in a single pod, so we should avoid that. Coming back to persistence: I will create a PVC, it will create a persistent volume, which will create actual physical storage on the cloud or wherever your cluster is hosted. It will create block storage or file storage, whatever you use, and you get a volume backed by that storage on which you mount this path. So whenever I write anything on this path, /app/data, for example if I create a file named AliKahoot.txt, it will be stored on that physical storage.
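The setup described above might look roughly like this. The claim name `dok` and the 1 GB size follow the demo that comes next, but the pod name, container name and image are assumptions:

```yaml
# Hypothetical sketch: a PVC, and a pod that mounts it at /app/data.
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: dok
spec:
  accessModes: ["ReadWriteOnce"]
  resources:
    requests:
      storage: 1Gi          # 1 GB of storage, as in the demo
---
apiVersion: v1
kind: Pod
metadata:
  name: persistence-pod      # assumed name
spec:
  containers:
  - name: container-1
    image: busybox           # placeholder image
    command: ["sleep", "infinity"]
    volumeMounts:
    - name: data
      mountPath: /app/data   # anything written here lands on the volume
  volumes:
  - name: data
    persistentVolumeClaim:
      claimName: dok         # binds the pod to the PVC above
```

With no `storageClassName` set, the cluster's default storage class provisions the backing volume, which on AWS is what creates the EBS volume you see in the cloud console.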

So, I have set the persistent volume claim name to DoK, I am using 1 GB of storage, and it is mounted on the Data on Kubernetes path. Now, if you look over here, this is my pod; it is still waiting for the persistent volume claim to be created. If I go to Storage, PVC, this one is created; it was created just 21 seconds ago, and this is the PV that was created in response to it. And if I go to my actual cloud, as I showed you previously, there were only three volumes. But now if I refresh, there is a new volume that has been created. If I check the tags, this is the PVC name, i.e. DoK, that I created, and this is the namespace, default. So this is how the actual persistence is happening. For example, I have created a file a.txt and I exit from it, I delete my pod, and now the pod is deleted; I create another pod, persistence-pod-2, but the volume and the PVC are still the same.

So, you can see over here: successfully assigned, pulling image, successfully pulled image. Only once the old pod terminates will this one start. Okay, so this is where I started. If I cd into the mounted directory and do ls, I can see the same file that I created on the previous pod. The data still persists.

Bart Farrell:

Quick question, can you show the demo for persistent volume?

Ali Kahoot:

We are creating persistent volume claims, which in turn create the persistent volume. So this is the persistent volume; if I edit it over here, this is the actual spec: the capacity, the AWS EBS volume it is backed by, the access mode, the claim reference, and the storage class name. Because I didn't mention a storage class name, it took the default storage class on AWS, which is gp2. So it created this; this is the PV, the persistent volume spec, all of it. You can create a PV yourself as well, but that would be a manual step. What we are using is claims, so I created a claim which in turn automatically created a persistent volume for me. Okay, so I will be deleting this.

Bart Farrell:

And one more question, so it’s created dynamically with the help of storage class?

Ali Kahoot:

Yes. Whichever storage class you use will provision that block storage for you. So, persistence in deployments: how does the persistence we just checked on a single pod work for deployments? Because deployments create replica sets, and then come the pods. So how does that work? Over here, for example, I have a deployment named counter which has one replica. It will create a replica set, which will create a pod, and I have a PVC, counter, that I have already created and mounted on my deployment. It is the same as it was with plain pods. But what if I scale up my deployment? Say I scale it to three replicas. I have a deployment with three replicas; it created a replica set, the replica set created three pods, and each of them is sharing the same PVC, the same volume. This is how persistence happens in a deployment, because deployments are not meant for stateful applications. In a deployment, all the replicas share the same volume because, as I said, pods xyz-1 and xyz-2 are interchangeable, and should be interchangeable; one can replace another. That is why there is no specific order for the pods, and all of them share the same volume. The same is the case with daemon sets; the only difference is that a daemon set's replicas run on different nodes, but they still share the same PVC.

For example, I showed you the counter application that reads from a file, increments the value, writes it back, and then sleeps for a random one to three seconds. If I deploy a deployment over here, this is my deployment, and this is the image I have created with that specific shell script. The only difference is that now I am using an EFS storage class, because EBS, the block storage, cannot be attached to more than one instance at a time. It can only be attached to one replica; the other replicas will start failing, saying that the block storage cannot be attached to more than one instance. File storage, on the other hand, can be attached to multiple instances at once, so we are using the Elastic File System of AWS for this example. Now if I deploy the deployment and look over here, my deployment is created, it has only one replica at the moment, the PVC is created, and the storage class is EFS. It started from 1, 2, 3, 4: it reads the file, increments it, and logs it back again. So it is all in sequential form.
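A sketch of this setup follows. The `ReadWriteMany` access mode is what lets multiple pods on different nodes mount the same volume; the storage class name `efs`, the PVC name and the image are assumptions, not the speaker's actual manifests:

```yaml
# Hypothetical sketch: three deployment replicas sharing ONE PVC,
# which is exactly what makes the counter data inconsistent.
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: counter
spec:
  accessModes: ["ReadWriteMany"]  # file storage (EFS) allows multi-node attach
  storageClassName: efs           # assumed EFS storage class name
  resources:
    requests:
      storage: 1Gi
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: counter
spec:
  replicas: 3
  selector:
    matchLabels:
      app: counter
  template:
    metadata:
      labels:
        app: counter
    spec:
      containers:
      - name: counter
        image: example/counter    # placeholder for the counter script's image
        volumeMounts:
        - name: data
          mountPath: /app         # counter.txt lives here
      volumes:
      - name: data
        persistentVolumeClaim:
          claimName: counter      # every replica binds to this same claim
```

Had the claim used an EBS-backed class with `ReadWriteOnce`, only the first replica would attach and the other two would stay pending, as discussed below.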

Bart Farrell:

One question. In practice do pods that are deployed with PVC take longer to spin up? And how reliable are PVCs in helping pods to persist state? Do you notice any latencies? 

Ali Kahoot:

No. Basically, what happens is that the EBS volume, or whatever cloud storage you are using, gets attached to your node. For example, when I showed you the volumes over here, these three are attached to nodes, and this is one more volume that is now attached to your node. So there is no real latency. The default way of persisting data is using PVCs. If you have quite latency-sensitive data, you can choose a better storage class or a better volume type. For this one, the IOPS is 100; you can choose a storage class with better IOPS, but obviously that would be more expensive. So there are pros and cons on both sides.

Bart Farrell:

As well, in terms of your experience with some of the different solution options that are out there, when we say Ceph or OpenEBS, which one have you had the most experience with?

Ali Kahoot:

I have not used Ceph that much. I have used it, but not much, because I don't have that much latency-sensitive data of that sort. So I have been using the plain storage classes myself, but you can use Ceph with Rook; these are enterprise-grade solutions for your databases and your data. You can use them if you have sensitive data, latency concerns, and all of those things. Okay, so now I have scaled it up to three replicas. You can see that this pod's logs didn't start from one; rather, they started from 15, because the first pod, which has been running for 118 seconds, had already logged something to the file. And now you can see that the data is totally inconsistent: 17, 18, then 20, 20, 21, and 22, 23 are missing. The data is inconsistent because all three pods share the same volume and are all trying to change the data on it. You can see that 70 appears above 68. Because we have a random sleep, this pod might have slept for three seconds and that one for two seconds, but when they wrote back to the file, the data got inconsistent. So for that reason, we don't use PVCs, or this kind of persistence, with deployments when we have stateful data. As you can see, the data is totally out of order; data consistency is completely lost when we use one PVC with three replicas of a deployment. So, let me delete it. The same is the case with the daemon set; I will just apply the daemon set, and then I will move on to stateful sets. Okay, so again, persistence in daemon sets is exactly the same: they all share the same PVC, and the logs will all be totally out of order. Again, in the case of daemon sets, it is also never used this way. This is just to show you the differences between the three in terms of persistence.

Bart Farrell:

What if I have a volume or drive with data? It’s needed for the application? How can you coordinate the app startup and make sure that the data from the volume is available for it?

Ali Kahoot:

Okay, so basically you mean the time it takes to provision the volume. For that, we have probes in Kubernetes; you can have liveness and readiness probes. What the readiness probe does is not mark your pod as ready until the command you specify succeeds, so you can write a command that checks whether the volume has been created yet or not. Or you can have an init container for it: in the init container you check that condition, and only once it passes will the main container start running. So you have two options for that. Now, coming to stateful sets, the main thing used for data on Kubernetes: the StatefulSet is the main controller for managing stateful applications. You declare a PVC template inside the statefulset manifest, and it guarantees the ordering and uniqueness of the pods by using an incremental naming convention. The replica pods are not interchangeable; they have unique, incremental numbering, for example 0, 1, 2, whenever you have multiple replicas of a stateful set. Each replica has its own state; they do not share state with each other, they are not interchangeable, they are separate entities with their own separate state. So they are useful for databases. This is how persistence in stateful sets is done: this is a stateful set with one replica, and it created a pod, counter-0. In the deployment, as I said, the naming of the pods was totally random.
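A minimal sketch of the PVC template mechanism follows; the names and image are assumptions. The key difference from the deployment manifest is `volumeClaimTemplates`, which stamps out one PVC per replica instead of binding every replica to one shared claim:

```yaml
# Hypothetical sketch: each replica (counter-0, counter-1, counter-2)
# gets its own PVC (data-counter-0, data-counter-1, data-counter-2).
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: counter
spec:
  serviceName: counter        # headless service name, required for stable DNS
  replicas: 1
  selector:
    matchLabels:
      app: counter
  template:
    metadata:
      labels:
        app: counter
    spec:
      containers:
      - name: counter
        image: example/counter   # placeholder for the counter image
        volumeMounts:
        - name: data
          mountPath: /app
  volumeClaimTemplates:          # a PVC is created per replica from this template
  - metadata:
      name: data
    spec:
      accessModes: ["ReadWriteOnce"]
      resources:
        requests:
          storage: 1Gi
```

Because every replica has its own `ReadWriteOnce` claim, EBS-style block storage works fine here, unlike the shared-PVC deployment above.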

Bart Farrell:

Just curious, what happens if we use two replicas in the YAML of the deployment and use EBS block storage for the PV?

Ali Kahoot:

One will connect to it and the other one will fail, because EBS can only be attached to one instance. So one replica will be able to connect, but the other will keep failing, saying that the EBS volume cannot be attached to more than one instance.

So this is a pod; it created a PVC, counter-0, and it all looks good. But what if I scale it up? I scale it to three replicas. It will not create all three replicas at once, as the deployment did; the deployment scaled up all at once because, as I said, its pods are interchangeable. The statefulset will create a new pod, counter-1, which will have its own PVC. Then, once counter-1 is fully up and ready, it will create the next pod, counter-2, which will have its own PVC. Now counter-0 and counter-2 are not interchangeable, because each has its own PVC and its own data; you cannot say that counter-0 can be replaced by counter-1, because they both have their own data. So if we use a deployment for databases or stateful applications, data consistency can be compromised, which we just saw: we were writing to a file and reading it back, and with a deployment we were not able to keep the data consistent. High availability also cannot be achieved, because if the shared PVC is deleted, all of the data is gone for every replica. So better high availability can be achieved with stateful sets. One drawback of stateful sets is that they don't have replica sets, so you have no rollback options; you have to manually roll back to the previous version. And I will just show you the example for the statefulset. Okay, one thing that I missed about stateful sets is that when you delete a stateful set, the PVCs do not get deleted. That is a benefit of stateful sets: even if you delete the stateful set, the PVCs remain.

So I'm making sure that the PVCs are deleted before applying the stateful set, so there are no stateful sets and no PVCs. Once it's terminated, I will have to manually delete this; it is now deleted. So now I am applying a stateful set. We have three replicas of my stateful set; first the PVC counter-0 is created and the first pod gets created, and once it is in the running state, the next pod will be created. I have scaled it up. It is creating a new pod, counter-1; a new PVC, counter-1, is being created; and a new pod, counter-2, is also being created with its own PVC. So each PVC has its own pod and each pod has its own PVC. Let me just apply that deployment again so you can see the difference and confirm, from the PVC side, that I am telling you the right thing. Over here you can see that there are three pods connected to the same PVC: counter-7-GRC, counter-7-R9, counter-7-V2. So the same PVC is connected to three pods, but in stateful sets each of the pods has its own PVC. So how is this beneficial for databases? Whenever you deploy a database in a Kubernetes cluster, you want it to be highly available, so you create a cluster for your database. For example, if we are deploying Postgres on our Kubernetes cluster, what we want is a Postgres cluster. One pod would be the primary replica, and the other two would be the secondary replicas, which just sync the data from the primary. So these two are secondary replicas and this is the primary replica; this one is read-write, and these are only reading the data from postgres-0 and saving that state.

Basically, write activity is performed on this pod only; the others are read-only, secondary replicas. What if this one goes down, the pod goes down or the node goes down? One of the two secondaries will become the primary. And since the stateful set is a controller, it will provision a new pod with the same name as before. With deployments, another issue is that we don't know the exact name of a new pod; with stateful sets, the name will be the same, and the recreated pod will now become a secondary replica. Now postgres-1 is the primary replica, and the recreated postgres-0 becomes a secondary replica, which again starts copying the data from the primary. So that is a benefit of using a stateful set: each pod has and maintains its own state, and you can have a highly available database in your cluster. That's how it works. And why do you need stateful sets for databases? Databases form clusters; you never just have interchangeable replicas of your database, you have clusters where one is the primary and the others are secondaries. Each individual pod should have a unique identity, because they communicate with each other as a cluster; the primary should know, or be able to predict, the other pod names. As we said, the stateful set increments the name: counter-0, counter-1, counter-2, counter-3. We can all easily predict what the name of the next replica will be, and even if a pod goes down, it comes back with the same name as before, so the networking is easy. With deployments, the names will obviously be different whenever a new pod comes up. With stateful sets, the ordering and the uniqueness always remain, and the PVCs are not deleted automatically, even if the pods or the stateful set get deleted.
For example, I will just delete the stateful set; we can see that the PVCs do not get deleted. The pods are in the terminating state, but the PVCs remain even though the stateful set is deleted. That's another benefit of stateful sets: whenever you recreate the stateful set, it will again use the same old PVCs.

Bart Farrell:

Would a front-end stateless deployment tap into back-end stateful database pods?

Ali Kahoot:

Basically, front ends tend to be stateless applications; they do not have any sort of state in them, they are just serving the end user. So you can have deployments for them, with multiple replicas where each replica is the same, whether the call goes to the first replica or the second. But databases, or any application that has state involved, would use stateful sets.

Bart Farrell:

A couple of other questions. One thing to follow up on: are the database stateful sets served by PVCs in terms of hard disk storage?

Ali Kahoot:

Yeah, basically, with PVCs you need to choose whichever storage class or storage volume you want. You can use EBS, EFS, or even the storage on your nodes; for example, I have a three-node cluster, so I could use the storage of those nodes for my PVCs. But that is totally up to you, whichever you want to choose.

Bart Farrell:

One of the things that I really liked about this, having seen so many meetups, is starting with the building blocks and then finishing with the things that are a little bit more complex. You can tell that you have experience as a trainer, both explaining these things and handling the questions. Have we ever had so many questions in a meetup? It's really nice to see those things getting answered for folks. What are the resources that you go to to learn about Data on Kubernetes?

Ali Kahoot:

First of all, Kubernetes' own documentation on kubernetes.io is quite good; you can go to that. But other than that, you obviously need a community, and the DoK community is quite helpful. There is also the Kubernetes Slack itself. Whenever I get stuck on something related to Kubernetes, the first thing I do is ask a question over there, and then I do my own research in the meantime; some members of the community will be able to answer your question. Similarly, for data-specific questions, the Data on Kubernetes community is quite a good initiative, but for basic searching or basic knowledge gathering, Kubernetes' own documentation is enough, I would say.

Bart Farrell:

We got another question, do we have a demo with no SQL and relational databases?

Ali Kahoot:

Not at the moment.

Bart Farrell:

Feel free, if you want, to continue the conversation in our Slack. You can also check us out on Twitter; we're also on LinkedIn, constantly sharing stuff. We will be happy to answer all your questions in Slack as well. We have another meetup coming up this week; we have plenty of things going on. This was a very informative meetup, and it's another reminder to all the folks out there that this is a perfect example of what we're going for as a vendor-neutral community. It's sometimes very difficult to say what kind of talks we're looking for, but "corporate slide decks" are not it.

Bart Farrell:

As you can see, the feedback is really good. As we said, it's not so daunting, it isn't so overwhelming; we can unpack these concepts and see how they work. And for folks taking off, you met Ángel in the very beginning, and what he was doing while you were explaining everything so wonderfully was creating this graphic recording. So you can check out the visual summary of the different things we were talking about. It's always nice to see how we started and how we finished. Once again, he did an excellent job of explaining the basic concepts of containers and pods, showing the questions that Kubernetes was answering in the early days. So it's a very nice visual summary of all the stuff we were able to touch on. I'm still getting great feedback from our guests who were here today. Your blog, we'll link that on Twitter and LinkedIn. But as you saw, folks, he's very capable of answering questions very quickly, so feel free to reach out on Slack if you have anything else. Are there any other closing remarks you'd like to make before we finish up?

Ali Kahoot:

No, you've basically summed it up quite well. The only thing I believe is that Kubernetes is the future of hosting applications, so jump on it, research it, and don't just deploy applications; know the basics behind them. My main purpose for this talk was that you should know the basics, because I myself used to deploy Helm charts, and my task was getting done whenever I deployed applications using a Helm chart, but I didn't know the main differences between these resources. Similarly with other resources on Kubernetes: it is quite complex, but try to grasp the basics of the different things. That's my ending point.

Bart Farrell:

I think it’s a great way to finish. Ali, thank you again, and we’ll continue the conversation on Slack. Thanks, everybody.