Data on Kubernetes Day Europe 2024 talks are now available for streaming!


Building a Digital Factory for the Sheet Metal Industry

CamData 3D, one of the newest projects of Kurago in partnership with Bystronic, provides a multi-tenant cloud service aiming to improve the digitization of the sheet metal industry. In this talk, Senior DevOps Engineer Elie Assi elaborates on why and how they chose to host their service on Kubernetes instead of virtual machines.

Elie Assi  00:02

Hello everyone. My name is Elie Assi, and I'm a Senior DevOps Engineer at Kurago Software. On a daily basis at Kurago we work with cloud-based services like Kubernetes and AKS, and a ton of automation tools like Ansible, Puppet, Terraform, and a bunch of other things. I live in Bilbao, in Spain, and, like I said, I'm part of the DevOps team at Kurago Software.


Kurago itself is part of the Bystronic group. Bystronic is a Swiss company that produces machinery for bending metal sheets, for cutting metal sheets, for automation, for robots, and a bunch of other things. As part of the Bystronic group, we are building a smart factory, which is the entire software solution from quoting all the way to the actual delivery of the product to the market, including controlling the robots on the factory floor.

A lot of our software is factory-based and some of it is also cloud-based for our multi-tenant clients. 


By the way, these are some of the machines that Bystronic produces. These do cutting and bending, and this is part of the automation on the factory shop floor.


One of the recent projects that we have been dealing with is called CamData 3D, a project between Kurago and Bystronic. This service is cloud-based and multi-tenant. We wanted to containerize this software and host it in the cloud. Apart from the software itself, the backend is a MongoDB, and we also needed a Mongo Express, to have a UI for the development teams to be able to make any changes inside MongoDB itself. This is a summary of what we wanted to do, pretty much; this is the project itself.


Our first approach was actually to use Azure virtual machines. As with every project, we wanted to start with a virtual machine to do our testing and evaluate how well the infrastructure would behave in production on a virtual machine, how it would scale up, etc. So we went for a basic setup in Azure, with a normal public IP, and we used managed disks so that, if something happened to the virtual machine, we would not lose the data. Our logging solution was also linked to that, so logs could be transferred to a machine outside for the development team to view. We containerized our application itself here, and we also wanted to run MongoDB and Mongo Express as containers, because we didn't want to deal with the actual operating system and its dependencies. We wanted our applications to be easy to deploy within containers.
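One way to picture this VM-era setup is as a Docker Compose file with the three containers side by side. The talk does not say Compose was actually used, so the service names, image tags, and ports below are purely illustrative:

```yaml
# Hypothetical sketch of the VM setup: app, MongoDB, and Mongo Express
# as containers on one Azure VM. All names and versions are assumptions.
services:
  app:
    image: myregistry.azurecr.io/camdata-app:latest  # placeholder image name
    ports:
      - "80:80"
    depends_on:
      - mongodb
  mongodb:
    image: mongo:6
    volumes:
      - mongo-data:/data/db        # persisted on the VM's managed disk
  mongo-express:
    image: mongo-express:latest
    ports:
      - "8081:8081"
    environment:
      ME_CONFIG_MONGODB_SERVER: mongodb   # reach MongoDB by container name
volumes:
  mongo-data:
```

The pain points described next (scaling, DB replication across new VMs, OS maintenance) all stem from everything being pinned to individual machines like this.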


We had some challenges with this. The first challenge was related to scaling: not scaling fast enough. As you can see, we had a couple of virtual machines and high availability, but as this service will be multi-tenant, our first challenge was to see how we could scale this up depending on usage, and that's not as easy on virtual machines as on Kubernetes. Apart from this, as we have the databases running in containers inside the virtual machines, we wanted our solution to be able to replicate the DB between the different virtual machines, and with every new virtual machine that comes up to support the new load, we wanted it to have the exact same replica of the DB that is running on the previous machines.

Another problem that we had was maintenance: everything that had to do with the virtual machines, whether operating system updates, deployments, or, like I said, high availability. All these things had a negative effect on our performance testing and our CI environment. So we decided to give it a shot and see how we could replicate all of this on Kubernetes. We already had several Kubernetes clusters running, and our development and production environments were running several of our applications, so we wanted to give this a go. We had not run MongoDB, or stored databases at all, on Kubernetes before this; we have our own applications running, but the backend has always been SaaS on Azure, for example. So this was a bit of a challenge for us, and I will tell you a bit about how we went about the move from the virtual machines to Kubernetes itself.


Actually, we already had the image of the application in a container, so moving that to Kubernetes was an easy task. The images were already being produced in our own pipelines with our own Dockerfiles and pushed to ACR (Azure Container Registry), so deploying those images, which we already had in our ACR, to the Kubernetes clusters was not a big deal for us. That left the database, MongoDB, behind on a virtual machine. The scaling we really wanted was for our frontend, for our main application, because that's what is going to scale up; the database is going to stay more or less the same size while the other pods keep scaling depending on the load on them. But we didn't want to leave our MongoDB and Mongo Express inside the virtual machine; we wanted to move everything to Kubernetes as part of solving the scaling problem that we had on the virtual machine for the application. Here we used a Helm chart to deploy Mongo Express, and we also used a Helm chart to deploy MongoDB. The traffic, as you can see, goes from the browser to the Mongo Express, so our development team is able to configure the database, create tables, and make any changes they need in MongoDB. MongoDB itself, obviously, is not exposed. The Mongo Express has its own ingress, and inside the Mongo Express deployment we have the Secrets and the ConfigMap to be able to connect to MongoDB itself, which runs as an internal service. Like I said, we used the Bitnami Helm chart for MongoDB and installed it fairly easily. And then for Mongo Express itself, we used this repo to install the same release of Mongo Express into the Kubernetes cluster.


So our workflow, pretty much from stage one, was to do all tests in our CI environments with virtual machines, etc. Then, when we wanted to think about scaling, we moved to Kubernetes. And then we automated the entire process of deploying our application, MongoDB, and Mongo Express, all through our Azure DevOps pipelines. We will see in a bit how we do this.
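As a rough sketch of what such an Azure DevOps pipeline stage can look like (the actual pipeline is not shown in the talk, so the task names, file paths, and variables below are assumptions, not Kurago's configuration):

```yaml
# Hypothetical Azure Pipelines deployment stage: replace #{...}# tokens in
# the manifests with pipeline variables, then apply them to the cluster.
stages:
  - stage: Deploy
    jobs:
      - job: DeployApp
        steps:
          # Fill in placeholders such as #{imageTag}# and #{replicaCount}#
          - task: replacetokens@5
            inputs:
              targetFiles: 'manifests/*.yaml'   # illustrative path
              tokenPattern: 'custom'
              tokenPrefix: '#{'
              tokenSuffix: '}#'
          # Apply the rendered manifests to the AKS cluster
          - task: Kubernetes@1
            inputs:
              connectionType: 'azureResourceManager'
              command: 'apply'
              arguments: '-f manifests/'
```

The token-replacement step mirrors what the speaker describes later: values like the image tag and database credentials are never hard-coded in the templates and are substituted at deployment time.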


Let’s switch to the technical part. I split this demo into four parts; in the first two I want to talk briefly about our application, and then about MongoDB and Mongo Express. Our application, which is part of this entire project, is split into two parts. The first part creates the image itself and pushes it to our ACR, and once the images reside there, we are able to deploy them with our templates to the Kubernetes cluster. As you can see, this is a basic Dockerfile that we use to build our image. This entire process is automated in our pipeline: as soon as a pull request happens, the pipeline runs and, if it's successful, the image is pushed to the ACR. The second part is the actual deployment of the application. As you can see, we created four files that we use for our deployment. The deployment one is what we use to deploy; it first pulls the image that we created with the Dockerfile from our ACR. And then, as you can see here, there are values that are not specified: the image tag, the replica number, sensitive values like the database URL, database name, etc. None of this is hard-coded into our templates; these get replaced at the moment of deployment. Within our deployment pipeline, the first step is always to look for the variables defined between these two symbols here and do the proper replacement, and once those files have been filled in correctly, the deployment happens directly from that pipeline. So this is the deployment: the pod replicas, the metrics port to be able to scrape metrics into Prometheus and Grafana, the resources that we limit, both limits and requests, which we can easily change later with variables, and also the health probe that we have for this application.
Any new deployment that happens, any new pods that come up, will have this /health endpoint checked before the deployment finishes and the old pods are removed. Lastly, we have two different node pools in CI, and more than one in production, for example; some of them are Windows underneath, and this application runs in Windows-based containers. So we want to make sure that any deployment goes directly to the Windows node pool, and we specify the nodeSelector part here for that. The Ingress is pretty, pretty basic, as you can see here: the URL, then the annotation for cert-manager to be able to grab the proper certificate for this ingress, and the namespace where we deploy this too. And then a Service that connects to the deployment and exposes the correct port to be able to reach this application.
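Since the actual templates are only shown on the slides, here is a hedged sketch of what such a deployment manifest with #{...}# tokens can look like; the names, ports, and paths are illustrative assumptions, not the real files:

```yaml
# Illustrative deployment template; #{...}# tokens are replaced by the
# pipeline at deploy time. Image name, port, and probe path are assumptions.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: camdata-app
  namespace: "#{namespace}#"
spec:
  replicas: #{replicaCount}#
  selector:
    matchLabels:
      app: camdata-app
  template:
    metadata:
      labels:
        app: camdata-app
    spec:
      nodeSelector:
        kubernetes.io/os: windows   # pin pods to the Windows node pool
      containers:
        - name: camdata-app
          image: "myregistry.azurecr.io/camdata-app:#{imageTag}#"
          ports:
            - containerPort: 80
          env:
            - name: DATABASE_URL
              value: "#{databaseUrl}#"   # sensitive, injected by the pipeline
          resources:
            requests: { cpu: 250m, memory: 512Mi }
            limits: { cpu: "1", memory: 1Gi }
          readinessProbe:
            httpGet:
              path: /health   # checked before old pods are removed
              port: 80
```

The nodeSelector line is the piece that keeps this Windows-based workload off the Linux pool, as described above.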


So, like I said, this is how we build our application and deploy it to Kubernetes with these templates. Then, for the more interesting part, MongoDB: we use, like I mentioned, the Helm chart, and there are some things that we change from the default values; I'm going to go through them a bit with you guys. The first thing we change is the architecture: we use the standalone architecture for our CI environment. Then the authentication part: whatever root user and root password the MongoDB needs at the moment of creation, we specify here, and these two values also get replaced at the moment of deployment, as do the username, password, and default database that get created at that moment, because we don't want to use the root user and password for operations from our application, so we generate a separate username, password, and database for the application. Another interesting part, if we scroll down a bit more, is the service part. Obviously, this gets deployed to a Linux node pool, so here we specify with the nodeSelector that we want the Linux node pool for this deployment. This part is only needed if you have several node pools, Windows and Linux, and you want to make sure this always goes to the right one; if you only have Linux, you don't have to worry about setting the nodeSelector. And if we scroll down a bit more, we specify the resources, the limits and the requests, for this deployment; usually we adjust these based on what we see in Grafana from the metrics that we get. The persistence part is set to true by default within this template. And then the interesting part is the service that we configure to be able to connect to this MongoDB. We use a ClusterIP, so we can access this MongoDB only internally, from inside the cluster; we don't want to have it exposed externally, especially since our application runs within the same cluster.
So our application that needs to access this MongoDB will access it internally, from within the cluster, and store its data there. The same goes for Mongo Express: it connects internally to MongoDB and then has its own ingress so developers can make changes to the database itself. So here you specify the type, which is ClusterIP, and the port that will be open, and this is how we connect to the database. External access is disabled, so we're not deploying any load balancer or public IP for this database. And then, like I said, persistence is set to true within this template; there we go, persistence enabled: true. We move on to the Mongo Express part. Here we can see some interesting stuff as well: it uses the official Mongo Express image, and, scrolling down a bit more, the service type is also ClusterIP, and this is the port of the service for Mongo Express. We set the ingress to enabled: true, because we want our own UI, through its ingress, to be able to access this Mongo Express. We limit the CPU and memory, both limits and requests, and the same thing with the nodeSelector as earlier: this is a Linux container, so we want to make sure it runs on the Linux node pool. And here is the interesting part: it needs to connect to the Mongo server, which is running internally on this cluster, and instead of specifying the IP of the MongoDB service, we use its name. Obviously, when a service gets deployed, it gets assigned its own private IP, but imagine the case where we have to remove the service, remove the application, or redeploy: the service IP is bound to change sometimes. So we wanted to use the internal DNS name of the service instead, which will not change even if the IP does. We go for the service's cluster-local name, of the form mongodb.<namespace>.svc.cluster.local, and this is how Mongo Express is able to find MongoDB internally, on this host and on this port.
It also needs the MongoDB username and password to be able to connect. So Mongo Express finds the server with this internal DNS entry, and then, to be able to authenticate and read or write data, it obviously needs the credentials, which are also specified here and get replaced at the moment of deployment.
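Putting the two charts' overrides side by side as values files may help. The MongoDB keys follow the Bitnami chart's documented values; the Mongo Express keys vary between community charts, so treat everything below, and the concrete names and sizes, as assumptions rather than Kurago's actual files:

```yaml
# mongodb-values.yaml: illustrative override for the Bitnami MongoDB chart.
# Install with something like:
#   helm repo add bitnami https://charts.bitnami.com/bitnami
#   helm install mongodb bitnami/mongodb -f mongodb-values.yaml
architecture: standalone          # single instance is enough for CI
auth:
  rootPassword: "#{mongoRootPassword}#"   # replaced at deploy time
  usernames: ["#{appDbUser}#"]            # separate app user, not root
  passwords: ["#{appDbPassword}#"]
  databases: ["#{appDbName}#"]
nodeSelector:
  kubernetes.io/os: linux         # keep the pod off the Windows node pool
resources:
  requests: { cpu: 250m, memory: 512Mi }
  limits: { cpu: "1", memory: 1Gi }
persistence:
  enabled: true                   # the chart's default, kept explicit here
  size: 8Gi
service:
  type: ClusterIP                 # internal-only; no public IP
---
# mongo-express-values.yaml: illustrative values for a community chart.
mongodbServer: mongodb.camdata.svc.cluster.local  # service DNS, not pod IP
mongodbPort: 27017
mongodbAdminUsername: root
mongodbAdminPassword: "#{mongoRootPassword}#"
service:
  type: ClusterIP
  port: 8081
ingress:
  enabled: true
  hosts:
    - mongo-express.example.com   # placeholder host
nodeSelector:
  kubernetes.io/os: linux
```

The `mongodbServer` line is the key design choice from the talk: the Service's DNS name survives redeployments even though its ClusterIP may change.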

If we have a look at the service that we're talking about: this is the service itself, it's using a ClusterIP, and this is the IP that I was referring to. Mongo Express will connect to this exact service by its name, the internal DNS resolution, to be able to find it; the same goes for our application, which connects to MongoDB through the MongoDB service by name internally, instead of by the actual IP here. And then, as you can see, this has been running for around 247 days; it's been running for quite a while in our CI environment without any issues. We went through several changes in the clusters, several Kubernetes upgrades, several deployments, several recreations, and with persistent storage the data has remained intact every single time, without any issues.


This is the persistent storage that gets bound to this deployment; it will come up in a moment. In the meantime, this is Mongo Express itself, this is the UI here. It gives our developers a portal so that, if they want to check anything in the database or make any changes, they can; access is protected with login credentials. And as you can see here, the persistent storage bound to this very deployment is 8Gi. This is a value that can be specified in the Helm template itself; for CI it's more than enough for us, but it will depend on your needs.


For the final part, our takeaway from this experience is that Helm makes everything very easy, especially for stuff like MongoDB or Mongo Express: we don't have to write our own templates, and with the few changes to the values that you have seen, we can deploy a MongoDB with persistent storage very easily, adjusted to our needs. In this entire endeavor, you need the whole Kubernetes ecosystem to make everything easier, because you're going to use this application and this database together with your other stuff in Kubernetes: you're going to need your cluster, your ingress controllers, your cert-manager, and Prometheus and Grafana for the metrics. So, obviously, starting from scratch it's a big project to create a Kubernetes cluster and then deploy an application, but having that already existing, which we did, of course made our life easier; within a day we were able to create these resources and add them to our already existing cluster. And like I said, we need more tools for this application to be monitored correctly and for everything to work correctly, not just this, but that's how things go. The last takeaway is that it's more cost-effective: compared to having virtual machines running, in our case, having these pods in Kubernetes is far more cost-effective for us. We don't have to worry about having several machines for high availability; we have to worry about far fewer things. And it's also much more scalable: easily changing our value for the replica count, or having a Horizontal Pod Autoscaler, which we can control since we get metrics, so that we can scale our application up and down. This doesn't really apply that much to our database, that is, or to Mongo Express, but for the application it definitely applies.
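The autoscaling takeaway above can be sketched as a HorizontalPodAutoscaler driven by the same metrics the speaker mentions; the target name and thresholds are illustrative, not from the talk:

```yaml
# Hypothetical HPA for the application (not the database): scale the
# deployment between 2 and 10 replicas based on average CPU utilization.
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: camdata-app
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: camdata-app      # assumed deployment name
  minReplicas: 2           # keep some headroom for availability
  maxReplicas: 10
  metrics:
    - type: Resource
      resource:
        name: cpu
        target:
          type: Utilization
          averageUtilization: 70   # scale up when average CPU passes 70%
```

As noted in the talk, this kind of autoscaling fits the stateless frontend pods, not the standalone MongoDB, which stays at a fixed size.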


And for the last bit, I want to thank everyone in the community and have a great day.