Kubernetes has the potential to change the game by enabling users to run complex databases anywhere they want. Its portability coupled with a SaaS user experience makes it a powerful tool.
Bart Farrell 00:00
Hello everyone, we are very excited to be here on Data on Kubernetes livestream #130! We’ve got many things going on right now since May is a hectic month for us. Our speaker for today will tell you about the stuff he’s going to be doing in May because he’s also very active. Also, you can now sign up for DoK day. You can very quickly do so with just a couple of clicks. We were hoping you could get involved in a challenge we launched. As you know, one of the core symbols of the Data on Kubernetes community is the horn that we use to announce meetups. You may have noticed that the promo for today’s meetup was without a horn. The DoK horn has gone missing, and we need all of your help to find it. We’ve given clear instructions about what to do on Twitter. I will drop the link here so you can see exactly what’s happening.
Also, as I said, if you haven’t signed up for DoK day, go for it! It’s free; you just have to add it to your KubeCon registration. We’ve got over 30 talks and panels. Further, the gentleman I’m with now happens to be one of the people who will give a talk at KubeCon. His name is Robert Hodges, the CEO of Altinity. Robert, welcome back! How are you doing?
Robert Hodges 02:02
I’m doing great, thank you for the invite.
Bart Farrell 02:07
Can you tell us quickly what you will be speaking about on DoK day?
Robert Hodges 02:12
I had a couple of topics. I believe I’m talking about replication on Kubernetes and moving data at speed. It’s an important topic, assessing how you move things around. I’ll be taking 10 minutes on it. Working with hundreds of data warehouse users, we have found that Kubernetes is turning out to be the place to build data pipelines because you can spin things up quickly. I’ll discuss this further in my talk. You can also drop in various pieces of the data pipeline and have them easily connect to each other through the wonders of DNS inside your Kubernetes cluster. It’s a very powerful model and one that we’re interested in. We’ll be doing several talks about it in the near future.
Bart Farrell 03:08
For today’s talk, though, what will we be learning?
Robert Hodges 03:14
We will be talking about portability on Kubernetes and how it enables a new management model. Along the way, I will shamelessly describe what we’re doing because that led to an understanding of this model. However, the real point of this talk is that Kubernetes opens up some very interesting new doors that allow people to manage data in a distributed, decentralized way. That’s what I’ll be digging into.
Bart Farrell 03:41
As usual, folks, get your questions in the chat. Robert is very knowledgeable and won’t mind me interrupting him. I won’t do it mid-sentence. But this is a solid title: leveraging the goodness of Kubernetes, getting on the inside, and making it happen. Robert, take it away.
Robert Hodges 04:07
I’ll be talking about this Kubernetes portability model and how we are using it to open up what is becoming apparent as a new management model. A little bit about me, I’m a database geek. I’ve been working on databases for more than 30 years. I put that down because if I said what it is, about 39 years, it makes me sound old. But I’ve been doing it for a long time. I’ve worked with about 20 different types of databases and a number of startups. The last one I worked for, we sold to VMware. My day job is the CEO of Altinity. I’m not describing my own experience but the experience of our engineering team. Once again, we are database geeks. We have centuries of experience with databases and applications, particularly strong in analytic databases. It’s good because what we do as a business is support ClickHouse. Further, we have a cloud platform for it called Altinity.Cloud. We’re also the authors of the Kubernetes Operator for ClickHouse. These are how we’ve gained the experience that enabled us to give this talk.
Let me dive in and tell you where this story started. It didn’t start well, and this part is personal. Back in 2018, I was applying for this job as CEO. At the time, I was working for VMware. I went and had dinner in Frascati in San Francisco with some guys on the board, namely Peter Zaitsev and Vadim Tkachenko. If you’re involved in MySQL or Postgres at any level in the community, you know who they are. They’re world-famous; they wrote a book on MySQL performance. We’re sitting at the table highlighted here and talking about what they wanted to do with this little company they had helped to start. At some point, Peter said, “We also have an idea that there’s a real opportunity here to run databases on Kubernetes.” I thought this was a terrible idea, and it’s a common reaction with database people. We ended up having almost a knockdown argument about whether it was possible to run large databases or databases of any complexity on Kubernetes. This was not quite as dumb as it sounds because I was working on Kubernetes at VMware. We had a lot of problems with it. Back in 2018, storage was not easy to manage since a lot of it was local. Hence, it was a difficult environment to work in.
A few weeks before this conversation, I had managed to destroy the Helm installation on a production system by issuing a command that affected the services running in the kube-system namespace, which I could not see. This was my first taste of the complexities of Kubernetes security. It was not impressive. I wiped out Helm, got calls from people, and it was an overall bad experience. Coming in, I had some reasons to be doubtful. We never really settled the argument there. I got the job not because I was a Kubernetes expert but because I offered to work for free for six months. This allowed them to overlook some of my other faults, but it was the beginning of a journey with Kubernetes, building the infrastructure that would allow us to manage data there. Eventually, it turned out to be a great experience.
There were three parts to it. The first was getting Kubernetes to be an effective platform to host ClickHouse. One of the things I should also say is that in this conversation with the board members, I didn’t know about operators. The first thing I learned about was how operators could help manage databases. ClickHouse is an excellent example of an application on Kubernetes that needs an operator. If you’re not familiar with it, ClickHouse, like many databases, is a large distributed application. It allows you to shard data and replicate data within shards. It also connects to ZooKeeper, and there’s a new service called ClickHouse Keeper. The point is you have a separate service that helps all the replicas keep track of what information they need to share. It’s complicated. You can see the basic topology here: spread across availability zones, replication within the shards, and communications between shards when we’re doing queries and need to fetch data. If you set this up on Kubernetes from scratch, it is a nightmare. The framework presented on the screen gives you a partial illustration of the complexity that you’re facing. One of the things that troubled me about Kubernetes and Helm is that Helm is just not up to doing this level of management; not only do you have to lay these things out, but there’s also a complex infrastructure, and you have to deal with changes. Say, for example, upgrades, where you have to carefully mutate the state of what’s going on in Kubernetes. Kubernetes does have ways of dealing with databases, but StatefulSets are not up to dealing with something as complicated as a data warehouse, where things may run at different versions and pods may have different definitions. As a result, this becomes a complicated problem. However, it turns out that operators solve this very well, and there are a couple of great things about them.
One is that they reduce the complexity: what you saw on that previous screen would be StatefulSet definitions, PVCs, config maps, and all these things you would have to create individually. With an operator, you can reduce this to a single file that’s relatively simple. You can go ahead and apply that.
Then, what happens is that the ClickHouse custom resource definition registers a new resource type in Kubernetes; when a resource of that type shows up, Kubernetes intercepts it and hands it off to the operator, which runs as a container and decides what to do with it. It looks at what you asked for and inspects the resources already allocated in Kubernetes. Then it adjusts reality so that what Kubernetes implements matches the resource definition in your custom resource. It’s a long way of saying that you get a database set up, and not only is it a setup, but it’s what you might call a best-practice deployment. Anything that we think makes it work better, the operator will take care of configuring properly. This is a huge improvement over setting things up directly using deployments and other low-level resources.
I’d like to do a quick example of the CRDs. I won’t belabor it, but this runs to three pages; given the system’s complexity, it’s relatively simple. For example, we have things like shards and replicas on the first page (A ClickHouse CRD, Part 1: Cluster topology). Rather than having to specify the topology in a complicated way, we can just say, let there be two shards and two replicas, and we can tell it where the ZooKeeper is, which is where consensus is held. We can refer to templates for things like pods and volume claims. We can expand on this, as templates are powerful. For instance, the pod template I’m using says what container we want for ClickHouse; the image name is one of our builds of ClickHouse (A ClickHouse CRD, Part 2: Pod definition). Then there is a very important set of properties setting anti-affinity. It enables us to force the distribution of the pods across nodes. Now, if you’re a Kubernetes expert, these are the types of things you can do yourself, but it’s just better to have this simplified syntax.
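To make the shape of this concrete, here is a minimal sketch of what such a custom resource might look like, following the Altinity ClickHouseInstallation resource type. The cluster name, ZooKeeper host, and template names are illustrative, and the exact fields may differ between operator versions:

```yaml
apiVersion: "clickhouse.altinity.com/v1"
kind: "ClickHouseInstallation"
metadata:
  name: "demo"
spec:
  configuration:
    zookeeper:
      nodes:
        # Where consensus is held (illustrative hostname).
        - host: zookeeper.zoo-ns.svc.cluster.local
    clusters:
      - name: "demo"
        layout:
          # Topology in two lines instead of many low-level resources.
          shardsCount: 2
          replicasCount: 2
        templates:
          podTemplate: clickhouse-pod
          volumeClaimTemplate: data-volume
```

Applied with `kubectl apply -f`, the operator expands this into the StatefulSets, services, and config maps described above.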
Same thing for storage (A ClickHouse CRD, Part 3: Storage): we use a volume claim template and do some interesting things like setting the reclaim policy to retain, so that if our cluster gets deleted by accident, our storage doesn’t vaporize. Then we say how much storage we want. This is not a simple document. However, compared to the overall complexity of the allocated resources, it is very simple. Moreover, what’s interesting about it is not only that it’s relatively simple, but that it will also run anywhere. This is where we see the first example of the power of this portability. Typically, when I’m testing, I run it on minikube, which is great. It’s wonderful for development, as it runs brilliantly on Linux. I can test this out and set up the cluster in minikube. But once I’m satisfied with it, I can turn around, take the same resource definition, and load it on a cluster in Amazon EKS; it will run there as well. It will allocate a ClickHouse cluster for me, which I can further turn around and run in Google Kubernetes Engine. In both cases, I get a cluster that comes up with the same topology and properties as in minikube.
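A hedged sketch of what the storage part of the resource might look like, again following the Altinity operator's template conventions (the template name and size are illustrative):

```yaml
  templates:
    volumeClaimTemplates:
      - name: data-volume
        # Keep the PVCs (and the data) even if the ClickHouse
        # cluster object is deleted by accident.
        reclaimPolicy: Retain
        spec:
          accessModes:
            - ReadWriteOnce
          resources:
            requests:
              storage: 100Gi
```

Because no storage class is named here, Kubernetes falls back to the default storage class, which is what makes the same definition portable across minikube, EKS, and GKE.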
That is a very powerful feature because, as a developer, I can test things out and quickly iterate, and when I’m done, I can push the same definition to big Kubernetes running in the cloud. Now, there have been other portability solutions, like Java, which was very popular back in the day. There are certain things that are portable and some that are not. The things that tend to map very well are the cluster topology: what are the pieces, how many replicas and shards, configuration map values, pod definitions, availability zones, and storage definitions. What’s cooler about the resource definition is that it will allocate persistent storage using the default storage class, which every Kubernetes cluster has. Hence, I can get a cluster to come up with persistent storage, and it will just work. I can tweak it beyond that to be more specific about the quality of service, but basically, all these things are portable.
Now, what’s not portable? As you move across clusters, you quickly realize that services, which are the front door to your applications and enable connections to come in (i.e., get load balanced across pods), are not portable. Mapping to external network resources like Amazon Elastic Load Balancing, DNS, and certificates: Kubernetes doesn’t know about that. Then topics like security, in general, are not covered, as there are various ways that you can protect ports. I’d like to give a concrete example of this because when we’re setting up these clusters, one of the most difficult things to deal with is trying to figure out what your service definition looks like. Here’s an example of the annotations we provide that will allocate the appropriate kind of load balancer in the different clouds where this service might run. As you can see, all of these clouds, like Amazon, Azure, OpenStack, etc., have different annotation types. They are not commensurate. For example, if I run this on Amazon, I will get an NLB; a network load balancer is going to be at my front door. Moreover, it should use an internal VPC address and shouldn’t appear publicly on the internet. How do I know? Well, the fact is that I have to try it and find out, because there are enough differences between cloud implementations that, without trying it, I don’t know if these properties are going to work. This is a very important place where Kubernetes is not portable. We need to do extra work to move between different environments. Generally speaking, the CRD, other than the networking and the things I mentioned, works well across a wide variety of environments. That portability was important to us because it meant that this Kubernetes operator we developed could now be a building block. It could be even more than a building block. It could be the foundation of building a cloud service to run ClickHouse. We call this service Altinity.Cloud, and I would like to tell you how it works.
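To illustrate the cloud-specific annotation problem, here is a hedged sketch of a Service requesting an internal AWS NLB, with the rough equivalents other clouds expect shown as comments. The annotation keys follow the upstream cloud provider conventions; the service name and selector are illustrative, and the ports are the standard ClickHouse native and HTTP ports:

```yaml
apiVersion: v1
kind: Service
metadata:
  name: clickhouse-demo
  annotations:
    # AWS: ask for a Network Load Balancer on an internal VPC address.
    service.beta.kubernetes.io/aws-load-balancer-type: "nlb"
    service.beta.kubernetes.io/aws-load-balancer-internal: "true"
    # GCP equivalent:  networking.gke.io/load-balancer-type: "Internal"
    # Azure equivalent: service.beta.kubernetes.io/azure-load-balancer-internal: "true"
spec:
  type: LoadBalancer
  selector:
    app: clickhouse
  ports:
    - name: native
      port: 9000
    - name: http
      port: 8123
```

None of these annotations carry over between clouds, which is exactly why the service definition is the least portable part of the deployment.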
The basic idea is simple. Like almost all other clouds, we have a central management plane that runs inside Kubernetes. You have the Kubernetes control plane as your API, and that’s how we get stuff into it, but it runs a service called the Altinity Cloud Manager. It’s our management plane, and it knows about the different tenants and where they’re located. We manage tenants because each tenant gets what we call environments. They are places where they can bring up clusters and run in particular regions of either Amazon or Google Cloud. In each case, the environment is a Kubernetes cluster, which we spin up using GKE or EKS, depending on where we’re running. It has the operator inside it, as I’ll show you shortly. You can then go ahead and build, change, or even upgrade clusters.
Moreover, we take care of arranging the external connectivity so that you can connect to Kafka — this is where you load events. You can also connect to Amazon S3. You can build your applications. Golang is also popular for loading events; they can all talk to the data warehouse. On the other hand, you can build end-user applications, which then turn around and issue queries on the data loaded from the upstream data sources. This is all controlled by your development team, who don’t have to know any details about how this is done underneath. Instead, they go through a panel showing their clusters and enabling them to do operations on them easily.
I’m going to give a quick example of this and show you what Altinity.Cloud looks like. As shared on the screen, each cluster is represented as a card. This is our staging environment, where we put the test clusters and data that we’re using internally. One cool feature about this cloud that I love is that you can set a schedule so that if you don’t use a cluster for an hour or so, it just turns off. It means that we free up the VMs it’s using, so it costs you nothing to run. We have many active clusters, and one of them (rhodges-data-ai) I’m using for a talk at the next Databricks conference at the end of June. I’m using it to generate and load a trillion-row data set that we can run interesting queries against. As for the things we can do when I’m just playing around with it: for example, I have a single node (m5.2xlarge), and one of the things I can do at any time is resize it to something enormous. Say I want some horsepower; I will rescale it and set it to m5.8xlarge. If I go ahead and punch the button, it will deallocate the old VM, spin up a new one, and reconnect it to the block storage where my data lives. In this manner, I will have a new VM. I can also add shards and replicas, change the storage size, and the like. This is just one example of the kinds of things I can do with this.
Other things that are useful in this environment include the explore option — this is where we can look at the data. For example, let’s see what’s happening in the schema here; there are no big secrets. These are files that I’m using, which I’m working on as part of the development for this paper. I can see how big they are, and I can also go ahead and look in more detail at important properties like column compression. This is incredibly important in data warehouses. It allows me to show the level of compression that I get in different columns. To sum it up, this is great if you want to work on ClickHouse because it no longer feels like a complicated application. You don’t even have to know what we did to prepare the CRD. We took care of everything and gave you external connectivity. You can get to it, play around, and develop your applications quickly. This is similar to the cloud experience that we get practically everywhere else. Next, I will show you how it works underneath and some of the portability issues we ran into.
What’s going on under the covers? As I mentioned, we have Altinity.Cloud, the centralized management plane running in Kubernetes. It uses a VPN to connect to the control plane API. Once we spin up the cluster, which we do using Terraform, we then inject a bunch of software into it. For example, we inject the Altinity operator that I described to you, and we set up monitoring inside using Prometheus and Grafana. When people start a cluster, it will spin up the cluster and a ZooKeeper to keep it company and track what needs to be replicated. Then, other services also get injected so that this whole environment can function properly. One interesting thing that I should point out (and this part is important to our business and to a lot of people that run data) is that within this Altinity.Cloud, the runtime components that we inject are all open source. For example, ClickHouse is Apache-licensed, and so are Apache ZooKeeper, Grafana, and Prometheus. These are all open and can run anywhere. You can pull down your own copies of these and run them yourself. This is something that will become important later. This is 100% open source.
Now, I’d like to talk about some of the portability issues we faced and one particular problem that we were able to solve. If you look at the overall portability of this service, it’s pretty good. The containers are identical across Kubernetes clusters. This is a wonderful feature of cloud-native applications: since the containers encapsulate all their dependencies, they will run anywhere. ClickHouse is distributed as a Docker image and will run anywhere that supports Docker. As a result of this, and the fact that we’re using Kubernetes, most of the control logic is taken care of by the Kubernetes operator. We were able to port from Amazon, where we started, to GCP and get customers onto it in under two months. If you’re dealing with software, being able to port this quickly across clouds is a big thing. We measured from when we told ourselves we would do it to the time we could load the first customer. It was only two months, which is fast.
However, there are still portability headaches, very similar to the portability issues we already knew from the Altinity ClickHouse Operator. They include things like networking and ingress configuration. Considering that we’re running a service, we also have to ensure that we use cloud vendor resources efficiently (for example, network egress costs, which are infamous on Amazon), and then there is security. These are all problems that you have to deal with. It’s also interesting to observe that, to the extent these portability issues are around networking, one of the ways you can solve them is to push more services into Kubernetes itself. I want to show a specific example of this that turned out to be important to us. When we first built the cluster and started running it as a service, we used Amazon Elastic Load Balancing, specifically network load balancers. As I showed you a few slides ago, you can set this up using the operator.
However, when you create a cluster, what gets deployed is most likely a network load balancer (NLB). It forwards the traffic to your cluster service. Then the service round-robins the traffic across the pods on the individual nodes of your Kubernetes cluster. While this whole thing kind of works, there were several problems we ran into.
First, the NLB is not capable of load balancing more than one service. It’s tied to the service that created it. Here, we have the NLB pointing to the cluster service. We also have, for example, pod services that would allow us to get into individual pods, but we can’t reach them through the NLB. Another thing was that the NLB does TLS termination. We don’t allow connections that are not TLS encrypted, so when they arrive at the NLB, you get TLS termination while the traffic gets forwarded over to the cluster service. At that point, you lose the originating IP address. Our ability to do whitelisting and other things like that was severely limited. The final thing is that every time you spin up a cluster, you have to allocate an NLB, which entails two problems. One, it’s expensive, and two, it’s slow. This is one of the slowest resources to allocate. When we allocate clusters, we use auto-scaling (Karpenter) to allocate nodes in the Kubernetes cluster, and it’s quick. However, the NLB could take 20 to 40 minutes, which means you had this tremendously long period waiting for the NLB to show up. Then, you have to pay for each one individually. This was not suitable and was a problem on Amazon. It also meant that we would have similar issues arranging external connectivity every time we went to another cluster.
We managed to solve it. What we did is this: if you see something you don’t like inside Kubernetes, and you can get at it from inside Kubernetes, you can write a service that solves the problem for you. In our case, we wrote something called the Altinity Edge Proxy, a service that accepts the connections and the traffic forwarded from the NLB. It means that we have the NLB forward the traffic, without terminating it, straight into the edge proxy. Then, within the edge proxy, we implement things like SNI routing. Server Name Indication allows clients to specify which specific node inside the ClickHouse cluster they want to go to. It’s great because it allows us to go to individual pods, which is important for operations. We also do TCP/IP proxying so that when connections arrive at the service, we know the originating IP address. Then, we can do things like IP whitelisting globally. Best of all, there’s just one NLB for the Kubernetes cluster. When we spin up new clusters, we do a quick configuration change on the edge proxies, which are very lightweight and can instantly accept configuration changes. Now, we have a very fast, efficient, and portable solution that gives us the gateway for external network connectivity coming into our clusters. This is an example of the fact that you can run services inside Kubernetes itself to make it more portable.
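The talk doesn't show the Edge Proxy internals, but the SNI-routing idea it describes can be sketched with a generic proxy. Here is a hedged Envoy-style configuration fragment that peeks at the TLS Server Name Indication and routes the raw TCP stream to a specific pod without terminating TLS (the hostnames, cluster names, and port are illustrative; 9440 is the conventional ClickHouse secure native port):

```yaml
static_resources:
  listeners:
    - name: tls_passthrough
      address:
        socket_address: { address: 0.0.0.0, port_value: 9440 }
      listener_filters:
        # Inspect the TLS ClientHello to read the SNI; TLS is NOT terminated.
        - name: envoy.filters.listener.tls_inspector
      filter_chains:
        # Route connections for this server name to one specific pod.
        - filter_chain_match:
            server_names: ["chi-demo-0-0.example.com"]
          filters:
            - name: envoy.filters.network.tcp_proxy
              typed_config:
                "@type": type.googleapis.com/envoy.extensions.filters.network.tcp_proxy.v3.TcpProxy
                stat_prefix: pod_0_0
                cluster: chi-demo-0-0
```

Because routing happens on the SNI of the still-encrypted stream, the backend sees the original TLS session, and one front-door load balancer can serve many clusters.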
Running a cloud service this way means that we can run easily wherever Kubernetes runs. There’s been an important change in the Kubernetes ecosystem since we started working on this over the last few years: every public cloud worthy of the name now has a managed Kubernetes service. The table provided shows you the major ones. For example, Google was the first out of the gate with GKE, which went into production in 2015, all the way to DigitalOcean, which launched managed Kubernetes in 2019. With this portability, we can quickly set up in these different clouds. It’s a huge advantage if you’re running a cloud service because we can be multi-cloud without too much difficulty. At this point, the portability of the CRDs, plus the portability you get just because you’re running within Kubernetes (you can encapsulate problems inside Kubernetes itself, then distribute the software to solve them), gives us a great portability story. However, we’re still operating a cloud.
Now, I’d like to talk about how the things we’ve seen so far enable an entirely new method of operation. This amounts to a new model for data management. I call this “cloud experience anywhere,” but allow me to dive in first with a quick review of server computing. Server computing has bounced around interestingly. As flashed on the screen, the things below the provided timeline include the IBM mainframe, which is one of the first things I worked on when I got my first job, Scott McNealy talking about networks connected to Sun servers, all the way to the public clouds such as Amazon, Google, Azure, and others. These have all been advances that promoted centralization. The mainframe is a single shared computer.
In the case of McNealy and Sun’s work, centralization enabled the whole SaaS explosion at the end of the ’90s because you could buy these servers, scale them up quickly, and connect them over networks. Eventually came the huge centralization we’ve seen in the public clouds. At the same time, above the line, there are also things like the PDP-11, which first appeared around the 1970s, and software like VMware or Kubernetes that enabled people to run in a decentralized fashion. For example, VMware allowed corporations to build what amounted to private clouds. Around the 2010s, VMware was dominant in enterprises because it was a huge advance that allowed enterprises to scale computing, buying a number of servers and running completely different applications on top of them without caring about the servers’ boundaries.
In the current iteration of the cloud, we see a tremendous amount of cloud-native data management where the data is centralized. Products like Snowflake, Databricks, and so on are very popular. The obvious reason is that SaaS services are a great model for users. You can go to a screen, ask for a database to spin up, it appears, and you connect it with your application. You don’t have to worry about how it was done. You don’t have to worry about what happens when it needs to get upgraded or when there is a bug, because there are people behind the scenes who solve all those problems for you. This model is enormously popular. If you talk to some people, for example, at Gartner, their view is that all data will move into the cloud, where all the innovation in data management will happen.
I guess some questions you may ask are the following: Is this inevitable? Do we see the pendulum swing permanently towards the centralization of data? A famous Miller beer commercial back in 1997 talked about the same problem. They weren’t talking about managing data; they were talking about managing beer. Say, “Hey, we all want to drink beer; it’s time to drink beer that’s the size of Rhode Island.” This commercial was beautiful because it satirized what had happened to American beer-making, which had become enormously centralized, with everybody getting their beer from a small number of brewers. They were trying to combat the whole movement of microbrews. The interesting part of this is, if you think comparatively: is this also what’s going to happen with data? There’s a lot of religious argument about this. Now, one of the things that you can do is actually ask users: do you think you want your data to be fully centralized? In our latest customer survey, we had at least 200 customers, and we talk to them constantly.
Say, for example, you have a big database on Snowflake, and you have to get off it for whatever reason. What you see on the left side of the screen is the process you go through to pull data out of a cloud service. This is the short version of the plan, and I’m not making this up. In my past experience, I’ve worked with SaaS for 20 years, and now and then you have to get off of a SaaS, which is painful. In the case of Snowflake, I have to find an alternative and build a new database. I have to go through these steps through deployment. I might have to cycle on it a few times, and it will take weeks or months to do so. In a real application, it’s something that you would rarely do in less than six to nine months. This is complicated. If you need to get off the service that we’re building, it’s simple. You just disconnect. You own the data. You’ve got the code for the runtime, and it’s all open source. You disconnect and run it by yourself. This is a completely different level of experience because it means that migration off becomes something that you can do with almost zero effort. There are also some other things that come out of this. For example, if you were already running a ClickHouse data warehouse in Kubernetes, you could go the other way: as long as our operator manages it, our service can dial in, adopt it, and begin managing it. You can also do other things. As we’ve talked to customers, one of the questions goes along the lines of, “The systems being operated are used for trading, and during trading hours, we don’t want you guys touching them. Can you build a gate that would prevent any outside access to this system?” It is an interesting question. As you begin to look at questions like that, it’s clear that there’s a different management model for data that Kubernetes can enable.
Here’s a summary of the architecture that you can now build and that we are building now. In your management plane, you have very limited data. It’s mostly tools and automated procedures, with logs so you can do quick fault diagnosis and telemetry for alerting. On the other hand, in your data plane, you have user data, system tables, and monitoring. These are all things that are locked down and cannot be seen from the outside. You can build the interfaces so that customers can directly control access on the data plane side, along with things like credentials and the audit trail. Another interesting thing to have in these environments is that if you have ClickHouse users, or users for any database, you can store them in that environment. Then, if you want to run a procedure to create a new user, the management plane has to ask for the credential before it can use it in this procedure. This is no longer the centralized cloud model, because the data is no longer in the management plane. As I showed on the screen, from the point of view of a user, you can still do the same kinds of things that are important for developers, like being able to spin up a cluster or change it quickly. However, the data and the code live inside the data plane. We have had this model in mind for a long time. We’ve gotten far enough along the path that we know it’s possible and will work for data warehouses and, really, any database.
The outline of this model is that you have a single pane of glass for cloud management and can see all these environments, both the ones we own and the ones users own. You have on-demand DBA support: you can drill into the environments, even the user-owned ones, and quickly solve problems. However, users have the choice to own the data and the environment. This is another important point: many businesses focus on open source because they don’t want lock-in. A fully open source runtime gives you the assurance that if you need to move, you already have the data, and you also have code that you can run yourself. All of these migration issues, in both directions, become more tractable and much easier for businesses to handle.
This is a model that you can apply not just to data warehouses but to data in general. This decentralized management of data gives people who want it the best of the cloud experience, with automated procedures and high levels of support, combined with the ownership many users need to run their businesses.
We haven’t officially announced this, but watch this space! If you’re interested in this, or you’re doing something similar yourself, we would love to talk to you. I’m available on the DoK Slack workspace, or you can send me an email. Beyond that, we also have products, like Altinity.Cloud and the Kubernetes operator for ClickHouse; feel free to check them out. The Kubernetes operator, for example, is Apache 2.0-licensed and used in thousands of installations. Finally, we’re hiring! If this topic seems interesting to you and you like what we’re doing, check us out. We’re always looking for talented people.
Bart Farrell 53:51
Very well put together! You talked about the evolution of servers over time, and by coincidence, we have a talk tomorrow at DoK students day from a young man who’s 15 years old and going to be talking about serverless databases. Any comments on this?
Robert Hodges 54:19
My first comment is: what does he mean by serverless? There are two kinds of serverless. In the database field, I find people commonly mean that you don’t have to know what servers you’re running. This is a big advantage of cloud systems; you have an abstract notion of compute. You have virtual data warehouses. There’s something out there that is probably a collection of VMs, but you don’t have to worry about how it arrived or exists. This is something you can do in Kubernetes. It’s a matter of whether your database understands things like decoupling compute and storage. If your database understands that, then this serverless experience is something you can create in Kubernetes. However, we don’t do that, because ClickHouse, in its current form, requires some knowledge of data location (i.e., whether you can get to specific nodes to look at the data or not). An example would be joining very large datasets. This is difficult to do in ClickHouse right now because you have to know exactly where the data is located. Still, this first definition of serverless is something that Kubernetes will enable, and we’ll increasingly see it going forward.
Bart Farrell 56:11
Good to know! The other thing here, since we talk about this a lot, is where we’re at. Data on Kubernetes, in terms of stateful workloads, had a watershed moment with the arrival of StatefulSets three to four years ago (or longer), and there’s relative maturity around that; you also talked about CRDs. It’s difficult to pinpoint an exact time, but what do you expect to happen for this stuff to stop being such a wild west and become an everyday thing, where one could say, “We know the standardized practices! This is how we do it.”?
Robert Hodges 57:02
I think the biggest game-changer in this whole thing has been the emergence of operators and the ability to define custom resources. They still have issues. Here’s an example, one that has been painful for us: when you create a custom resource definition, it’s global to the entire Kubernetes cluster. That means that if we come in and want to adopt something, it’s easy to adopt ClickHouse, because presumably you’re asking us in and you want us to take over. However, what about all the other services, like Grafana and Prometheus? Let’s say they have operators as well. And what about ZooKeeper? If you’re already running one and you’re particularly attached to it, and we come in and install our operator, it’s going to create a CRD, and there’s a good chance that will wreck whatever you have there. That aside, operators have been brilliant. They have helped enormously.
I think the biggest thing I’m looking for in Kubernetes is better management of local storage. Right now, we use block storage because it allows us to resize VMs. It’s expensive and has lower performance than an NVMe SSD. What we would love, and what you need to enable this to work at scale, is the ability to move storage efficiently between locations when you want to resize something. This is a key capability that doesn’t exist inside Kubernetes. I know that people like Pure Storage are working on this. If there are storage people listening, I would love to hear from them.
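Because CRDs are cluster-scoped, one defensive measure an installer could take is checking for name collisions before applying an operator bundle. A minimal sketch of that check, assuming the installer already has the list of CRD names in the cluster (the CRD names below are real group/plural conventions used only as examples):

```python
def find_crd_collisions(bundle_crds: list[str], cluster_crds: set[str]) -> list[str]:
    """Return the CRD names our operator bundle would clobber."""
    return [crd for crd in bundle_crds if crd in cluster_crds]

# Example: the customer already runs their own ZooKeeper operator.
existing = {
    "zookeeperclusters.zookeeper.pravega.io",
    "prometheuses.monitoring.coreos.com",
}
bundle = [
    "clickhouseinstallations.clickhouse.altinity.com",
    "zookeeperclusters.zookeeper.pravega.io",  # collides with the existing CRD
]
collisions = find_crd_collisions(bundle, existing)
```

In practice the `cluster_crds` set would come from the Kubernetes API (the equivalent of `kubectl get crd`), and a non-empty collision list would abort the install rather than silently replacing someone else’s definitions.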
Bart Farrell 58:49
We’ve got storage people in the community. This is something we’d be happy to pose to them and see what their responses are.
Robert Hodges 58:56
I think it’s important to have this, because last time I looked at CSI and the other storage management abstractions, I didn’t see this capability of resizing a local disk. Maybe I’m missing it. But we, as a team, feel this is something where we’re going to have to write more application logic to make it happen. Since Kubernetes doesn’t supply it yet, it’s an example of where we’ll build a service in Kubernetes to solve the problem. But it would be great if Kubernetes had the abstraction to begin with.
Bart Farrell 59:34
I like that as something to hang on to. We’re going to have a talk specifically about CSI on DoK day and KubeCon from someone who’s quite experienced in that area. I may have asked this before, but one of our community members, shoutout to Álvaro, who’s very passionate about Postgres, frequently said that CRDs are his favorite feature in Kubernetes — would you agree or not?
Robert Hodges 1:00:03
Hands down! CRDs make it possible to manage distributed applications. If you have distributed applications with data, StatefulSets work great for simple applications (e.g., a farm of web servers). However, they are inadequate for building big systems. For example, with StatefulSets, there’s no real guarantee of which availability zones your nodes will land on. If you go look at the Kubernetes docs, it’s just mush; I cannot extract the semantics of placement from the docs, because placement is effectively random. This is an example of where you need an operator with a CRD that understands that these replicas need to go in this availability zone, and, given two other availability zones, sprinkles the rest across them and makes sure they arrive there. CRDs enable this: you can write the logic to tweak your pods and annotate them correctly. Operators and CRDs are brilliant. They’re one of the biggest innovations in the whole history of the Kubernetes project.
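The placement logic Robert describes, pin certain replicas to a preferred zone and sprinkle the rest across the remaining zones, is the kind of thing an operator computes from its CRD spec before annotating pods. A hedged sketch of that core calculation; the zone names and the shape of the inputs are illustrative assumptions:

```python
from itertools import cycle

def assign_zones(replicas: int, preferred: str, other_zones: list[str]) -> list[str]:
    """Place replica 0 in the preferred zone, then spread the remaining
    replicas round-robin across the other availability zones."""
    placement = [preferred]
    spread = cycle(other_zones)
    for _ in range(replicas - 1):
        placement.append(next(spread))
    return placement
```

An operator would then translate each entry into node affinity rules or topology annotations on the corresponding pod, which is exactly the guarantee a bare StatefulSet cannot give you.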
Bart Farrell 1:01:27
Strong answer there, no doubt. Álvaro will be happy. I’m curious to see what other people have to say too. We recently recorded a panel and will be putting it out over the next couple of weeks. Its focus was the operator paradigm. Hopefully, we’re going to expand on it, and we would like to have you there to hear your thoughts.
Robert Hodges 1:01:48
If this talk has interested you, we want to collaborate with people to make this model work. Despite the swing to complete centralization, what we hear at the ground level, talking to people building these systems, is how varied their reasons are for placing things in different locations. We’re doing the centralized model right now because it’s convenient for vendors. But if you look at what users want, they want choice. They want the ability to choose whether to own their data or their infrastructure. This is something that Kubernetes has a real shot at enabling. It was not obvious that Kubernetes was going to make this happen, but it clearly has.
Bart Farrell 1:02:54
Very well said. It was also reflected in the survey results you showed: users’ concerns about control, data security, privacy, and the like. Vendors can sometimes suffer from being a bit of an echo chamber, or from not having enough contact with users, which can be tricky, and it’s something that I think everyone needs to keep in mind. I’m glad you put it out there.
As usual, we have our wonderful artist in the background doing an artistic representation of what was being discussed.
Robert, we look forward to your talk on DoK day on the 16th. You heard it here first: we will be doing something in the Bay Area in July. We still have to pick a place and a date. Maybe we can explore this operator paradigm there.
Robert Hodges 1:04:17
If there are storage people in the Bay Area, I’m located in the Bay Area myself, and I would love to talk about this stuff. I think there are great things opening up for Kubernetes. The more stable it gets, the more we can build on top of it.
Bart Farrell 1:04:41
Likewise, Robert! Absolute pleasure to have you with us today.