Notions such as “databases like PostgreSQL cannot run on Kubernetes” echo through this industry all the time. However, Gabriele Bartolini, Vice President of Cloud Native at EDB, joined by his colleague Francesco Canovai, a System Engineer at 2ndQuadrant, proves otherwise.

In this DoKC session, Gabriele and Francesco show us the way to a healthier and longer life for the DBMS, even on Kubernetes. They share the appropriate procedures and the results obtained so far in benchmarking both the storage and the database.

 

Bart Farrell  00:02

Welcome, everyone! This is the second meetup we’re doing this week with someone from Italy. We’re joined by another Italian person who embodies the spirit of what we’re going for here as a community in two different ways: (1) he’s going to be talking about working with databases and stateful workloads on Kubernetes, and (2) he embodies our community spirit of resilience, not just in the data resilience sense but as a person, because he’s having to improvise. Gabriele was supposed to be joined by Francesco, who had an unexpected situation, but we’re hoping he’ll be with us later. Gabriele is no stranger to Data on Kubernetes, having worked with databases and data issues for quite some time, and is currently doing so at EDB.

 

Bart Farrell  01:36

PostgreSQL is a very attractive topic. Why is it so attractive, Gabriele? 

 

Gabriele Bartolini  02:05

It’s one of those things that never fades away. It’s dynamically changing: there are new technologies, and Postgres adapts all the time, thanks to its extensibility.

 

Bart Farrell  02:29

That’s a good point about being battle-tested. Postgres has been around for quite a while and has gone through iterations with different technological changes over time. Just as interesting, as a community, the meetups we have that involve Postgres generally attract a lot of attention, and yours is no exception. We had excellent feedback on LinkedIn and lots of wonderful support from the other folks at EDB. Thank you all very much. That being said, Gabriele, you can jump right into your presentation.

 

Gabriele Bartolini  03:40

Why and how should I run the database on Kubernetes? Why should I use Postgres as my database technology? Why should I benchmark my databases before production? 

I’m sure you have been asking yourself one or more of the previous questions if you use Kubernetes and are at least curious to use Postgres in a cloud-native fashion. Welcome to the 58th webinar of the Data on Kubernetes community. Tonight, I will be talking with my colleague Francesco, if he comes back in time, about why benchmarking Postgres in Kubernetes is important, and introducing a new open-source set of guidelines to help you with that process. Let me introduce myself quickly.

My name is Gabriele Bartolini, and as Bart mentioned before, I live in Italy in a city called Prato, the second-largest city in Tuscany after Florence. It’s the city that hosted the first-ever Postgres community conference in Europe back in 2007. I’ve been using Postgres for almost 20 years now, and I’ve been a community member for the last 15 years. I’m extremely proud to be one of the four founding members of PostgreSQL Europe, the association that organizes the European conference. I’ve been working with 2ndQuadrant since the start. I’ve covered several roles, including CEO of the Italian and Australian branches, head of global support, and lead of the cloud-native initiative. Since the acquisition of 2ndQuadrant, I’ve been part of EDB, where I lead the cloud-native initiative as well. I’m also one of the founding members and contributors of Barman, a popular open-source tool for backup and recovery in the Postgres space. I’m also a big fan of Lean, Agile, and DevOps disciplines and culture.

Francesco was supposed to be speaking with me today. But he had an unfortunate last-minute personal emergency, so hopefully he’ll make it in time for his part of the presentation. Otherwise, I’ll try and do my best to describe the actual work that he did with his team at EDB. Francesco started to work in my team in 2013 and is an expert in several areas of Postgres, especially business continuity, monitoring, infrastructure, and automation. Over the years, he contributed to the growth of 2ndQuadrant’s automated testing posture, from the setup of CI/CD pipelines to test writing, from unit tests to acceptance tests, smoke tests, etc. He has always been interested in container technologies and has become a very skilled engineer in Kubernetes. In his spare time, Francesco is a karate instructor.

Moving on to our agenda, I will go through the why questions that I asked at the start of the presentation, then I’ll try and present the context in which our initiatives took place. I’ll try and explain why benchmarking the storage for a database workload in Postgres is important before benchmarking the database, then introduce our open source project called cnp-bench and share some results before the end.

Let’s start with the whys, which is always good, in my opinion. The first question I asked at the start of this presentation was: why and how should I run a database on Kubernetes? One important consideration here: I don’t think I should go into much detail on the benefits of using Kubernetes. With this audience, I think we all know what Kubernetes is. I will try and describe my opinion, which might differ from yours.

One of the advantages of Kubernetes is both the culture and the ecosystem in which Kubernetes developed and flourished. At the end of the day, Kubernetes is just a tool. It might even be superseded by another tool in the future, but what is important, in my opinion, is the cloud-native aspect of Kubernetes. With cloud-native, I essentially mean three things. The first one is DevOps. A DevOps-based culture enables teams and organizations, as a team of teams, to continuously change, thereby innovating and accelerating the delivery of outcomes that produce value for the business in a safer, more efficient, and engaging way. The second is the microservice architecture that is based on containers. The third one is a way to manage and orchestrate these containers, using, for example, Kubernetes.

In a microservice architecture, the microservice should own the data it manages, and own it exclusively. These could be flat files, queues, key-value stores, or, in the case of Postgres, a relational database with both structured and unstructured data. Only the microservice can access the database, including for schema management and migrations. This would be the ideal scenario. We should also anticipate the inevitable consequences of Conway’s law by leveraging microservices and organizing our products and systems around the boundaries of each team’s cognitive load. For cognitive load, I like to use the definition by Dan North: “software that fits in our head.” Beyond that, we human beings cannot operate efficiently. As a result, design loosely coupled microservice-based systems with clear boundaries defined by their APIs.

All of the above concepts resonate with me. They are the reason why I believe that databases can logically fit well in a cloud-native environment based on Kubernetes, where everything is a workload, databases included. By owning their databases, applications can easily be tested in CI/CD pipelines designed for containers and continuous software delivery. If you want to know more about this story, please scan the QR code to read a blog article that I recently wrote.

Here we have an example of a microservice application with its own Postgres backend in a Kubernetes cluster that talks, via a REST API, to another application with its own Postgres backend in another Kubernetes cluster. In this case, each owns its database and data store using Postgres. Now that we have talked about why databases can fit in Kubernetes, as a longtime community member and contributor of Postgres, I’d like to discuss why we should be using Postgres even in Kubernetes.

Let me spend a few slides on Postgres for those who are not familiar with it. For the third year in the last four, Postgres has been voted database of the year by DB-Engines. Postgres is one of the most successful open-source projects in human history. I’ll try and recap in a single slide the decades of development of Postgres, and based on my experience and opinion, I’ll try and select the most remarkable features of Postgres. Forgive me if I’m missing your favorite ones.

For me, a quick way to describe Postgres is that it is the equivalent in the database area of what Linux represents in the operating system space. The current latest major version of Postgres is 13. It ships out of the box with: native streaming replication, both physical and logical (details in the next slide); continuous hot backup and point-in-time recovery; declarative partitioning for horizontal table partitioning, a technique to improve vertical scalability on a single instance; JSON support, which enables the multi-model hybrid database, holding both structured (relational) and unstructured data to be queried and stored in the same database using SQL; and extensibility, with extensions like PostGIS for geographical databases, parallel queries for vertical scalability, etc. In terms of architecture, Postgres natively supports the primary/standby architecture with optional and multiple replicas. The technology behind replication is very robust. It’s the evolution of the crash recovery and point-in-time recovery technologies that were first introduced in PostgreSQL 8.2 about 15 years ago through WAL shipping and warm standby, and later improved in PostgreSQL 9.0 through WAL streaming and read-only replicas with hot standby.

Further improvements include synchronous replication, which enables RPO-zero (zero data loss) clusters and backups, and cascading replication, which means replicating from a standby. The last kind of replication that PostgreSQL natively supports is logical replication. Since streaming replication has been around for more than ten years, the technology is very stable and robust, and it guarantees very good results in business continuity, measured by recovery point objective (RPO) and recovery time objective (RTO).

Having said this, we can now explore how Postgres can be installed in Kubernetes. There are two approaches. The first one is the basic approach: using the self-healing capabilities of Kubernetes by having a pod running a Postgres container with no Postgres-level replica. Replication and high availability are achieved through the storage. However, we are interested in the second case, and from now on I’ll talk exclusively about it: using an operator to manage PostgreSQL. By doing so, we can rely on PostgreSQL’s native replication. We are moving replication up from the storage level to the operand and operator level; by operand, I mean Postgres, the application and workload. This might represent an anti-pattern in Kubernetes, at least outside the Data on Kubernetes community, but I expect this community to agree on application-level replication in this case. Native Postgres replication brings us synchronous commit at a transaction level, synchronous standby servers, cascading replication, and point-in-time recovery: everything we need to ensure business continuity at the database level, with the possibility to restore at any point in time from the first available continuous backup in the catalog. An important note here: I get that we might not be objective, but our team at EDB has been writing these features in Postgres for the last 15 years and continues to refine them. Everything has also been validated by supporting our customers over the years on some of the largest databases in the world that rely on an open-source database management system like Postgres. Thus, our message here is strong: we want to bring Postgres everywhere Kubernetes is, and that’s why we need to do more. That’s our mission.
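As a concrete illustration of the transaction-level durability control mentioned above, these are the standard postgresql.conf parameters involved; a minimal sketch, where the standby names are placeholders and an operator would normally manage these settings for you:

```ini
# postgresql.conf fragment (sketch; standby names are placeholders)
# Wait for at least one of the listed standbys to confirm each commit (RPO = 0)
synchronous_standby_names = 'ANY 1 (standby_a, standby_b)'
# Durability level; can also be changed per session or per transaction
synchronous_commit = on
```

Because `synchronous_commit` can be set per transaction, an application can relax durability for low-value writes while keeping zero data loss for critical ones.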

This is a non-exhaustive list of operators that are available for Postgres. Some are open source, like Crunchy, Zalando, StackGres, KubeDB, and Kubegres. I know that one of the following talks will be by Álvaro on Postgres.

 

Gabriele Bartolini  19:17

There is also a closed-source operator from EDB. This is the only place I’ll mention it; it’s called Cloud Native PostgreSQL, and it comes with an implicit 30-day trial license that you can use with open-source PostgreSQL, where, I remind you, EDB is a major contributor in code and developers; we invest a lot in the Postgres ecosystem itself.

Our goal at EDB is to exploit all the benefits of the cloud-native world with the best database in the world: Postgres. It is important to note that to use Postgres on Kubernetes, you need a multidisciplinary team with skills in both Kubernetes and Postgres. Hence why I mentioned earlier the importance of DevOps; it’s not a coincidence. We need to master the whole system and know it well: weak spots, strong points, and limits. What is the best way to do this? The answer is benchmarking.

Francesco and I come from a city in Tuscany not far from Pisa, where Galileo Galilei was born. Hence, the scientific approach is imprinted on us; it’s part of who we are. Fundamentally, the whole team needs to continuously declare expectations, run experiments, measure the results, and compare them with the initial expectations. Over time, this approach saved us several times, no matter the size of our organization, when we were small, larger, and so on, as well as our customers. It also allowed us to win challenges that, at times, looked beyond our reach. Another vital piece of advice is to try and move our focus from the technical means (the tools, or the way we’ve always done it) to the actual goals, which might force us to go down different paths to reach them, and to always ask why we are doing certain things.

My favorite questions, which were and to some extent still are recurring, include: what is your goal in terms of recovery point objective or recovery time objective? Postgres doesn’t have RAC, but there are other ways to reach the goal. Another one: say I want infinite horizontal scalability; what are the actual needs for horizontal scalability in terms of transactions per second? I want to clarify that I’m not saying this because Postgres does not natively support query and data distribution. Suppose our benchmark results showed that the observed transactions per second don’t meet our real requirements on a single-primary system. In that case, that’s when we should look at different options, including horizontal scalability (data distribution across different servers and nodes). Remember that simplicity is key when business continuity is involved; “keep your system simple” is a fundamental rule that we’ve learned over the years.

Finally, let your decisions be driven by numbers and measurements rather than opinions, because everyone “knows” it should be done that way. Remember, every context is unique; validate your ideas and make decisions on stable ground, based on numbers. Numbers can save you in these situations. If you plan to use Postgres in any environment, always benchmark Postgres prior to production, and preferably in a controlled environment. Benchmarking is important for capacity planning and cost optimization. It is also a fundamental opportunity to generate technical credit that we might find useful later, when the Postgres database is in production and we experience performance variability and open, for example, a support incident.

I was in charge of 24/7 support at 2ndQuadrant. Say one of our customers hit a performance issue. There were two possible situations we could find ourselves in. The first, a pleasant one, where we had measured performance, predictability, and clear expectations, based on benchmarks of the storage and the database that we had done prior to production. On the other side, the unpleasant one, where no benchmarks were done. When a system is in production, it is too late to run benchmarks.

Storage performance can vary. Don’t trust the values declared by the supplier; validate them instead. Measure the database performance knowing that it depends heavily on the storage, which can often be our bottleneck. I’m going to share a story from not too long ago. A customer was complaining to us that Postgres on Kubernetes was slow. It turned out that the underlying persistent volume throughput for sequential reads was 3 MB per second. There’s no way Postgres, or anything else, can run on that.

 

Gabriele Bartolini  25:25

I want to share how everything started in our team. Back then, I was the head of global support at 2ndQuadrant. Our team had nurtured a very positive culture based on DevOps, Lean, and Agile principles and values. We had a solid mindset of exploration and experimentation with new technologies, with consistent examples of learning, unlearning, adopting, and dropping tools, and a focus on individuals, distributed leadership, and the empowerment of people. In August 2019, I was asked by the CEO to devise a plan for the company to enter the Kubernetes market for Postgres. For a few years, we had looked with lots of interest at what was happening in this space, with periodic conversations with our friends at Zalando, and we were eager to work on Kubernetes, having already adopted container technologies in all our CI/CD pipelines, where we were building packages, testing, and things like that.

I remember a few major challenges. First, we put ourselves in the shoes of a Kubernetes expert and asked ourselves the following: (1) What is Postgres missing in Kubernetes? and (2) What is Kubernetes missing? Both questions required us to become proficient in Kubernetes quickly. We were perfectly aware that although we were an authority in the Postgres ecosystem, we were nobody in Kubernetes. We targeted Certified Kubernetes Administrator (CKA) certifications and were the first Postgres company to become a Kubernetes Certified Service Provider (KCSP). We treated the whole Postgres-on-Kubernetes effort as an emergent problem. As a result, we adopted exploration mode with a fail-fast approach. As much as we wanted this to be possible, our mindset was to understand, in the shortest possible time, whether we had to abandon the project. We knew that high availability with Postgres was a problem already solved by other players. However, we wanted to target something that nobody had tried before: use dedicated physical worker nodes to run a Postgres instance, and see if and how Postgres could run on bare metal Kubernetes. If this had turned out to be impossible, we would have bailed out.

My colleague Marco and I used KubeCon North America 2019 to validate our hypothesis. We briefly participated in the CNCF storage special interest groups. I proposed that our team perform some benchmarks using local persistent volumes, which had recently been introduced in Kubernetes 1.14, running on bare metal. Francesco was the leader of that project. We had our own bare metal Kubernetes cluster in the Hetzner cloud. We ran our storage tests, and the results were, let’s say, more than encouraging. That was the answer we were seeking, and we decided to proceed. Our results on storage were published in a blog article, which became quite popular in the data storage communities in Kubernetes. We used commodity hardware, dedicated local SSD storage, and a simple setup with a hardware RAID controller, and we performed storage benchmarks directly on Linux, then inside Kubernetes, using fio. We used the OpenEBS local persistent volume, and the results in sequential reads and writes were almost identical to the bare metal ones. We then proceeded with benchmarking Postgres, and the results followed the same pattern as the storage, with behavior comparable to bare metal. In the meantime, the acquisition process of 2ndQuadrant by EDB had started, and we had to pause our activities. This was supposed to be webinar number one or two in this Data on Kubernetes community, so I’m happy to be finally here.

We then resumed the work after the acquisition was completed in October, and here we are. Let’s now look at what capabilities a database expects from the underlying storage. We all expect it to be available and scalable, to perform well, and to guarantee both consistency and durability. Traditionally, Postgres administrators associate storage availability with the hosting operating system. Scalability is achieved via tablespaces, LVM, or storage online resizing. As a result, when it comes to Postgres, we focus primarily on performance (one of the major sources of support requests), consistency, and durability.

Fortunately, when it comes to storage access patterns, the same concepts as on VMs and bare metal are still valid in Kubernetes. We just need to integrate all we know with how Kubernetes manages storage (i.e., storage classes, PVCs, etc.). There are primarily two methods of access to storage:

 

(1) over the network, either directly or indirectly, for example, an NFS volume

  • Network storage is certainly the most used pattern in Kubernetes, and there’s a labyrinth of available solutions, either outside or inside the Kubernetes cluster. Further, replication is normally at the file system level. Network storage in Kubernetes presents the same issues of throughput and latency that we experience in every other environment.

 

(2) storage that is local to the Kubernetes node where a pod is running

  • This includes DAS, the direct-attached storage that database people love. This is historically an anti-pattern in Kubernetes. We see this changing, and we hope that the shared-nothing architecture for databases becomes more common in the Kubernetes space.
  • The shared-nothing architecture brings performance predictability, even though it comes at a cost: harder scalability. However, Postgres gives us ways to address this. High availability is delegated to Postgres, where we control durability even down to the database transaction level.

 

Let’s now look at some critical areas where storage can become the bottleneck of a database management system like Postgres. As a disclaimer, this slide offers a simplified view of how Postgres interacts with storage. Let’s start with the transaction logs, which are called the Write-Ahead Log (WAL) in Postgres. The WAL is the most important asset for Postgres’ business continuity capabilities, as it forms the foundation of crash recovery, point-in-time recovery, and replication. Briefly, the WAL is a sequentially written log, normally made up of separate files of up to 16 MB each. They contain the history of changes that happened to the database. We need to make sure that this information is written before changing an actual page in the buffer; hence, we request that the change be fsynced to the disk, so that it survives a crash or a sudden loss of power that would wipe the content of any intermediate cache between the disk and the database. When we add a record to a table or modify its content, the changes are first stored in the WAL and then in the shared buffers for faster access and retrieval of data. These buffers become dirty when a page, which is normally 8 kB, is modified in memory and its content differs from the one on disk. They are cleaned, meaning that the version in memory matches the one on disk, by three events. The first one is checkpoints, which are normally issued based on a timeout and on the number of changes since the last checkpoint, or even manually. Typically, writes for buffer flushing are spread between two checkpoints to avoid spikes in write operations. The second is the background writer. The third is an actual backend process: a query that needs to write data but finds the buffers full. If this kind of spilling becomes the norm, the database is operating in anaerobic mode; it is not configured or sized properly, and we need to take action.
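To give a feel for the orders of magnitude involved, here is a small arithmetic sketch in plain shell; the 16 MiB segment and 8 KiB page sizes are the Postgres defaults mentioned above, while the 10 MiB/s WAL rate is an assumed example workload, not a figure from the talk:

```shell
# WAL arithmetic: 16 MiB segments, 8 KiB pages (Postgres defaults)
segment_mib=16
page_kib=8
pages_per_segment=$(( segment_mib * 1024 / page_kib ))
echo "pages per WAL segment: $pages_per_segment"

# At an assumed WAL write rate, how many segments rotate per minute?
wal_rate_mib_s=10
segments_per_min=$(( 60 * wal_rate_mib_s / segment_mib ))
echo "segments per minute at ${wal_rate_mib_s} MiB/s: $segments_per_min"
```

This is why a busy OLTP system churns through WAL segments quickly, and why sequential write performance on the WAL volume matters so much.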

For buffer cleaning, we need to look at random write performance. The OS cache might also play an important role here, but that’s a different topic. Whenever a page is required by a process for a query and is not present in the buffers, Postgres needs to fetch the page from the disk. This is valid not only for tables but also for indexes. In this case, random read performance is important. However, random reads are not always as fast as sequential reads. The Postgres planner is smart enough to realize, based on statistics, that if you’re fetching a good portion of a relation, it is faster to read the whole table sequentially. This is a common pattern in data warehouse workloads, and in such cases, sequential read performance becomes critical.

Based on our experience, when preparing a system for database usage, we should benchmark the storage and determine the throughput for sequential and random reads and writes. Another aspect that we need to consider, especially in cloud environments, is the number of input/output operations per second (IOPS), which might be capped in some cases. The cap eventually impacts the throughput, and it’s something we should keep in mind. Now, we move on to the next question: how do we measure the storage? Our solution is to use one of the most used tools, fio (in the past, we also used bonnie++). It is open source, highly configurable, and also supported on Kubernetes. It supports several I/O workloads, as we’ll see later.
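The IOPS cap and the throughput are directly linked at a fixed block size: throughput is simply IOPS times block size. A quick sketch with an assumed 500 IOPS cap shows why a capped disk feels slow at Postgres’ 8 KiB page size:

```shell
# Effective throughput under an IOPS cap at an 8 KiB block size
iops_cap=500
block_kib=8
kib_per_s=$(( iops_cap * block_kib ))     # 4000 KiB/s
mib_per_s=$(( kib_per_s / 1024 ))         # integer MiB/s
echo "max throughput at ${iops_cap} IOPS: ${mib_per_s} MiB/s"
```

A few MiB/s is in the same ballpark as the 3 MB/s persistent volume in the support story above, which is why checking both IOPS and bandwidth limits matters.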

Our team usually says, “slow storage typically produces slow databases,” unless the access pattern is primarily in memory. Since we’ve seen how to benchmark the storage, let’s now talk about the most common workloads in Postgres. One of them is in-memory, meaning the database fits entirely in the database buffers; it is primarily bound by CPU and memory. The most typical usage pattern is Online Transactional Processing (OLTP), which means we have many small and concurrent transactions that are a mix of inserts, updates, and reads. The third one is BI and data warehousing, which uses historical databases for reporting; this is online analytical processing, with fewer but more complex queries that require reading a good portion of the data.

Our initiative focuses primarily on OLTP workloads where the database size is larger than the available RAM. The database doesn’t fit in memory, forcing it to rely on disk operations, so we can see how the storage impacts the database. What we want to measure is transactions per second.

We selected pgbench, the default benchmarking tool in Postgres, which is distributed with Postgres itself. It simulates a TPC-B-like workload, an online transactional processing workload, and it’s highly configurable. In our initial experiments, we are using the standard queries, but we could also inject our own custom queries to simulate different workloads. In the future, we want to add application-level testing, so that we could use, for example, a connection pooler, a replica deployment, or web applications with a proxy in front that generates random web accesses.
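To make the pgbench workflow concrete, here is a minimal sketch that just prints the typical two-step invocation (initialization, then a timed TPC-B-like run); the scale factor, client count, and database name are illustrative choices, not the values used in the talk:

```shell
# Sketch: print a typical pgbench workflow (flags are standard pgbench options)
scale=1000      # scale factor; roughly 15-16 MB of data per unit, so ~15 GB here
clients=16      # concurrent client sessions
duration=3600   # one-hour run, like the tests described in this talk
echo "pgbench -i -s $scale benchdb"                  # step 1: initialize the dataset
echo "pgbench -c $clients -j 4 -T $duration benchdb" # step 2: timed benchmark run
```

Choosing a scale factor that makes the dataset larger than RAM is what forces the disk-bound behavior this initiative wants to measure.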

 

Francesco Canovai  40:25

What we did to have a standard way to test and get benchmarks on Cloud Native PostgreSQL was to develop a tool called cnp-bench. It is a collection of Helm charts, which means that it is quite easy to run and customize, and the results are easy to reproduce. We wanted to have something that could be used at the beginning of the Kubernetes journey: someone with Postgres experience who has started working on Kubernetes can easily use the Helm charts and customize them.

We decided to use the tools that Gabriele was talking about before, fio and pgbench, to monitor the IOPS and transactions per second that we could get from a certain setup. The code is available on GitHub, under the EnterpriseDB organization.

We have two charts. The first is for the fio benchmark, used to benchmark storage. This chart creates a PVC, and you can customize it and choose the storage class to get the PVC from. You can select the size of the PVC, and then the chart creates a ConfigMap containing the fio job, which is not a Kubernetes Job but a description of how fio should run the workload. Then it deploys a pod with fio running the benchmark and serving the results on a web server when it’s done. The values.yaml file is simple: the first part refers to the PVC, and then there is information about the fio job that we want to run. After the run, you get the results: bandwidth and IOPS.
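For context, the “fio job” stored in the ConfigMap is a plain fio job file. A generic sketch (not the exact job the chart ships) for 8 kB sequential reads against the mounted PVC might look like this, where `/data` stands in for the PVC mount point:

```ini
; generic fio job sketch for a database-like sequential read test
[global]
ioengine=libaio
direct=1         ; bypass the OS page cache
bs=8k            ; Postgres page size
runtime=300
time_based=1
directory=/data  ; PVC mount point (placeholder)

[seqread]
rw=read
size=1G
```

Swapping `rw=read` for `randread`, `write`, or `randwrite` covers the four access patterns discussed earlier.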

For example, one run was performed on a cluster running on my laptop, which is able to do about 135 MB per second. Similarly, we have a Helm chart for pgbench, which starts a Cloud Native PostgreSQL cluster. Once again, you can customize it in the chart, because different Postgres settings can give different performance. Then you define the pgbench job to run; this one is a Kubernetes Job. You can define the size of the database you want to create and the number of connections that pgbench will use. In the end, you can get the logs of the pgbench pod and see what has happened.

Going back to the values.yaml, the first half is dedicated to the Postgres configuration, while the second half is for pgbench. We also use node selectors to keep pgbench jobs from running on the same nodes as the Postgres pods, so they don’t affect each other. You get the logs and the number of transactions per second achieved on the system on which you ran the chart. With this, we decided to run some tests in the cloud, starting on Azure. We wanted to see how the network storage on Azure was performing, so we tested a few combinations of VMs and disks.

When you go to the public cloud, there are all sorts of complexities, since you have to decide the size of the VM. Its size defines not only the number of CPUs or the gigabytes of RAM you have; different VM families can also have different limits. For example, you can have VMs that limit IOPS on the underlying storage, and then you have the storage itself, which can again have different limits. However, some disks allow you to go over the limit for a short amount of time; this is called bursting, and it’s usually available for lower-size disks on AKS.

We used the Standard_E8d_v4 for these results: it has eight CPUs, 64 GB of RAM, and a defined amount of bandwidth, 4 Gbps. We tested it with different storage, all premium disks from Azure. The speed of a disk in Azure is tied to its size: the larger the disk, the faster it should be able to go. A smaller P10 is just 100 GB and can offer you 500 IOPS; with a larger one, you can get up to 20,000. With the data documented on the website in hand, we chose to run our tests for one hour. For fio, we chose a block size of 8 kB, because it’s the one Postgres uses. Globally, the faster disks did better. With the 8 kB block size, our operations are mostly capped by IOPS: we never reach the bandwidth that the disks have; we hit the IOPS limit first. But there are some oddities here. For example, the P10 looks like it has a very high read speed, the P30 goes way over its limit, while the P80 never reaches its limit. You have to get familiar with these behaviors; there are explanations for them, and they are documented. For example, as I said before, the small disks can burst: while on average we are way over the 500 IOPS cap, in reality, for the first half hour we are running at the burst speed of 3,500 IOPS, and then at the declared 500 IOPS. If you looked only at the average, around 2,000 IOPS, you might size your database on that, but it will usually run at 500 IOPS. That could be fine if your database is small, but not for larger ones.
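The burst arithmetic is worth spelling out: 30 minutes at the 3,500 IOPS burst rate followed by 30 minutes at the 500 IOPS base rate averages out to the roughly 2,000 IOPS observed over the one-hour run, even though the steady-state rate is only 500:

```shell
# Time-weighted average IOPS for a one-hour run with a 30-minute burst
burst_iops=3500
base_iops=500
burst_min=30
base_min=30
avg_iops=$(( (burst_iops * burst_min + base_iops * base_min) / (burst_min + base_min) ))
echo "hourly average: $avg_iops IOPS"
```

This is exactly why a one-hour benchmark can mislead you about steady-state performance on a burstable disk.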

Next, we asked: why did we get these much higher reads? The disks have a read cache, and if you run the same tests without the read cache, the results are much more in line. For the P10 with no cache, you drop from 483 MB/s down to 16. The cache is larger than the disk itself, and larger than the file we were writing, so by running the same test over and over again, the data would end up in the cache, and you would get those results.
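On AKS, the host read cache is a property of how the disk is provisioned. Here is a sketch of a StorageClass that disables it, assuming the Azure disk CSI driver (the `cachingMode` parameter and provisioner name come from that driver's documentation; adjust to your cluster's provisioner):

```yaml
# Hypothetical StorageClass for AKS that provisions premium disks
# with host caching disabled, to benchmark the raw disk.
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: premium-nocache
provisioner: disk.csi.azure.com
parameters:
  skuName: Premium_LRS   # premium SSD, as in the tests above
  cachingMode: None      # disable the host read cache
reclaimPolicy: Delete
volumeBindingMode: WaitForFirstConsumer
```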

Read cache is good for databases, so don't go disabling it. However, it's nice to see that there is an explanation for that sort of result. For example, there is no information about the cache in the disk documentation, but the VM specification says this machine has 200 GB of cache, which explains the behavior very easily. Furthermore, the same machine has a cap on disk IOPS of around 12,000. This could also explain why we can't reach the declared 20,000 IOPS that we are supposed to get from the P80 disk. This means that when you go and test what's going on with your disks, you can get different results depending on the whole environment. You need to know what you are running on, especially if you know what your requirements are for the database. On the database side, we tested with pgbench. We defined different database sizes for the different disks, because some were larger than others: a small database on a small disk, a larger database on a larger, and therefore faster, disk. Most of the speed of the disk translates into transactions per second, and getting over 1,000 transactions per second is satisfying for a medium database. The configuration of Postgres was mostly the defaults; we changed a few parameters to get better results from a machine with 64 GB of RAM. We raised shared_buffers, and we raised max_wal_size so it does not checkpoint too often. The results were good in the end.
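The tuning described here amounts to only a couple of lines of postgresql.conf. The values below are assumptions sized for a 64 GB machine, not the precise ones from the benchmark:

```
# postgresql.conf fragment -- hypothetical values for a 64 GB RAM machine
shared_buffers = 16GB      # raised from the small default
max_wal_size = 16GB        # raised so checkpoints happen less often
```

A run can then be driven with pgbench: `pgbench -i -s <scale>` to initialize a database sized to the disk, followed by a timed run such as `pgbench -T 3600 -c <clients> -j <threads>`.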

What about local storage? The idea is that it should be faster, since it sits directly on the machine. We tested the L8s VMs, a family of machines that comes with an NVMe disk. The results were way better: a tenfold increase in raw speed, and twice as many transactions per second. This gets back to the initial idea that local storage for databases is an interesting option. I'll let Gabriele draw the conclusions.

 

Gabriele Bartolini  56:35

There are three main takeaways from this talk: first, we provided a methodology for benchmarking a Postgres database in Kubernetes; second, an open-source set of tools; and third, we went through why Kubernetes is a valid option for databases and Postgres, and why benchmarking is important. You may now access the link for cnp-bench. You'll hear more from us in the coming weeks and months. We're also hiring: if you want to work with us, kindly access the URL. We need people on our team.

 

Bart Farrell  57:53

Gabriele, we can answer at least one question tonight, and then we can get to the rest in Slack.

 

Gabriele Bartolini  58:27

For the question: what considerations should we take into account when running highly available Postgres? In my opinion, a shared-nothing architecture is fundamental. It means distributing Postgres instances across different Kubernetes nodes, possibly dedicating a node to a single instance, and using storage that is dedicated to, and possibly local to, that node, as I explained before when talking about performance predictability. However, high availability alone is not enough. You need disaster recovery capabilities for business continuity, and Postgres supports continuous backup and point-in-time recovery, which are critical in an enterprise-level database context. Instances can be spread across different availability zones in the same Kubernetes cluster. At EDB, we are also working on cross-cluster replication for Postgres, which is expected later this year.
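The shared-nothing placement described here can be expressed with standard Kubernetes scheduling primitives. A sketch, assuming pods labeled `app: postgres` (the label is illustrative; the topology keys are the well-known node labels):

```yaml
# Hypothetical pod template fragment: keep Postgres instances on different
# nodes, and prefer spreading them across availability zones.
affinity:
  podAntiAffinity:
    requiredDuringSchedulingIgnoredDuringExecution:
      - labelSelector:
          matchLabels:
            app: postgres
        topologyKey: kubernetes.io/hostname   # one instance per node
    preferredDuringSchedulingIgnoredDuringExecution:
      - weight: 100
        podAffinityTerm:
          labelSelector:
            matchLabels:
              app: postgres
          topologyKey: topology.kubernetes.io/zone   # spread across zones
```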

There’s another question about the advantages of using an operator over installing the database the old-fashioned way. It depends on the operator. A container-only approach provides just self-healing capabilities, and replication and availability are all delegated to the file-system level. An operator, instead, provides a higher-level experience, which includes synchronous replication, automated failover, planned switchovers, rolling updates, scaling of read operations, and transparent management of TLS. Postgres is a complex application, and I think an operator is what we need when managing complex applications and workloads.
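For comparison, with an operator that higher-level experience collapses into a short declarative manifest. A sketch assuming EDB's Cloud Native PostgreSQL operator (the apiVersion, kind, and field names follow its public documentation; verify them against the operator version you run):

```yaml
# Hypothetical Cluster manifest for a three-instance Postgres cluster:
# the operator handles replication, failover, and rolling updates.
apiVersion: postgresql.k8s.enterprisedb.io/v1
kind: Cluster
metadata:
  name: pg-cluster
spec:
  instances: 3
  storage:
    size: 100Gi
    storageClass: premium   # use a class suited to your workload
```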

 

Bart Farrell  1:01:05

As part of the tradition, we have a wonderful artist in our community who is always present at our meetups and who provided a visual representation of our talk today. Once again, as Gabriele did, starting with "Why?" is a significant part of our community: why are we going to do this? You might run into some resistance from customers, or other difficulties or issues. However, when you understand the "why" from the beginning, you can treat many other things with a little more patience. With that being said, thank you both for spending some time with us today.