Benchmarking for PostgreSQL workloads in Kubernetes (Part 2)

A few months have passed since the last DoK webinar on benchmarking PostgreSQL workloads in a Kubernetes environment. Much has happened at EDB since then, and the team continues to refine its approach, introducing an open-source set of tools for bringing PostgreSQL to Kubernetes.

In this insightful DoKC session, Gabriele Bartolini, Vice President of Cloud Native at EDB, uses cnp-bench and cnp-sandbox to describe some of the challenges you might face when running PostgreSQL workloads, how to spot them, and what actions to take to make your databases healthier.

 

Bart Farrell  00:00

All right, we are going live. What’s up, everybody? In this live stream today, we’re doing the second part, Benchmarking PostgreSQL workloads on Kubernetes. Could you just give us a little bit of context and background about how this idea got started?

 

Gabriele Bartolini  03:12

I’d like to mention how it all started. I talked about customer engagement and the uncertainty around using PostgreSQL, or databases in general, in Kubernetes. I think the importance of benchmarking is even greater in this phase, because people need reassurance about running stateful workloads in Kubernetes, and I think the best way to become confident about running PostgreSQL in a Kubernetes environment is to try and break it. Push its limits to see how it performs: you need to benchmark and adapt the architecture to your goals and budget, learn the methodology, and try to understand how to tune the system for your organization.

 

Gabriele Bartolini  08:24

I’m delighted to be part of EDB, where I am the Vice President of Cloud Native, but I’ve been using PostgreSQL for many years now. I’ve been a member of the community for more than 15 years. Previously, I was one of the co-founders of 2ndQuadrant, where I was Head of Global Support, and I also started the Kubernetes work that EDB took over. Now, let me tell you a bit about EDB. Our mission is to bring PostgreSQL to Kubernetes.

EDB has been contributing to PostgreSQL for over 15 years. If you’re using PostgreSQL, it’s also thanks to a few companies like EDB and 2ndQuadrant, where I previously worked. In PostgreSQL 14, we had 32 contributors, including seven code committers and three core members. We are considered one of the major sponsors of the PostgreSQL project. We are also a Kubernetes Certified Service Provider (KCSP), and the first company to become a KCSP that is vertically focused on PostgreSQL. Lastly, we are a platinum founding sponsor of the Data on Kubernetes Community, and we’re happy to be in this community as we build awareness about using databases in Kubernetes.

 

Gabriele Bartolini  11:09

We’ll be going over some key takeaways from the webinar we ran with Francesco. I will also try to summarize the life of a Postgres transaction and how it affects benchmarking, so that we can then propose some recommended architectures for Postgres in Kubernetes and describe the methodology we are currently using and developing.

 

Gabriele Bartolini  12:17

This is the link to the previous webinar. On this occasion, we discussed the following agenda: (1) Why use Kubernetes? and (2) Why use PostgreSQL? I focused on the concept of cloud-native—specifically, its value since it’s more than a tool. 

We usually equate cloud-native with Kubernetes, but it’s much more than that. It’s not only patterns and architecture, for example microservices, the operator pattern, or bringing systems onto Kubernetes; it’s mostly about principles, values, and culture. I still believe that without this brain and engine, and without methodologies like DevOps (Lean, Agile, and so on) that developed over the last 15-20 years, there would be no cloud-native movement, and it wouldn’t be as important as it is now. It’s all about understanding the goals and why we are doing things in a certain way. Cloud-native and Kubernetes are expressions of this mindset. There’s also been a report driven by the Data on Kubernetes Community in which we clearly see how stateful workloads, including databases, are becoming more popular among users. The main reasons behind this are: (1) storage classes, (2) support for local persistent volumes, and (3) the operator pattern, which has attracted a lot of interest in recent years.

When we went over PostgreSQL, we recalled that it started in the ’80s as a research project created by Michael Stonebraker, one of the greatest minds in database technology, and was then restarted in the mid-’90s. We are talking about 25 years of “evolutionary innovation”, one bit at a time, which means building on top of what was built the year before. PostgreSQL has a development life cycle that lasts one year, so every year, usually in autumn, there’s a new major release. I like to describe PostgreSQL this way: what Linux is for operating systems, PostgreSQL is for databases. It was named database of the year in 2017, 2018, and 2020 by DB-Engines, and some of its main features are native streaming replication, which is one of the reasons I suggest relying on it rather than file-system-level replication for PostgreSQL; online continuous backup; declarative partitioning; parallel queries; JSON support; PostGIS support; ACID transactions; and so on. Going back to replication, PostgreSQL supports both physical and logical replication, synchronous and asynchronous modes, and cascading.

Six months ago, we talked about benchmarking the storage. Storage is the most critical component in a system where a database runs, and we explained what to look for: sequential writes, fsync, random writes, random reads, sequential reads, and where each fits. I won’t be covering storage benchmarking today; kindly refer to the previous webinar, where Francesco explains what he did in that area.

There is also cnp-bench, which was introduced back then. We then talked about benchmarking the database itself, which depends on the workload you’re aiming for: in-memory, OLTP, or OLAP. Initially, our activity focused on finding the transactions per second (TPS) metric on large OLTP workloads. In this context, large means the database cannot fit in the system’s RAM; otherwise, it’s primarily an in-memory database. In that case, we focus primarily on pgbench. We learned that we need to know the storage because we need to trust the storage. Thus, always benchmark the storage before going to production and testing the database. It’s useless to test databases if you don’t know how the storage performs, because the storage can be your bottleneck. Remember, your database will always be slow if your storage is slow. Once again, kindly look back at the previous webinar to understand how we use cnp-bench and fio.
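
As a minimal sketch of the kind of storage check being described (not the actual cnp-bench job definitions), an fio run like the following measures sequential writes with fsync and random reads on the volume that will host PGDATA and the WAL; the directory, sizes, and runtimes are placeholders to adapt to your environment:

```bash
# Hypothetical example: sequential writes with fsync after every write,
# roughly what WAL writing generates (adjust path and sizes to your setup).
fio --name=wal-seq-write \
    --directory=/var/lib/postgresql/bench \
    --rw=write --bs=16k --size=2G \
    --ioengine=libaio --direct=1 --fsync=1 \
    --numjobs=1 --time_based --runtime=120 --group_reporting

# Random 8 kB reads, roughly what index lookups on a large table generate.
fio --name=random-read \
    --directory=/var/lib/postgresql/bench \
    --rw=randread --bs=8k --size=2G \
    --ioengine=libaio --direct=1 \
    --numjobs=4 --time_based --runtime=120 --group_reporting
```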

I want to provide a quick, simplified overview of how PostgreSQL handles transactions. I also want to apologize to all the PostgreSQL experts, because PostgreSQL’s internals are more complex than this; I have simplified the workflow to clarify the concepts I want to explore today.

We’ve got the shared buffers; that’s what PostgreSQL calls them. They are the Postgres cache, and they live in shared memory because PostgreSQL is a multi-process system. Pages in Postgres are 8 kB by default, and they reside on disk when the database is shut down, in PGDATA, the directory where the data files live. Now suppose Postgres is running and a new backend appears; a backend is a process that manages a connection with a client. This backend starts a transaction and maybe wants to read a page that is on disk, so Postgres reads the page from disk, puts it in the cache, and serves it to the backend. This is where we observe, for example, random reads, or optimized sequential scans in case we are reading a large chunk of a table sequentially.

There might be a backend that needs to change a record, so we change the page’s content in the buffers, and we say that the page is now dirty. There is now a misalignment between what’s in the cache and what’s on disk, but that’s fine because Postgres writes this information in the transaction log. The transaction log is the write-ahead log (WAL), and it is organized in WAL segments: files, usually 16 MB in size, that are written sequentially and continuously synced. That’s why it’s important to measure sequential writes when we work with Postgres, and also to make sure that we are able to sync efficiently and effectively. For example, between the shared buffers and PGDATA there is also the operating system cache, and we need to make sure that fsync bypasses it.
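
For reference, the settings involved in this flow can be inspected on any running PostgreSQL instance with plain psql; this is generic, not specific to the tools discussed here, and assumes connection parameters are set in the environment:

```bash
# Inspect cache, WAL, and sync-related settings on a running instance.
psql -c "SHOW shared_buffers;"
psql -c "SHOW wal_segment_size;"   # 16MB by default, fixed at initdb time
psql -c "SHOW fsync;"              # should be on in production
psql -c "SHOW wal_sync_method;"    # how WAL is synced to disk
```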

Similarly, if we are in a virtualized environment, there’s a cache between the hypervisor and the underlying operating system. Moreover, if there’s also a cache on the storage, or a battery-backed cache that writes data to disk in case power goes down, we need to bypass that too. All of this also applies in the Kubernetes space: it was true for bare-metal and VM installations, and it’s basically the same in Kubernetes.

This activity happens asynchronously and regularly: flushing the dirty pages from the database cache to disk is called a checkpoint. Pages can also be flushed to disk on other occasions, but this is a simplified view. It’s important to understand this activity of aligning the cache with the disk, like what happens at checkpoint time; in this case, we can monitor or measure random writes. There is also an optimization to write sequentially, which we need to measure for the WAL. When a WAL file is completed, PostgreSQL marks it as ready to be recycled, but it can first be used in crash recovery. For instance, if the system goes down, that file can be used to rebuild the consistency of the database from the latest checkpoint, because all the changes are in the WAL files. The latest checkpoint is the last known moment in time where the data was flushed from the cache to disk, and by replaying what is in the WAL files, we achieve consistency.
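
A simple way to observe checkpoint behaviour while benchmarking is the pg_stat_bgwriter view; a minimal sketch, valid for the PostgreSQL versions discussed here (these columns moved to other views in later releases):

```bash
# Timed vs. requested checkpoints, and how many buffers were written
# at checkpoint time versus directly by backends.
psql -c "SELECT checkpoints_timed, checkpoints_req,
                buffers_checkpoint, buffers_backend
         FROM pg_stat_bgwriter;"
```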

Hopefully, this short overview of the life of a transaction will be useful in the section where I recommend some architectures for PostgreSQL in Kubernetes. The main idea is that it’s okay to start by benchmarking one instance at a time. That’s what we did last time: we had one PostgreSQL instance, and we measured and benchmarked its storage and then the database. It is important to measure and spot the major bottlenecks, but it’s not enough. Once we’ve done that, the second step is to move to a real-life production architecture, which must consider business continuity. Hence, we need to think about how we want to deploy Postgres in production. In business-critical scenarios, we want very high uptime for the database. We need to plan for disaster recovery, which is primarily focused on the recovery point objective (RPO): the amount of data we can afford to lose. On the other hand, high availability is primarily focused on the recovery time objective (RTO): after a failure, how long will it take to restore the service?

These are fundamental goals that you need to be aware of before planning a database system, or any system in your infrastructure. Always think about these two major goals. The good thing about Postgres is that it’s been designed with this in mind since the early 2000s, and it has two fundamental building blocks. The first is continuous backup and point-in-time recovery: you need backups, and then you need to recover, because without recovery, backups are useless. Backups in Postgres are made up of base backups, which are essentially a copy of the data files on disk, plus WAL archiving. Similarly to what happens with crash recovery, by starting from a base backup and replaying all the WAL files that have been archived in a separate location, we can reach consistency up to a point in time, up to the latest available WAL. That’s how you build a database that is resilient for disaster recovery. Moreover, always consider that disaster recovery, and especially point-in-time recovery, is important to mitigate accidental human errors (e.g., the developer forgot to put a WHERE clause, or the DBA did an UPDATE without a WHERE condition).
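
As a rough illustration of these two building blocks in plain PostgreSQL terms (an operator normally configures this for you, and the paths below are placeholders), the relevant parameters look like this:

```bash
# Hypothetical postgresql.conf fragment enabling continuous backup.
cat >> "$PGDATA/postgresql.conf" <<'EOF'
archive_mode = on
# Ship every completed WAL segment to a separate location (serialized, one file at a time).
archive_command = 'cp %p /mnt/wal-archive/%f'
EOF

# At recovery time, a restored server fetches WAL back and can stop at a point in time:
#   restore_command      = 'cp /mnt/wal-archive/%f %p'
#   recovery_target_time = '2021-12-01 12:00:00+00'
```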

The other feature is streaming replication, based on the write-ahead log. So you see how important the WAL is for Postgres. When we go to Kubernetes, because these are fundamental building blocks, kind of primitives, it becomes essential to have Postgres managed by an operator written by people who know Postgres and how these internals work.

Building on what I just explained, we’ve got this recycled WAL segment, and we can add something in between when the WAL file is completed. We can set up Postgres to archive these WAL files to a separate location, on a remote system, which in my opinion is the safest approach. What happens is that Postgres calls the archive command, a configuration option in Postgres, for every WAL file that needs to be archived. This is a serialized operation, one file at a time. When the procedure completes successfully, the WAL file can be recycled. If the archive command fails, Postgres will keep retrying to archive that WAL file.

The same WAL information is also used by the so-called WAL senders: processes that serve streaming replication clients over the streaming replication protocol. Streaming replication clients are typically standby servers. So we have the replicas, and the archive command, which usually ships WAL to the so-called WAL archive. Both of these outlets can become bottlenecks in your system, so we need to consider these two flows. Before, we talked primarily about sequential writes for the storage, plus memory, CPU, and so on; but in a business continuity scenario, we also need to consider the replicas and the WAL archive.

Here comes my recommended architecture in Kubernetes. We’ve got one Kubernetes cluster, or you could also think of a namespace. We’ve got three nodes for Postgres; I suggest having dedicated nodes with dedicated storage, preferably local. You could also spread these Postgres nodes across three different availability zones, if available. We’ve got the primary on the left, which is continuously archiving WAL files to the remote location known as the WAL archive. At the same time, it streams to two standbys. Both standbys are part of the synchronous_standby_names option in Postgres, so they are part of the quorum. My suggestion is to have at least one of them serve as a synchronous standby; then, as a fallback mechanism, the standbys, in case they fall out of sync or there’s a problem with the streaming replication connection, can use the restore command to fetch WAL files directly from the WAL archive. WAL archiving, streaming replication, and the restore command are all possible sources of bottlenecks.
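
In plain PostgreSQL terms, this architecture maps roughly to the parameters below; treat it only as an illustration, since the operator sets this up declaratively, and the standby names and paths are placeholders:

```bash
# On the primary: quorum-based synchronous replication with two standbys.
# The names must match each standby's application_name in primary_conninfo.
cat >> "$PGDATA/postgresql.conf" <<'EOF'
synchronous_standby_names = 'ANY 1 (standby1, standby2)'
EOF

# On each standby, streaming replication is the main path and the WAL archive
# is the fallback if streaming falls behind:
#   primary_conninfo = 'host=primary user=streaming_replica application_name=standby1'
#   restore_command  = 'cp /mnt/wal-archive/%f %p'
```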

Once again, WAL writing is essentially tied to the local storage, specifically the throughput of sequential writes. If there is a bottleneck there, you have a slow database. WAL archiving can also have bottlenecks because, as I said before, in Postgres this is a serialized process (only one file at a time). If there are problems with the network, it can slow everything down. Performance is also affected by the remote storage and by whether we are compressing the WAL files. If there’s a bottleneck, Postgres will keep retrying to archive the same WAL files, and WAL files can pile up in pg_wal, the directory where they are located. That directory could be in the same volume as your data files or in a separate one, and if it fills up, it can even cause Postgres to halt.

Streaming replication, similarly, is affected by the network and by the performance of the standbys’ storage. Depending on the bottlenecks and the issues, it matters whether we have a high value for wal_keep_segments or wal_keep_size, the amount of WAL that PostgreSQL keeps locally on the primary. Beyond those thresholds, older WAL files are recycled, so they disappear; if a standby still needs one of them, it’s not available anymore, and the standby falls out of sync. To prevent this from happening, Postgres introduced replication slots: the primary (or, more generally, the source) keeps track of the WAL location needed by each standby and retains the WAL files each standby still needs. As you can imagine, if there’s a problem on a standby, WAL files keep accumulating on the primary, and we again have the risk of Postgres halting. Synchronous replication can be used in this case to slow down the primary: the primary needs to wait for the write to happen on a standby before the commit is returned to the application, which makes sure the transaction is written on another node. That’s also why we recommend against an architecture with a single synchronous standby, that is, just two nodes where the standby is synchronous: if there’s a problem on that one synchronous standby, the primary stops writing.
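
For reference, this is what creating and monitoring a physical replication slot looks like in plain SQL (the slot name is a placeholder, and an operator would normally manage this for you):

```bash
# On the primary: create a physical replication slot so WAL needed by the
# standby is retained (beware: a stalled standby makes WAL accumulate).
psql -c "SELECT pg_create_physical_replication_slot('standby1_slot');"

# On the standby, point to it in its configuration:
#   primary_slot_name = 'standby1_slot'

# Keep an eye on how much WAL each slot is retaining on the primary.
psql -c "SELECT slot_name, active,
                pg_size_pretty(pg_wal_lsn_diff(pg_current_wal_lsn(), restart_lsn)) AS retained_wal
         FROM pg_replication_slots;"
```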

The restore command can also be affected by the same issues as WAL archiving, so the standby can essentially be delayed. Another source of delay and bottlenecks in Postgres is the replay process on the standby: the standby is in continuous recovery, reading WAL information either from the WAL archive, through the restore command, or via streaming replication through the WAL receiver, and reapplying it, modifying itself based on what comes from the primary. This process is limited to one CPU. So, under high workloads, while the primary works in parallel, the standby needs to apply the changes in a serialized way, which can slow it down.
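
A quick way to see whether replay on a standby is the bottleneck is the pg_stat_replication view on the primary; a minimal sketch:

```bash
# On the primary: per-standby write/flush/replay lag (NULL when fully caught up).
psql -x -c "SELECT application_name, state,
                   write_lag, flush_lag, replay_lag
            FROM pg_stat_replication;"
```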

Another recommendation is to have dedicated resources for Postgres, similar to what we used to do in traditional environments. In Kubernetes, if you can, dedicate a node to one Postgres instance only. There are great configuration techniques in Kubernetes for this, such as pod affinity/anti-affinity, node selectors, tolerations, and so on. Another piece of advice is to use the Guaranteed quality of service class for Postgres, which means setting the pod’s resource requests and limits to the same values, so we know the pod will always get the declared resources. If you can, use local storage on the dedicated node, and benchmark its throughput. If you are using public clouds, always watch out for IOPS limitations. In the end, it’s your choice; it depends on your budget and the cost-benefit analysis you’re doing. Benchmarking is the activity that gives you the information you need to make decisions on this topic.
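
As a hedged illustration only (a generic pod spec rather than the operator's own resources, with no storage shown), Guaranteed QoS and node dedication look roughly like this; the node label, taint, image, and sizes are placeholders:

```bash
# Hypothetical example: requests == limits gives the pod the Guaranteed QoS class;
# nodeSelector/tolerations pin it to a node dedicated to Postgres.
kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: postgres-bench
spec:
  nodeSelector:
    workload: postgres          # placeholder label on the dedicated node
  tolerations:
  - key: "postgres"             # placeholder taint keeping other pods away
    operator: "Exists"
    effect: "NoSchedule"
  containers:
  - name: postgres
    image: postgres:14          # placeholder image
    env:
    - name: POSTGRES_PASSWORD   # required by the postgres image; placeholder value
      value: "changeme"
    resources:
      requests:
        cpu: "7"
        memory: 56Gi
      limits:
        cpu: "7"
        memory: 56Gi
EOF
```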

Let’s now look at how we’re benchmarking Postgres on Kubernetes. What tools did we use? What methodology? Our approach is to observe and enter this field with no preconceptions; let yourself be guided by objective numbers and graphs. We developed two sets of Helm charts: one is called cnp-sandbox, and the other cnp-bench. They’re both open source under the Apache license. cnp-bench was presented in the last session, and cnp-sandbox is essentially an evaluation platform for Prometheus, Grafana, and our operator, Cloud Native PostgreSQL. Simply put, you can test the observability capabilities of Postgres using Prometheus and Grafana. We also rely on pg_stat_statements, an extension available for PostgreSQL that normalizes queries and gives real-time metrics about how queries are performing in your database. However, you can use your own Postgres setup, your favorite operator, or a container-based installation of Postgres; you don’t need cnp-sandbox, and you can use your existing Prometheus, Grafana, or another tool.
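
Since pg_stat_statements is a stock contrib extension, a minimal usage sketch looks like this (the preload change requires a restart, and the column names are those of PostgreSQL 13 and later):

```bash
# pg_stat_statements must be preloaded, then created in the database.
psql -c "ALTER SYSTEM SET shared_preload_libraries = 'pg_stat_statements';"
# ...restart PostgreSQL, then:
psql -c "CREATE EXTENSION IF NOT EXISTS pg_stat_statements;"

# Top 5 normalized queries by total execution time.
psql -c "SELECT calls, round(total_exec_time::numeric, 2) AS total_ms, query
         FROM pg_stat_statements
         ORDER BY total_exec_time DESC LIMIT 5;"
```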

cnp-sandbox essentially deploys a kind of playground where you can try Prometheus, Grafana, and our Cloud Native PostgreSQL operator. It also ships a selection of Postgres metrics built for the Prometheus exporter that is natively available in Cloud Native PostgreSQL, and a custom Grafana dashboard that we have developed, also thanks to experience with pgAdmin. The main goal of the sandbox is to play with Postgres, Prometheus, and Grafana. Use these when you are benchmarking, because having these diagrams and graphs helps you understand what happened, for example, over the last two or three days, and helps you make decisions. My advice is not to run this in production but to use it for pre-production or evaluation; in production, you should use your own Prometheus and Grafana installation, perhaps with the metrics and dashboards distributed with cnp-sandbox. If you run it against an existing Kubernetes cluster, you’ll end up with a Prometheus and Grafana environment together with the Cloud Native PostgreSQL operator.
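
An install might look roughly like the following, but note that the chart repository URL, chart name, and service name here are assumptions based only on the project names mentioned in the talk; check the cnp-sandbox documentation for the actual values:

```bash
# Hypothetical install of the cnp-sandbox chart into its own namespace.
helm repo add cnp-sandbox https://enterprisedb.github.io/cnp-sandbox/   # assumed URL
helm repo update
helm install cnp-sandbox cnp-sandbox/cnp-sandbox \
  --namespace cnp-sandbox --create-namespace

# Reach Grafana locally (service name is an assumption; Grafana listens on 3000).
kubectl -n cnp-sandbox port-forward svc/cnp-sandbox-grafana 3000:3000
```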

We have improved cnp-bench since the last webinar: not only can it benchmark storage with fio and databases with pgbench, but we have also introduced HammerDB. cnp-bench can also run against an existing Postgres database. If you have an existing database installed, for example with another operator, you can use cnp-bench to benchmark it; you don’t need cnp-sandbox or the cnp operator. The presented script for pgbench initialization creates a pgbench database with a scale factor of 8,000, which is approximately 120-130 GB.

Another example is a pgbench run that lasts 600 seconds (10 minutes), using 64 clients and 128 jobs, and doesn’t re-initialize the database. You can initialize once and run pgbench several times with different parameters; for example, here is the resulting diagram with an increasing number of clients benchmarking the database. We used three dedicated nodes on AKS for five repetitions of a 10-minute pgbench test with a scale factor of 8,000. They have eight cores and 64 GB of RAM, but I set up the Guaranteed quality of service with seven cores and 56 GB of RAM to leave some room for the Prometheus exporter and the management of the node. We also used a Premium P80 storage class. There’s a synchronous standby, and in this case we were able to reach about 2,500 transactions per second by running pgbench on a separate node. There’s also continuous backup using Azure Blob containers.
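
Outside of the Helm chart, the equivalent plain pgbench invocations look roughly like this; the connection settings are placeholders, and since pgbench requires the number of clients to be a multiple of the number of threads, the exact client/job combination quoted in the talk should be adapted accordingly:

```bash
# Connection settings are placeholders; adjust to your environment.
export PGHOST=postgres-rw.example.svc PGUSER=postgres

# One-off initialization: scale factor 8,000 is roughly a 120-130 GB database.
createdb pgbench
pgbench -i -s 8000 pgbench

# Repeated runs: 600 seconds (10 minutes), many clients, no re-initialization.
pgbench -T 600 -c 64 -j 64 pgbench
```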

What I want to focus on now is the diagram that shows the WAL archiving process. As you can see, this is something we don’t want to see in production: linear growth in the accumulation of WAL files, because the archiving process is serialized and Postgres cannot cope with the workload; the archiving is probably too slow. In this case, we ended up with 36,000 WAL files piled up before we stopped the workload; as you can see, we reached the peak and then it slows down. If you’re in production and you’ve got 36,000 WAL files that have piled up and your primary dies, those WAL files have never been archived, so you’ve got a hole in the recoverability of your cluster and a missing piece of your database’s history in the WAL archive. Maybe you’ve got a standby that comes up and you can resume there, but you cannot recover to any point in that time frame. You can remediate this, for example, by parallelizing the archiving process; in that case, you will still have moments under high workloads where WAL files accumulate, but they eventually come back down. There could be bulk-load operations, and you can tune the size of the WAL segment; in this case, I was experimenting with 64 MB instead of the default 16 MB. Don’t get scared when you see these spikes, because they might be bulk-load operations or vacuums; it’s best to learn how to read these kinds of graphs.
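
Outside of Grafana, the same archiving backlog can be spotted from SQL; a minimal sketch using stock views and functions:

```bash
# Archiver health: last archived vs. last failed WAL, and failure count.
psql -c "SELECT archived_count, last_archived_wal, last_archived_time,
                failed_count, last_failed_wal, last_failed_time
         FROM pg_stat_archiver;"

# How many files are sitting in pg_wal right now; a steadily growing number
# suggests archiving cannot keep up with the workload.
psql -c "SELECT count(*) AS wal_files FROM pg_ls_waldir();"
```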

 

Gabriele Bartolini  48:49

We want to continue working on cnp-bench. We’re planning to improve it to run multiple tests with an increasing number of client connections, and to automatically manage the interpretation of the results; otherwise, you have to write your own bash scripts to collect the data and produce the kind of graphs I showed before. We want to support custom pgbench scripts, so that you can use not only the TPC-B-like benchmark that pgbench provides but also your own custom queries on your own databases, not just the pgbench one. We also want to improve support for HammerDB and introduce further tools for benchmarking the database. We want to introduce application-level benchmarking, which we have already done in the past at 2ndQuadrant; the idea is to introduce web application load simulation with hey. This also allows us to scale the front end horizontally, simulating multiple clients and nodes targeting the database.
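
For context, hey is a small HTTP load generator; a hypothetical invocation against a web front end whose requests ultimately hit the Postgres cluster could look like this (URL, duration, and concurrency are placeholders):

```bash
# Hypothetical example: 5 minutes of load with 50 concurrent workers
# against an application endpoint backed by the database.
hey -z 5m -c 50 https://app.example.com/api/orders
```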

I talked several times about Cloud Native PostgreSQL, the operator we have developed at EDB. It is currently closed source but available for trials. We are planning to open-source it in 2022, so I’m happy to declare EDB’s intention to do this. It’s fully declarative. We took a different direction compared to other operators: we don’t use StatefulSets, for example, and we manage persistent volume claims ourselves. It doesn’t use an external tool for failover; we rely on the Kubernetes API server as the source of truth for that. Further, to prove that we believe in database workloads in Kubernetes, it’s the component that manages Postgres in the data layer of BigAnimal, the database-as-a-service delivered by EDB, if you want to run Postgres in the cloud as a service.

In conclusion, why is benchmarking PostgreSQL important? Let’s not forget that data is the most important asset an organization has. The message we are trying to push in this community, and it must be strong, is that data can live in Kubernetes. It’s our duty to showcase this to the world and lead this kind of movement. That’s why we at EDB want to be there with Postgres: we are leaders for Postgres in traditional environments, and we want to be leaders in Kubernetes too; that’s our mindset. Don’t leave anything to chance. This is important: before you go to production, benchmark your storage and know its good and bad aspects. You need to know its limits, when you can rely on it, and when you need to move to a different kind of storage. The same goes for the database: you can plan for an eight-core machine now and know the point at which it’s better to move to a 16-core machine with more RAM, and so on. Remember, you might not be able to benchmark once you’re in production; I’ve seen this happen for many years, and by then it’s too late. You usually cannot benchmark or stress test a production environment; in some cases you probably can, but you may put other services at risk because you’re sharing the same environment, infrastructure, or network. Especially where you’re using shared storage, you might create disruptions for the other tenants. Dedicated storage and nodes: I won’t stop saying how important it is for a database to have its own dedicated storage and its own dedicated nodes. That doesn’t mean you can’t do otherwise; you can use shared resources, nodes, and storage, as long as you’ve benchmarked, you know your system, and you know what to expect.

For a premium database service, it is important to have dedicated resources. We always promote a shared-nothing architecture because it gives us predictability and performance. Again, the first step is to benchmark the single node, and then benchmark the high-availability cluster with continuous backup and replicas. Understand WAL archiving, streaming, restore, replay, and so on, and how all these processes come into play. Observe the system while it’s being stressed. Possibly, generate chaos in the system by triggering failovers and switchovers. Always think about your RPO and RTO posture and goals when you do this; they have to be the reason why you are doing all of it.

Finally, I owe everything to Postgres. I owe everything to open source. Study Postgres, especially if you’re new or you’re young. You might be tempted to use other databases because Postgres has been around for so long, and cool features that were developed 5-10 years ago are simply not promoted. But spend more time on Postgres; I’m sure you’ll find features you didn’t even know it already had.

 

Bart Farrell  56:38

Very well rounded, and lots of depth in the explanations! Now, we are getting to the end of 2021, and, interestingly, you mentioned that Postgres has a very long lifespan. Perhaps, like you said, some younger folks out there say, “I want the new shiny database that has all these bells and whistles.” But do you expect to see a lot more adoption of data on Kubernetes in 2022? What role will Postgres play in that? And, focusing on that more specifically, do you think we can expect to see more use of operators and further developments in that area?

 

Gabriele Bartolini  57:20

I definitely expect more adoption. I don’t know about Postgres specifically, but I’m hoping there’s more adoption of Kubernetes and cloud native. The good thing about having the database inside Kubernetes is that we can extend our pipelines. One of the major goals of Kubernetes is to create an environment, an organization, that is more agile and has a higher pace of development and higher velocity. We want to deploy in a faster, safe, secure, and reliable way, and the best way to do that is through automated testing. I’ve seen many Kubernetes environments running only stateless workloads, with the database sitting outside. Having the database inside Kubernetes enables users to make the database part of the tests and to have a predictable environment from the developer’s workstation to the staging, pre-production, and production environments. You have full control of the whole infrastructure, because we are abstracting the infrastructure with Kubernetes. With immutable containers, we can also reduce the variability across all these environments. It’s a great opportunity we have; we just need to keep hammering on the importance of having stateful workloads in Kubernetes. Our role is to deliver the message that it is possible. It may not be as mature as bare metal and virtual machines yet; we cannot expect the same level of maturity, but I see the gap being closed every day. I’m optimistic about this. On the other hand, I don’t know how many people will use Postgres or not.

 

Bart Farrell  59:50

Quite a few. There is a very strong community behind it; you know that very well, having been involved in community work. I do think there’s genuine interest. We had another live stream this week about it, and we have at least another one planned in January. It’s a topic that attracts a lot of attention. We got a question from the audience: can you talk more about your choice not to use external tools for high availability, like Patroni, and instead use the Kubernetes API? And how easy is it, or not, to prevent split-brain?

 

Gabriele Bartolini  1:00:28

The reason is that Kubernetes already has a consensus store, etcd. The information is already there, and we decided to rely on that source of truth. Every instance in our implementation has an instance manager, which is PID 1 of every pod that runs Postgres, and this instance manager is continuously talking with the API server in reconciliation loops. It was an idea that brilliant members of my team had at the start, and we saw that it was possible; I think that’s why we took that approach. I have to say that it requires deep knowledge of the internals of Postgres, but that’s what we’ve been doing for 20 years. My team members have been writing the internals of Postgres, from replication to continuous backup and recovery; that’s what we breathe every day. We decided to model the information based on how we know Postgres works, on top of the Kubernetes engine.

 

Gabriele Bartolini  1:02:31

On split-brain, which I forgot to mention: the instance managers, having one source of truth, enable us to have fast control of the shutdown, demotion, and promotion procedures of each instance in the cluster.

 

Bart Farrell  1:02:55

That being said, this is the last live stream of 2021, which sounds crazy for me to say. We’ve done a lot of live streams; we’ve had about 65. As usual, though, as is tradition, we have our wonderful artists in the background, creating artistic depictions of all the things we’re talking about.

 

Bart Farrell  1:04:12

Thanks, everybody, for attending. Thanks for the questions. We’ll keep it going in Slack. Take care, everybody.