Inside PostgreSQL: An Open Source Project’s Growth, Governance, and Future. Q&A with Joe Conway and Robert Treat
“One of the key differences between PostgreSQL and most other Open Source databases is that PostgreSQL is an Open Source project, not an Open Source product.”
Q1. What are some of the most anticipated features or improvements in PostgreSQL v18 that users should look forward to?
A1. Well, every database and every user has their own set of needs and wants, but I usually start with features that are likely to benefit a wide range of use cases and that aren’t very difficult to implement. For example, we’re introducing support for “skip scan” functionality for multicolumn B-tree indexes, which significantly improves query performance by allowing efficient lookups even when conditions on prefix columns are omitted. Another major improvement is the preservation of statistics during major version upgrades, which helps maintain query performance immediately after upgrading, without needing to wait for an ANALYZE to run. On a smaller scale, PostgreSQL v18 will include a built-in uuidv7() function. While seemingly minor, it is a significant quality-of-life improvement for generating timestamp-ordered UUIDs, which offer better index locality and performance. While folks may have installed extensions or rolled their own implementations in the past, having it built in will make a lot of people’s lives easier.
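To make those features concrete, here is a minimal sketch using the psycopg Python driver against a PostgreSQL 18 server; the table, index, and connection string are illustrative assumptions, not anything from the interview or the release notes.

```python
# Minimal sketch: skip scan and uuidv7() on PostgreSQL 18.
# Assumes the psycopg driver and an illustrative "demo" database.
import psycopg

with psycopg.connect("dbname=demo") as conn, conn.cursor() as cur:
    # A multicolumn B-tree index on (tenant_id, created_at). Before v18,
    # a query that omits tenant_id could not use this index efficiently;
    # with skip scan, it often can.
    cur.execute("""
        CREATE TABLE IF NOT EXISTS orders (
            tenant_id  int,
            created_at timestamptz,
            total      numeric
        )""")
    cur.execute("""
        CREATE INDEX IF NOT EXISTS orders_tenant_time_idx
        ON orders (tenant_id, created_at)""")
    cur.execute("""
        EXPLAIN SELECT * FROM orders
        WHERE created_at >= now() - interval '1 day'""")
    for (line,) in cur.fetchall():
        print(line)  # on v18 the plan may show the index used via a skip scan

    # uuidv7(): built-in, timestamp-ordered UUID generation.
    cur.execute("SELECT uuidv7()")
    print(cur.fetchone()[0])
```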
Q2. Given your extensive experience, how has the PostgreSQL community evolved over the past two decades? What are some key milestones?
A2. Well, 20 years ago puts you right in the window (no pun intended) for one of our most significant releases, PostgreSQL 8.0. The headline feature for that release was the addition of Windows support, but we also saw the addition of Point-In-Time Recovery, spearheaded by the late Simon Riggs. I think this really opened a lot of eyes in the community as to just how big PostgreSQL could become. From there, we have seen long but steady growth in all aspects, whether it be users, contributors, or corporate sponsorships.
I think back to key events like the launch of Heroku Postgres, which really widened our usage among developer communities, as well as the launch of Amazon RDS for PostgreSQL, which really led the charge into the enterprise. From there, all the major hyperscalers have now adopted PostgreSQL in one way or another, and numerous smaller companies have also brought PostgreSQL-based services online. For the community, this has required a lot of work around managing the evolving nature of the people contributing code, as well as a much larger community of users with a much wider variety of challenges and deployment scenarios. We’ve had to update processes and write some of the “unwritten rules” down, but I think we’ve tried to stay true to the idea of being open to new people and new ideas, and we still try to reach out to people who are unfamiliar with PostgreSQL and the unique ways we operate as one of the largest Open Source communities in the world.
Q3. Can you describe the development process behind PostgreSQL? How do contributions from volunteers, companies, and working groups shape the project?
A3. PostgreSQL’s process is often referred to as “old school”, but that makes sense for a project that predates Git itself, let alone GitHub or GitLab. In our case, anyone who is so inclined can propose a change to the system via our developers mailing list. In most cases, this will include a patch for others to review and critique. Changes are typically reviewed by a few folks who provide feedback on both the code and the implementation ideas behind it, and ultimately that work gets reviewed by one of the PostgreSQL “committers”, who are the final reviewers and ultimate deciders for adding changes into the source repository.
Every couple of months there is a process called a “commitfest”, where everyone tries to focus on making sure outstanding patches are reviewed and (hopefully) committed, and once a year that work gets turned into a major release. Given the nature of database software and the high standards necessary for the project, some changes can take months, and sometimes even years, to make it into the system, depending on their complexity and how modular the changes can be made. What is interesting to see is how decentralized the process is: there is no central authority telling people what they should be working on or what types of patches the project is looking for; you show up with an idea, and if others agree, the idea moves forward. This leads to all kinds of interaction between developers working on different types of problems, across corporate lines, all with the intention of making PostgreSQL the best possible software it can be.
Q4. What role do non-profit organizations and working groups play in supporting and advancing PostgreSQL?
A4. One of the key differences between PostgreSQL and most other Open Source databases is that PostgreSQL is an Open Source project, not an Open Source product. There is no corporate owner, and no corporate marketing department trying to sell anything. People may be paid by someone for the time they spend, but from the community’s perspective, they are all essentially volunteers. This usually leads to different groups of folks coalescing around a particular project need or community-oriented goal. For example, we have working groups for things like security issues and for managing the PostgreSQL infrastructure. Similarly, we have a few different nonprofits that focus on different areas, like buying servers for the infrastructure group or working on advocacy issues like managing PostgreSQL events in different parts of the world.
Q5. With PostgreSQL’s growing popularity, how does the community ensure the quality and stability of new releases like v18?
A5. As database software, PostgreSQL’s needs for stability and reliability are extremely high, and over the decades, PostgreSQL development culture has evolved to reinforce that. Following the Open Source ideal that “given enough eyeballs, all bugs are shallow”, our open development process allows anyone to review, study, and report issues in the existing code, and all new code is reviewed by multiple people before it is committed.
We also limit new feature development to major releases only, which gives those on existing releases extra confidence to upgrade when bug fixes come out. We also believe in the idea of “test early, test often”. Every developer runs their patch through our regression test suite, which is then re-run by reviewers and committers, and once a change is committed it goes into our “buildfarm”, where it is run against dozens of machines with different operating systems and build options. Beyond that, there are also users and companies who regularly run their own sets of custom tests and do static analysis on the code, and at this point PostgreSQL is even used in academic and research departments for things like performance benchmarking of other products (RAM and CPU performance, for example), so the overall effect is a fairly comprehensive set of coverage.
Q6. How does the collaboration between PostgreSQL and cloud providers like AWS benefit both the community and enterprise users?
A6. While the PostgreSQL community provides the core database engine, that’s only one piece of the puzzle, especially for enterprise users. Beyond the database, you still need to manage the underlying hardware the database is deployed on, handle backup and restore services, integrate with applications, secure the network, and address several other operational challenges.
Whether you are deploying a customized version of PostgreSQL like Amazon Aurora PostgreSQL-Compatible Edition, a more traditional deployment like Amazon RDS for PostgreSQL, or a DIY PostgreSQL on Amazon EC2, AWS provides a comprehensive set of services and support options to make it all happen. In return, that opens the door to a much larger and far more diverse ecosystem, which ultimately drives more users and applications to PostgreSQL. This in turn creates a virtuous cycle for the community, as teams like ours are able to work with these users, bringing dedicated resources to the continued development and enhancement of PostgreSQL.
Q7. In what ways is AWS contributing to the PostgreSQL ecosystem, and how do these efforts align with the project’s open-source principles?
A7. The most obvious way AWS contributes to the PostgreSQL ecosystem is through code contributions, starting with a dedicated contributor team that focuses on community development and operational support. This doesn’t just include the core code base; we also help shepherd critical projects like the JDBC and ODBC drivers, support community projects like pg_hint_plan (an extension that implements query hints in PostgreSQL), and run AWS-led projects like the Open Source pgActive, a multi-writer logical replication system. We also have folks in support and operational roles around PostgreSQL who provide bug reports and corresponding fixes based on customer issues we see within the services we run, as well as internal testing we do for new releases. Additionally, we take a long-term view of community health, providing support in more traditional ways like sponsoring PostgreSQL events, allowing users and developers to meet in real life, which helps strengthen and grow the community.
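For readers who haven’t seen pg_hint_plan, hints ride along in a comment at the head of the query. The minimal sketch below reuses the illustrative orders table from earlier and assumes the extension is installed on the server.

```python
# Minimal sketch of pg_hint_plan's comment-based hints.
# Assumes the extension is installed; "orders" is the illustrative table above.
import psycopg

with psycopg.connect("dbname=demo") as conn, conn.cursor() as cur:
    cur.execute("LOAD 'pg_hint_plan'")  # per-session load; often preloaded instead
    # The leading /*+ ... */ comment asks the planner to use a sequential
    # scan on "orders", overriding whatever it would otherwise pick.
    cur.execute("""
        /*+ SeqScan(orders) */
        EXPLAIN SELECT * FROM orders WHERE tenant_id = 42
    """)
    for (line,) in cur.fetchall():
        print(line)
```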
Q8. What are some challenges faced by the PostgreSQL community as it scales globally, and how are they being addressed, especially in the AI revolution?
A8. AI has become an interesting topic for a lot of Open Source projects, not just PostgreSQL. Like others, we are witnessing an emerging trend of AI-assisted patches being submitted, which presents both opportunities and challenges. We are cautiously optimistic that this might help broaden project participation, particularly for a C-based project like ours: while C remains the dominant language for critical infrastructure development, it’s not the most common language for new industry entrants. As a project with a 30-year history and a promising future ahead, we recognize the vital importance of attracting and nurturing new contributors.
On the other hand, these AI-produced patches are often lower quality and include hallucinations that make them unworkable, and as touched upon, we have a high standard not just for working code, but also for maintainable code. As one of the PostgreSQL Committers on our team recently put it, “Writing patches and responding to feedback are valuable skills to learn and grow as a contributor, and I’d appreciate some level of assurance that I’m investing in a human contributor when I review a patch instead of just talking to an LLM by proxy”.
That said, one of the interesting wrinkles on this topic for PostgreSQL is that we are also seeing significant adoption of PostgreSQL as an AI-capable data store, through the use of pgvector, an extension that adds vector data types and indexing methods for AI-oriented workloads, all in a nice PostgreSQL-based, ACID-compliant, SQL-supporting package. So, our project is benefiting in multiple ways from the “AI revolution”; we just need to make sure we get that balance right.
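As a rough illustration of that workflow, here is a minimal sketch of pgvector’s core usage, again via psycopg; the toy 3-dimensional vectors stand in for real model embeddings, which typically have hundreds or thousands of dimensions.

```python
# Minimal sketch of pgvector: a vector column, an HNSW index for
# approximate nearest-neighbor search, and a distance-ordered query.
# Assumes the pgvector extension is installed on the server.
import psycopg

with psycopg.connect("dbname=demo") as conn, conn.cursor() as cur:
    cur.execute("CREATE EXTENSION IF NOT EXISTS vector")
    cur.execute("""
        CREATE TABLE IF NOT EXISTS items (
            id        bigserial PRIMARY KEY,
            embedding vector(3)
        )""")
    cur.execute("""
        CREATE INDEX IF NOT EXISTS items_embedding_idx
        ON items USING hnsw (embedding vector_l2_ops)""")
    cur.execute("INSERT INTO items (embedding) VALUES (%s::vector), (%s::vector)",
                ("[1,1,1]", "[4,5,6]"))
    # <-> is pgvector's L2-distance operator: nearest neighbors sort first.
    cur.execute("""
        SELECT id, embedding FROM items
        ORDER BY embedding <-> '[1,1,2]'
        LIMIT 1""")
    print(cur.fetchone())
```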
Q9. Looking ahead, what are your hopes or predictions for the future of PostgreSQL in the next 5 to 10 years?
A9. At the moment, it is hard to see momentum for PostgreSQL slowing down. One of the big challenges of the next 5 years will certainly be how we navigate the evolving AI landscape, but we have generally been able to navigate these types of changes before, whether it was silly ideas like “XML is going to replace the relational database” or more serious shifts like the rise of cloud platforms and the industry’s move away from traditional hardware. There were many in the database world who were concerned about that change, but for PostgreSQL it has really been a significant win.
In 10 years’ time, I think more about the project itself, as we start to see some of the earliest adopters, many of whom are in project leadership positions, slowly roll into their retirement years. The caliber of some of the younger folks joining the community now is really impressive, so I’m pretty optimistic, but succession planning is always a delicate exercise. I’m sure there will also be challenges we aren’t looking at heavily today, and predicting the future continues to be difficult, but if there is one bet I have made that has paid off, it was betting on PostgreSQL.
…………………………………………

Joe Conway has been involved with the PostgreSQL community for more than 25 years, presently as a PostgreSQL Committer, Major Contributor, and Infrastructure Team member. He currently leads the PostgreSQL Contributors Team at Amazon Web Services.

Robert Treat is a long-time open-source author and advocate who has contributed to numerous projects, events, and industry groups. Best known for his work with PostgreSQL, where he was recognized as a Major Contributor, he recently joined Amazon Web Services as a Principal Database Engineer on the PostgreSQL Contributors Team.
Sponsored by AWS