On the Cassandra 4.0 beta release. Q&A with Ekaterina Dimitrova, Apache Cassandra Contributor

Q1. You just announced Apache Cassandra 4.0 Beta. What is special about this release?

Apache Cassandra 4.0 brings faster scaling operations, new auditing capabilities, and, last but not least, a promise of stability from the community. It also embraces privacy by design. The security enhancements include a new audit logging feature, a new full query logging (FQL) tool that allows live traffic to be captured and replayed, new controls to enable use cases that need data access on a per-data-center basis, and Virtual Tables that let you selectively expose system metrics or configuration settings.

These capabilities also support regulatory compliance with laws such as SOX, PCI, and GDPR. That kind of compliance is crucial for companies that are traded on public stock exchanges, hold payment information such as credit card numbers, or retain private user information.
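
To make the Virtual Tables feature a bit more concrete, here is a minimal sketch that reads a node's configuration through the new system_views.settings virtual table using the Java driver. The contact point and datacenter name are placeholders, and the exact set of rows returned depends on the node's configuration.

```java
import com.datastax.oss.driver.api.core.CqlSession;
import com.datastax.oss.driver.api.core.cql.Row;
import java.net.InetSocketAddress;

public class VirtualTablesExample {
    public static void main(String[] args) {
        // Connect to a local Cassandra 4.0 node; address and datacenter are placeholders.
        try (CqlSession session = CqlSession.builder()
                .addContactPoint(new InetSocketAddress("127.0.0.1", 9042))
                .withLocalDatacenter("datacenter1")
                .build()) {
            // Virtual tables live in the node-local system_views keyspace;
            // the query below lists the settings of the node that serves it.
            for (Row row : session.execute("SELECT name, value FROM system_views.settings")) {
                System.out.printf("%s = %s%n", row.getString("name"), row.getString("value"));
            }
        }
    }
}
```

Because virtual tables are not replicated, each node answers with its own view, which is what makes them handy for inspecting metrics and configuration without shell or JMX access.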

The 4.0 release also works with Java 11, which gives access to Java’s new Z Garbage Collector (ZGC). Support for JDK 11 in Apache Cassandra 4.0 is an experimental feature and is not recommended for production use.

The development of Apache Cassandra 4.0 proceeded with the stated goal of delivering the most stable major release to date, in order to achieve a high level of adoption. Some of you may have noticed the code freeze that was initiated a year ago. Its purpose is to ensure the codebase is thoroughly tested and stabilized, which is not possible if the community keeps adding new features. The Apache Cassandra 4.0 Beta comes with more than 1,000 bug fixes. Of course, there are always very specific use cases. That is why the Apache Cassandra community encourages users to share them now and test the release during Beta, so the community can work on any potential issues before production environments are migrated after GA.

Q2. What are the main reasons to start using Apache Cassandra 4.0 Beta now?

Apache Cassandra 4.0 Beta is not a production-ready version, but the community is encouraging people to start using it in their test and QA environments now. There are two reasons for this recommendation. First, you will want to evaluate all the improvements and new features in this release. Second, you can test it and provide the community with timely feedback from your use cases, so it can address them before GA. The current code freeze guarantees there will be no new features or breaking API changes in future builds. This means that any time you put into Beta testing carries over directly when you transition your production workloads after 4.0 GA.

Q3. Is it really safe to use it since it is still in Beta? 

A Beta version is a pre-release of software that is given out to a large group of users to try under real conditions. Beta versions have gone through in-house testing and are generally fairly close in look, feel, and function to the final product. The Beta is safe enough to be used in a test environment for early testing of potential migrations to Apache Cassandra 4.0. With that being said, it is safe to use in test environments but not in production at this moment.

Q4. You mention in your release note “Redefining the elasticity you should expect from your distributed systems with Zero Copy Streaming.” What do you mean by this?

Streaming is the process by which nodes of a cluster exchange data in the form of SSTables. Cassandra streams data between nodes during scaling operations such as adding a new node or a new datacenter, often at peak traffic times. Apache Cassandra 4.0 comes with new Zero Copy Streaming functionality. According to its authors, these critical operations are now up to 5x faster (without vnodes) compared to previous versions, which means a more elastic architecture, particularly in cloud and Kubernetes environments. As a community member recently explained, when it comes to Mean Time to Recovery (MTTR), a KPI used to measure how quickly a system recovers from a failure, Zero Copy Streaming has a very direct impact, with a fivefold improvement in performance.

Q5. You also wrote that “Globally distributed systems have unique consistency caveats”. What are those caveats, and how does Apache Cassandra 4.0 Beta help solve them?

Eventual consistency is a consistency model used in distributed computing to achieve high availability. It informally guarantees that, if no new updates are made to a given data item, all accesses to that item will eventually return the last updated value.

Cassandra keeps the data replicas in sync through a process called repair. The community is working to harden and optimize incremental repair for a faster, less resource-intensive operation to maintain consistency across data replicas. This is important and not an easy task.
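
To make the caveat concrete: while repair converges replicas in the background, applications choose a consistency level per request, trading latency and availability against stronger guarantees. The Java driver sketch below is purely illustrative; the keyspace, table, and values are hypothetical.

```java
import com.datastax.oss.driver.api.core.ConsistencyLevel;
import com.datastax.oss.driver.api.core.CqlSession;
import com.datastax.oss.driver.api.core.cql.SimpleStatement;

public class ConsistencyExample {
    public static void main(String[] args) {
        // With no explicit contact points the driver targets a node on 127.0.0.1:9042.
        try (CqlSession session = CqlSession.builder().build()) {
            // LOCAL_QUORUM waits for a majority of replicas in the local datacenter only,
            // avoiding cross-datacenter round trips while tolerating a replica that lags
            // behind until repair (or read repair) catches it up.
            SimpleStatement read = SimpleStatement.builder(
                            "SELECT balance FROM bank.accounts WHERE id = ?")
                    .addPositionalValue("acct-42")
                    .setConsistencyLevel(ConsistencyLevel.LOCAL_QUORUM)
                    .build();
            session.execute(read);
        }
    }
}
```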

Q6. One of the features of Java 11 is the new Z Garbage Collector (ZGC). This feature is still experimental; why did you include it in the Cassandra 4.0 release?

Support for JDK 11 in Apache Cassandra 4.0 is an experimental feature and is not recommended for production use. Java 11 ships ZGC as an experimental garbage collector implementation.

ZGC aims to reduce GC pause times to a maximum of a few milliseconds, with no latency degradation as heap sizes increase. These improvements can significantly improve the availability profile of nodes in a cluster by shortening garbage-collection pauses. The Last Pickle’s published Apache Cassandra 4.0 benchmarks show that Cassandra 4.0 brings strong performance improvements on its own, which are amplified further by the availability of new garbage collectors such as ZGC.
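
For anyone who wants to experiment, ZGC is switched on in JDK 11 with the JVM flags -XX:+UnlockExperimentalVMOptions -XX:+UseZGC, typically added to the Java 11 JVM options file that ships with Cassandra 4.0. The small sketch below is not part of Cassandra; it is simply a way to confirm which collector a JVM is actually running before drawing conclusions from benchmarks.

```java
import java.lang.management.GarbageCollectorMXBean;
import java.lang.management.ManagementFactory;

public class GcReport {
    public static void main(String[] args) {
        // Lists the garbage collectors active in the running JVM.
        // Started on JDK 11 with -XX:+UnlockExperimentalVMOptions -XX:+UseZGC,
        // the output should name ZGC rather than the default G1 collectors.
        for (GarbageCollectorMXBean gc : ManagementFactory.getGarbageCollectorMXBeans()) {
            System.out.printf("%s (collections=%d, time=%d ms)%n",
                    gc.getName(), gc.getCollectionCount(), gc.getCollectionTime());
        }
    }
}
```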

Q7. A number of utilities have added support for Cassandra 4.0. What are the most significant ones in your opinion, and why?

The third-party ecosystem has its eyes on this release, and a number of utilities have already added support for Cassandra 4.0. These include the client driver libraries, Spring Boot and Spring Data, Quarkus, the DataStax Kafka Connector and Bulk Loader, The Last Pickle’s Cassandra Reaper tool for managing repairs, Medusa for handling backup and restore, and the Spark Cassandra Connector.

Five operators for Apache Cassandra have been created with the intention of making it easier to run containerized Cassandra on Kubernetes. Recently, the major contributors to these operators came together to confirm the creation of a single community-based operator that makes it easy to run Cassandra on K8s. One of the project’s organizational goals is that the end result will eventually become part of the Apache Software Foundation or the Apache Cassandra project.

An extensive list of third-party projects can be found at the official Cassandra website.

Q8. You wrote that “There will be no new features or breaking API changes in future Beta or GA builds.” Why?

Beta is the time when the community concentrates on stabilizing the release. It is a time for testing and bug fixing. It can also be considered early testing for potential upgrades of your production environment. With that said, it is crucial for users that what they test now is what they will get after GA: no new features and no breaking changes. That helps them prepare for a future migration of their production environments.

Qx. Anything else you wish to add?

With over 1,000 bug fixes, the project is focused on quality, with replay, fuzz, property-based, fault-injection, and performance tests. There is a lot of discussion and work going on at the moment. Harry, a fuzz testing tool for Apache Cassandra, was open sourced in September. The author’s goal is to generate reproducible workloads that are as close to real life as possible, while being able to efficiently verify the cluster state against the model without pausing the workload itself. The community strongly encourages Cassandra users to try 4.0 Beta in their test and QA environments and send back feedback from their use cases.


Ekaterina Dimitrova is a Software Engineer at DataStax and a Contributor to the Apache Cassandra project. Previously, she worked as a Researcher at the Advanced Data Management Technologies Laboratory, University of Pittsburgh. Her work with Professor Panos K. Chrysanthis and Associate Dean Adam J. Lee on “Authorization-aware optimization for multi-provider queries” was published in the Proceedings of the 34th ACM/SIGAPP Symposium on Applied Computing. Before coming to the US, she worked for seven years at HP, now DXC; starting as a Technical Support Engineer, she moved into leadership positions where she was accountable for the UNIX and IBM iSeries services for some of the company’s most strategic customers. She holds a Master’s degree in Computer Science from the University of Pittsburgh (focusing on Database Management Systems, Privacy and Confidentiality, and Data Science) and a Master’s degree in Distributed Systems and Mobile Technologies from Sofia University, Bulgaria.

Sponsored by DataStax
