On Software Containers. Q&A with Joe Leslie
Q1. Why do containers matter?
Containers allow developers to package small pieces of working product in a microservice that can start and restart (in the event of a failure) extremely quickly. They also offer excellent isolation from other application runtime components because everything that the software needs to run is inside the container, including the software code, runtime, system tools, system libraries, and settings. Containers provide several benefits to support the DevOps methodology and empower organizations to deliver software faster, meet customer demand, and stay ahead of their competition. Key benefits offered by containers include:
- Improved time to market: Increase delivery speed for new services through development and operational agility.
- Deployment speed: Help DevOps teams accelerate deployment speed and frequency.
- IT infrastructure savings: Reduce costs through better server compute density, lower software licensing costs, and increased application workload density.
- IT operations performance: Improve operational efficiency by switching to a single operating model.
- Improved application availability: Containers can easily be configured to run in a process-redundant deployment model using the Kubernetes container orchestration platform. Thus, in the event of a failure, the application continues to run while the failing container is rescheduled and a new container is started.
- Increased flexibility: Package, ship, and run applications on any public or private cloud.
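The process-redundant deployment model mentioned above can be sketched as a minimal Kubernetes Deployment manifest. The application name and image here are hypothetical placeholders, not from the original text:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: example-service          # hypothetical service name
spec:
  replicas: 3                    # three identical containers for redundancy
  selector:
    matchLabels:
      app: example-service
  template:
    metadata:
      labels:
        app: example-service
    spec:
      containers:
      - name: example-service
        image: example/service:1.0   # hypothetical image
        ports:
        - containerPort: 8080
```

If any one of the three replicas fails, Kubernetes automatically reschedules a replacement while the remaining replicas continue serving traffic.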
Q2. What is the difference between container-native and containerization?
Containerization enables you to bundle an application with all of the configuration files, libraries, and dependencies necessary to run across different computing environments. Containers are designed to run a specific function within a software application system, but the containerization approach alone doesn’t take advantage of that capability. You can containerize just about anything, but that doesn’t mean it’s been optimized to take advantage of all of the benefits that containers and the cloud offer.
Container-native software is built and architected to perform in a container environment — so you can scale components out and back independently based on demand. Container-native software is also architected for resilience. Containers that run isolated components can be configured to run in a redundant configuration to build fault-tolerance into the system. In this configuration, if a container fails for any reason, a new container is quickly scheduled to replace it within seconds. While the new container is restarting, existing containers of the same type will pick up the extra load automatically. Throughout this recovery process, overall application availability and performance are not impacted. The container-native approach takes full advantage of container runtime management and orchestration platforms such as Kubernetes.
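Scaling components out and back independently based on demand, as described above, is typically handled in Kubernetes by a HorizontalPodAutoscaler. This sketch assumes a hypothetical Deployment named example-service:

```yaml
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: example-service-hpa
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: example-service        # hypothetical Deployment to scale
  minReplicas: 2                 # keep at least two replicas for fault tolerance
  maxReplicas: 10                # scale out under load
  metrics:
  - type: Resource
    resource:
      name: cpu
      target:
        type: Utilization
        averageUtilization: 70   # add replicas when average CPU exceeds 70%
```

Because only this one component is targeted, it scales out and back on its own while the rest of the application is left untouched.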
Q3. Why doesn’t the “lift and shift” approach offer the flexibility, agility, and scalability today’s modern architectures require?
Legacy applications rely on a single software stack in which each piece is tightly coupled to the others. Taking those pieces apart is difficult, and it doesn’t make sense to take the whole application stack and stuff it into one container. To get the benefits of containers, you need to break that stack into its component parts and put each part into its own container.
Containerizing the whole stack as-is, rather than decomposing it, is called the “lift and shift” model. While any application can be containerized using this approach, it remains in its original tightly coupled software-stack design. This results in container “bloat,” meaning the containers take on many more tasks than they were designed for. As a result, components lose the ability to scale independently and to start and restart independently, which are some of the key benefits of containers.
In the end, the “lift and shift” model is just another virtualization strategy that doesn’t leverage the benefits containers offer. Every time an organization wants to scale the database, either to increase transactional throughput or to increase storage capacity, it must scale ALL the components, even those not needed for the task at hand. The lift and shift approach lacks the flexibility and scalability that today’s modern applications require.
Q4. What are the main features of a container-native, distributed database?
A container-native distributed database can scale transactional and storage management components easily and independently to meet specific application requirements. It also allows for deploying each different database component in its own container. Therefore, components can scale in and out based on demand. For example, you may want to scale up only transactional capability and not storage capacity. Or you may have lower transactional rates but require durability across many regions. A container-native database such as NuoDB allows scale-out of the storage components without requiring any change in the transactional capacity.
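The independent scaling described above can be illustrated with two Kubernetes workloads, one per database component, each with its own replica count. The names and images below are hypothetical illustrations, not NuoDB’s actual deployment artifacts:

```yaml
# Transactional layer: scaled to six replicas for throughput
apiVersion: apps/v1
kind: Deployment
metadata:
  name: db-transaction-engine    # hypothetical name
spec:
  replicas: 6
  selector:
    matchLabels:
      app: db-transaction-engine
  template:
    metadata:
      labels:
        app: db-transaction-engine
    spec:
      containers:
      - name: te
        image: example/db-te:1.0   # hypothetical image
---
# Storage layer: two replicas, scaled independently of the layer above
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: db-storage-manager       # hypothetical name
spec:
  replicas: 2
  serviceName: db-storage-manager
  selector:
    matchLabels:
      app: db-storage-manager
  template:
    metadata:
      labels:
        app: db-storage-manager
    spec:
      containers:
      - name: sm
        image: example/db-sm:1.0   # hypothetical image
```

Changing the transactional tier’s replica count requires no change to the storage tier, and vice versa, which is the independence the container-native design provides.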
Another benefit of scaling out is that it creates redundancy, which gives the database strong fault-tolerance. If there is a node or pod failure, or a hardware or network issue, that redundancy keeps the distributed database available. Because the database is container-native, new instances of the database components can be started as needed following any failure to ensure applications always remain available.
A container-native distributed database also reduces costs and improves performance by running natively in the same cloud computing environment where the applications run. Organizations realize cost savings because they use only as much database as they need; since the database scales in and out based on demand, there is no need to provision for the daily, weekly, or yearly maximum load. The distributed database also saves costs because multiple instances of the database provide always up-to-date redundancy across geographical regions, eliminating the need for expensive dedicated disaster recovery failover systems that require manual activation in the event of a failure.
About Joe Leslie: As Senior Product Manager, Joe Leslie drives the NuoDB product strategy and roadmap for deploying NuoDB in container-native computing environments such as Kubernetes. Joe works closely with the NuoDB Engineering and Marketing teams to ensure NuoDB’s leadership position in the distributed SQL database marketplace, where NuoDB focuses on delivering horizontal scale out and continuous availability for hybrid cloud applications. Joe has over 20 years of experience delivering database products and management tools in the transactional and analytical database marketplace.