Why OpenStack is like a Crowdfunded Viking Movie

Everybody ‘gets’ the concept of a Viking Movie. It’s got boats, men in hats with horns, lots of fighting and often a one-way trip to Valhalla. So imagine you decide to crowdfund one. Things start off really well, but then turn into a mess, because the people committing resources all want their own distinct versions, and many of the competing memes are hard to reconcile.

So not only do we end up with all the obvious elements of a Viking Movie, but after a few months of crowd involvement our movie also features:

  • A film noir subplot about a man who commits a murder on a Viking ship and is tasked with finding the killer.
  • Vikings fight zombies. With transformer Viking ships.
  • A Viking shepherd superhero.
  • An opening scene involving a traffic jam of Viking boats and a musical number (“Love Can’t Afjord to Wait”).
  • An Edward Snowden cameo in which he plays an Irish Monk who persuades a rampaging Viking horde to respect privacy.

In other words, it’s a giant, un-filmable, unwatchable mess. But not because of a lack of interest. It’s chaotic because we’ve tapped into a vast seam of unmet demand for Viking movies.

OpenStack is a bit like this right now. Fundamentally it’s got a lot of good ideas, but not all of them play well together. This is because the different ‘constituencies’ involved have different, potentially conflicting goals.

Earlier this month I spent a week at the OpenStack conference. It was – like the hypothetical movie I describe above – more than a little bit odd, as you could leave a session discussing ever more abstract layers of virtualization and walk into one where they emphasized the critical importance of pinning a network interface to a specific VM for optimal performance. But once you realize that there are, in fact, several OpenStack conferences happening at the same time and place, it makes more sense. Attendees could be broken down into several distinct groups.

“Hardware Optimizers” want to get the maximum utilization out of hardware

Telcos and Fortune 500 corporations used to spend years defining custom systems, down to the last cable. These systems were designed to have a lifetime of half a decade or more, and rapidly changing hardware meant that the initial deployment had to be sized for 5-7 years out. In addition, such custom systems could only be benchmarked once they were deployed, so by the time multiple layers of management had each added a 50% safety margin to the initial SWAG (a ‘scientific wild-ass guess’), it was not unusual to see them running at 10% of capacity (but 150% of the lucky hardware salesman’s annual quota).
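
To see how quickly those margins compound, here is a back-of-the-envelope sketch (the five management layers and the 50% margin per layer are hypothetical illustration values, not figures from any real procurement process):

```python
# Illustrative only: how stacked safety margins inflate a capacity estimate.
# The layer count and margin below are hypothetical.

initial_swag = 100   # capacity units the first estimator guessed were needed
layers = 5           # management layers that each pad the estimate "to be safe"
margin = 0.50        # 50% safety margin added at each layer

purchased = initial_swag * (1 + margin) ** layers
print(f"Capacity purchased: {purchased:.0f} units")                        # ~759
print(f"Utilization if the SWAG was right: {initial_swag/purchased:.1%}")  # ~13.2%
```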

Private Clouds made of commodity hardware are perceived as the logical solution to this problem. In an ideal scenario, a deployment that needs 25 boxes this week can move to 27 next week without any drama or excitement, and every box the organization owns will run at 50% utilization or higher.

But getting maximum utilization out of hardware also involves bare-metal installs and very careful allocation and management of physical resources.

“Manpower Optimizers” want help with managing development and test in a large corporate environment

In some places 70-80% of all sysadmin time is spent on the management of development environments. While the ultimate goal is still to save money, the cost being cut here is human FTE hours rather than hardware and software.

In this case, what’s needed from a private cloud is lots of lightweight copies of what a production environment looks like, with heavy sharing of physical hardware. Ease of administration and the ability to rapidly allocate and de-allocate resources of all kinds will be critical. This implies multiple layers of abstraction, which in turn implies a more casual attitude to resource usage.

“Latency Optimizers” need support for very large federated deployments

The 5G standard implies that we will go from 5-10 large data centers to anything up to 1,000 small ones. 5G expects a latency of 1ms, and since light travels only about 186 miles in a millisecond, a round trip means the data center can’t be more than 93 miles away, even assuming an instant response. Bear in mind that the internet currently takes around 4ms to get from New York to Philadelphia.
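
A quick sanity check on that arithmetic (this sketch assumes signals travel at the vacuum speed of light and the server replies instantly; real fiber carries light at roughly two-thirds of that speed, which makes the constraint even tighter):

```python
# Back-of-the-envelope: how far away can a data center be if the
# 5G round-trip latency budget is 1 ms and the response is instant?

SPEED_OF_LIGHT_MPS = 186_282   # miles per second in a vacuum; fiber is ~2/3 of this
BUDGET_SEC = 0.001             # 1 ms total, there and back

one_way_miles = SPEED_OF_LIGHT_MPS * BUDGET_SEC / 2
print(f"Maximum one-way distance: {one_way_miles:.0f} miles")  # ~93 miles
```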

Instead of having a private cloud architecture that’s cookie-cuttered to multiple locations, we’ll need centralized management of herds of up to 1,000 private clouds. Some of these will serve large urban areas and will be sensitive to efficiency. Others will be in sparsely populated rural areas and will rely on virtualization to deploy a full stack of 20 or so applications onto a cluster of three nodes in a shack somewhere. Note that the configurations won’t change much, as it’s a production system, but automated patching, master data management and failover to nearby sites will be important.

The “Public Private Cloud” folks

They want to compete with AWS or Azure, using OpenStack as the foundation for a commercial offering. This introduces a different challenge: they need to make sure everything that’s used is actually paid for. Running such a cloud is a tricky proposition – the major public clouds combine the pricing power of an oligopoly with economies of scale and years of experience. And many people want OpenStack precisely because they don’t want to hand over control to someone else. As a consequence, the future is ‘iffy’ for commercial public clouds that don’t offer a clear advantage or somehow differentiate themselves.

So before we all write off OpenStack …

The confusion and complexity we currently see in OpenStack are the result of all these different groups trying to use the same platform. It’s a consequence of too much interest, not too little. In addition, the different groups all have solid, commercially backed reasons for investing considerable time and effort in OpenStack. While it’s been subject to internal criticism, retrenchment and some bizarre decisions, the fundamentals that led to its existence haven’t changed. Will it continue to exist in its current form? I don’t know. But the need for what it aspires to do will continue, so I don’t think it will disappear.

Where VoltDB fits

How do we fit into all this? “Very well!” is the answer. People who want to get the maximum from their hardware appreciate the fact that we are one of the most efficient products on the market when it comes to resource usage. We also virtualize really well, for those OpenStack users who are focused on managing armies of developers and swarms of MIS systems. When it comes to minimizing latency we’re in the sweet spot as well – our whole architecture is designed to be as fast as possible. We also have a proven capability to work in a public cloud, either as part of a user application or as a stand-alone ‘DB as a service’ offering. Give VoltDB a try and see what you think!

The bottom line is that OpenStack is surrounded by noise and confusion. But beneath it all it offers real value, and is arguably the only credible path forward for private clouds. Everything else is just Horns and Valkyries…

Watch the Video

See David’s full commentary from the show in this video.
