
What are the challenges for modern Data Centers? Interview with David Gorbet.

by Roberto V. Zicari on March 25, 2014

“The real problem here is the word “silo.” To answer today’s data challenges requires a holistic approach. Your storage, network and compute need to work together.”–David Gorbet.

What are the challenges for modern data centers? On this topic I have interviewed David Gorbet, Vice President of Engineering at MarkLogic.
RVZ

Q1. Data centers are evolving to meet the demands and complexities imposed by increasing business requirements. What are the main challenges?

David Gorbet: The biggest business challenge is the rapid pace of change in both available data and business requirements. It’s no longer acceptable to spend years designing a data application, or the infrastructure to run it. You have to be able to iterate on functionality quickly. This means that your applications need to be developed in a much more agile manner, but you also need to be able to reallocate your infrastructure dynamically to the most pressing needs. In the era of Big Data this problem is exacerbated. The increasing volume and complexity of data under management is stressing both existing technologies and IT budgets. It’s not just a matter of scale, although traditional “scale-up” technologies do become very expensive as data volumes grow. It’s also a matter of complexity of data. Today a lot of data has a mix of structured and unstructured components, and the traditional solution to this problem is to split the structured components into an RDBMS, and use a search technology for the unstructured components. This creates additional complexity in the infrastructure, as different technology stacks are required for what really should be components of the same data.

Traditional technologies for data management are not agile. You have to spend an inordinate amount of time designing schemas and planning indexing strategies, both of which require pretty much full knowledge of the data and of the query patterns that deliver the application’s value. All of this has to be done before you can even load data. On the infrastructure side, even if you’ve embraced cloud technologies like virtualization, it’s unlikely you’re able to make good use of them at the data layer. Most database technologies are not architected to allow elastic expansion or contraction of capacity or compute power, which makes it hard to achieve many of the benefits (and cost savings) of cloud technologies.

To solve these problems you need to start thinking differently about your data center strategy. You need to be thinking about a data-centered data center, versus today’s more application-centered model.

Q2. You talked about a “data-centered” data center. What is it, and how does it differ from a classical data warehouse?

David Gorbet: To understand what I mean by “data-centered” data center, you have to think about the alternative, which is more application-centered. Today, if you have data that’s useful, you build an application or put it in a data warehouse to unlock its value. These are database applications, so you need to build out a database to power them. This database needs a schema, and that schema is optimized for the application. To build this schema, you need to understand both the data you’ll be using, and the queries that the application requires.
So you have to know in advance everything the application is going to do before you can build anything. What’s more, you then have to ETL this data from wherever it lives into the application-specific database.

Now, if you want another application, you have to do the same thing. Pretty soon, you have hundreds of data stores with data duplicated all over the place. Actually, it’s not really duplicated; it’s data derived from other data, because as you ETL the data you change its form, losing some of the context and combining what’s left with bits of data from other sources. That’s even worse than straight-up duplication, because provenance is seldom retained through this process, so it’s really hard to tell where data came from and trace it back to its source. Now imagine that you have to correct some data.
Can you be sure that the correction flowed through to every downstream system? Or what if you have to delete data due to a privacy issue, or change security permissions on data? Even with “small data” this is complicated, but it’s much harder and costlier with high volumes of data.

A “data-centered” data center is one that treats the data, its use, and its governance through its lifecycle as the primary consideration. It’s architected to allow a single management and governance model, and to bring the applications to the data rather than copying data to the applications. With the right technologies, you can build a data-centered data center that minimizes data duplication, gives you consistent data governance, and provides flexibility both in application development over the data and in scaling capacity up and down to match demand, allowing you to manage your data securely and cost-effectively throughout its lifecycle.

Q3. Data center resources are typically organized into three silos: compute, storage, and network. Is this a problem?

David Gorbet: It depends on your technology choices. Some data management technologies require direct-attached storage (DAS), so obviously you can’t manage storage separately with that kind of technology. Others can make use of either DAS or shared storage like SAN or NAS.
With the right technology, it’s not necessarily a problem to have storage managed independently from compute.
The real problem here is the word “silo.” To answer today’s data challenges requires a holistic approach. Your storage, network and compute need to work together.

Your question could also apply to application architectures. Traditionally, applications are built in a three-tiered architecture, with a DBMS for data management, an application server for business logic, and a front-end client where the UI lives. There are very good reasons for this architecture, and I believe it’s likely to be the predominant model for years to come. But even though business logic is supposed to reside in the app server, every enterprise DBMS supports stored procedures, and these are commonly used to leverage compute power near the data for cases where it would be too slow and inefficient to move data to the middle tier. Increasingly, enterprise DBMSes also have sophisticated built-in functions (and in many cases user-defined functions) to make it easy to do things that are most efficiently done right where the data lives. Analytic aggregate calculations are a good example of this. Compute doesn’t just reside in the middle tier.
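
To make the “compute near the data” point concrete, here is a minimal sketch, using Python’s built-in sqlite3 module purely as a stand-in for whatever DBMS you run, of the difference between shipping rows to the middle tier to aggregate them and letting the database compute the aggregate where the data lives:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE trades (id INTEGER PRIMARY KEY, notional REAL)")
conn.executemany("INSERT INTO trades (notional) VALUES (?)",
                 [(100.0,), (250.5,), (75.25,)])

# Inefficient at scale: ship every row to the middle tier, then aggregate there.
rows = conn.execute("SELECT notional FROM trades").fetchall()
total_in_app = sum(notional for (notional,) in rows)

# Compute near the data: the DBMS evaluates the aggregate and returns one number.
(total_in_db,) = conn.execute("SELECT SUM(notional) FROM trades").fetchone()

assert total_in_app == total_in_db
```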

This is nothing new, so why am I bringing it up? Because as data volumes grow larger, the problem of moving data out of the DBMS to do something with it is going to get a lot worse. Consider, for example, the problem faced by the National Cancer Institute. The current model for institutions wanting to do research based on genomic data is to download a data set and analyze it. But by the end of 2014, the Cancer Genome Atlas is expected to have grown from less than 500 TB to 2.5 PB. Just downloading 2.5 PB, even over a 10-gigabit network, would take almost a month.
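
The arithmetic behind that estimate is a quick back-of-the-envelope check (assuming the full 10 Gb/s is sustained, with no protocol overhead or contention):

```python
# 2.5 PB over a sustained 10 Gb/s link, ignoring protocol overhead and contention.
bits_to_move = 2.5e15 * 8      # 2.5 PB expressed in bits (~2.0e16)
link_bps = 10e9                # 10 gigabits per second
seconds = bits_to_move / link_bps
print(seconds / 86400)         # ~23 days, i.e. almost a month
```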

The solution? Bring more compute to the data. The implication? Twofold: First, methods for narrowing down data sets prior to acting on them are critical. This is why search technology is fast becoming a key feature of a next-generation DBMS. Search is the query language for unstructured data, and if you have complex data with a mix of structured and unstructured components, you need to be able to mix search and query seamlessly. Second, DBMS technologies need to become much more powerful so that they can execute sophisticated programs and computations efficiently where the data lives, scoped in real-time to a search that can narrow the input set down significantly. That’s the only way this stuff is going to get fast enough to happen in real-time. Another way of putting this is that the “M” in DBMS is going to increase in scope. It’s not enough just to store and retrieve data. Modern DBMS technology needs to be able to do complex, useful computations on it as well.
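
As a toy illustration of mixing search and query so that a computation is scoped to a search, the sketch below uses SQLite’s FTS5 full-text extension (assuming your Python sqlite3 build includes it) as a stand-in for the search capability of a next-generation DBMS; the table and field names are invented for the example:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE reports (id INTEGER PRIMARY KEY, risk_score REAL)")
conn.execute("CREATE VIRTUAL TABLE reports_fts USING fts5(body)")  # full-text index

docs = [
    (1, 0.8, "credit default swap on an underlying corporate bond"),
    (2, 0.3, "equity option on a technology index"),
    (3, 0.5, "interest rate swap referencing a corporate bond"),
]
for doc_id, score, body in docs:
    conn.execute("INSERT INTO reports VALUES (?, ?)", (doc_id, score))
    conn.execute("INSERT INTO reports_fts (rowid, body) VALUES (?, ?)", (doc_id, body))

# The search narrows the candidate set; the structured aggregate then runs only
# over the matching documents, all inside the database engine.
count, avg_risk = conn.execute("""
    SELECT COUNT(*), AVG(risk_score)
    FROM reports
    WHERE id IN (SELECT rowid FROM reports_fts WHERE reports_fts MATCH ?)
""", ("corporate bond",)).fetchone()
print(count, avg_risk)  # 2 0.65
```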

Q4. How do you build such a “data-centered” data center?

David Gorbet: First you need to change your mindset. Think about the data as the center of everything. Think about managing your data in one place, and bringing the application to the data by exposing data services off your DBMS. The implications for how you architect your systems are significant. Think service-oriented architectures and continuous deployment models.

Next, you need the right technology stack. One that can provide application functionality for transactions, search and discovery, analytics, and batch computation with a single governance and scale model. You need a storage system that gives great SLAs on high-value data and great TCO on lower-value data, without ETL. You need the ability to expand and contract compute power to serve the application needs in real time without downtime, and to run this infrastructure on premises or in the cloud.

You need the ability to manage data throughout its lifecycle, to take it offline for cost savings while leaving it available for batch analytics, and to bring it back online for real-time search, discovery or analytics within minutes if necessary. To power applications, you need the ability to create powerful, performant and secure data services and expose them right from where the data lives, providing the data in the format needed by your application on the fly.
We call this “schema on read.”
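
As a small sketch of what “schema on read” can look like, the snippet below stores documents as they arrive and projects them into the shape one particular application needs only at read time; the field names and the tiny in-memory “store” are purely illustrative, not any particular product’s API:

```python
import json

# Documents are stored as they arrive, with no application-specific schema upfront.
store = [
    json.dumps({"type": "trade", "id": "T1", "notional": 100.0,
                "counterparty": {"name": "Acme", "country": "US"}}),
    json.dumps({"type": "trade", "id": "T2", "notional": 250.5,
                "counterparty": {"name": "Globex"}}),   # no country recorded
]

def trades_for_risk_app():
    """A data service exposed near the data: each application gets the projection
    it needs, computed on the fly at read time rather than baked into a schema."""
    for raw in store:
        doc = json.loads(raw)
        yield {
            "trade_id": doc["id"],
            "notional": doc["notional"],
            # Tolerate documents that lack the optional nested field.
            "counterparty_country": doc.get("counterparty", {}).get("country", "unknown"),
        }

print(list(trades_for_risk_app()))
```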

Of course all this has to be enterprise class, with high availability, disaster recovery, security, and all the enterprise functionality your data deserves, and it has to fit in your shrinking IT budget. Sounds impossible, but the technology exists today to make this happen.

Q5. For what kind of mission-critical apps is such a “data-centered” data center useful?

David Gorbet: If you have a specific application that uses specific data, and you won’t need to incorporate new data sources into that application or use that data for another application, then you don’t need a data-centered data center. Unfortunately, I’m having a hard time thinking of such an application. Even dull line-of-business apps don’t stand alone anymore. The data they create and maintain is sent to a data warehouse for analysis.
The new mindset is that all data is potentially valuable, and that isn’t just restricted to data created in-house.
More and more data comes from outside the organization, whether in the form of reference data, social media, linked data, sensor data, log data… the list is endless.

A data-centered data center strategy isn’t about a specific application or application type. It’s about the way you have to think about your data in this new era.

Q6. How does Hadoop fit into this “data-centered” data center?

David Gorbet: Hadoop is a key enabling technology for the data-centered data center. HDFS is a great file system for storing loads of data cheaply.
I think of it as the new shared storage infrastructure for “big data.” Now HDFS isn’t fast, so if you need speed, you may need NAS, SAN, or even DAS or SSD. But if you have a lot of data, it’s going to be much cheaper to store it in HDFS than in traditional data center storage technologies. Hadoop MapReduce is a great technology for batch analytics. If you want to comb through a lot of data and do some pretty sophisticated stuff to it, this is a good way to do it. The downside to MapReduce is that it’s for batch jobs. It’s not real-time.
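
For readers who have not used MapReduce, here is a minimal batch job in the Hadoop Streaming style: a mapper and a reducer written as small Python scripts that read stdin and write stdout. The log format and the events-per-day example are assumptions made for this sketch, not anything specific from the interview:

```python
#!/usr/bin/env python
"""Count log events per day, Hadoop Streaming style.
Test locally:  cat events.log | python mr_events.py map | sort | python mr_events.py reduce
On a cluster, the same two functions would be supplied to Hadoop Streaming as the
-mapper and -reducer commands."""
import sys

def mapper():
    # Assumed input line format: "2014-03-25T10:01:02 user=42 action=click"
    for line in sys.stdin:
        day = line.split("T", 1)[0]
        print("%s\t1" % day)

def reducer():
    current_day, count = None, 0
    for line in sys.stdin:                       # input arrives sorted by key
        day, n = line.rstrip("\n").split("\t")
        if day != current_day and current_day is not None:
            print("%s\t%d" % (current_day, count))
            count = 0
        current_day = day
        count += int(n)
    if current_day is not None:
        print("%s\t%d" % (current_day, count))

if __name__ == "__main__":
    mapper() if sys.argv[1] == "map" else reducer()
```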

So Hadoop is an enabling technology for a data-centered data center, but it needs to be complemented with high-performance storage technologies for data that needs this kind of SLA, and more powerful analytic technologies for real-time search, discovery and analysis. Hadoop is not a DBMS, so you also need a DBMS with Hadoop to manage transactions, security, real-time query, etc.

Q7. What are the main challenges when designing an ETL strategy?

David Gorbet: ETL is hard to get right, but the biggest challenge is maintaining it. Every app has a v2, and usually this means new queries that require new data that needs a new schema and revised ETL. ETL also just fundamentally adds complexity to a solution.
It adds latency since many ETL jobs are designed to run in batches. It’s hard to track provenance of data through ETL, and it’s hard to apply data security and lifecycle management rules through ETL. This isn’t the fault of ETL or ETL tools.
It’s just that the model is fundamentally complex.

Q8. With Big Data analytics you don’t know in advance what data you’re going to need (or get in the future). What is the solution to this problem?

David Gorbet: This is a big problem for relational technologies, where you need to design a schema that can fit all your data up front.
The best approach here is to use a technology that does not require a predefined schema, and that allows you to store different entities with different schemas (or no schema) together in the same database and analyze them together.
A document database, which is a type of NoSQL database, is great for this, but be careful which one you choose because some NoSQL databases don’t do transactions and some don’t have the indexing capability you need to search and query the data effectively.
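
As a toy illustration of why the schema-free model helps, the snippet below keeps entities with different shapes in one collection and asks a single question across all of them. The plain Python list stands in for a document database and its indexes; it is not meant to show any specific product:

```python
# Entities with different shapes live together in one collection.
docs = [
    {"type": "trade",  "id": "T1", "notional": 100.0, "underlying": "XYZ Corp bond"},
    {"type": "email",  "id": "E7", "subject": "XYZ Corp exposure", "body": "see attached"},
    {"type": "filing", "id": "F3", "issuer": "XYZ Corp", "pages": 42},
]

# No upfront schema: search across every entity that mentions the issuer,
# regardless of which fields each document happens to have.
hits = [d for d in docs if any("XYZ Corp" in str(v) for v in d.values())]
print([(d["type"], d["id"]) for d in hits])   # all three documents match
```
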
Another trend is to use Semantic Web technology. This involves modeling data as triples, which represent assertions with a subject, a predicate, and an object.
Like “This derivative (subject) is based on (predicate) this underlying instrument (object).”
It turns out you can model pretty much any data that way, and you can invent new relationships (predicates) on the fly as you need them.
No schema required. It’s also easy to relate data entities together, since triples are ideal for modeling relationships. The challenge with this approach is that there’s still quite a bit of thought required to figure out the best way to represent your data as triples. To really make it work, you need to define rules about what predicates you’re going to allow and what they mean so that data is modeled consistently.
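
To make the triple model concrete, here is a small sketch using the open-source rdflib library for Python; the namespace and predicate names are invented for the example, echoing the derivative/underlying assertion above:

```python
from rdflib import Graph, Literal, Namespace

EX = Namespace("http://example.org/finance/")
g = Graph()

# "This derivative (subject) is based on (predicate) this underlying instrument (object)."
g.add((EX.derivative42, EX.isBasedOn, EX.bondXYZ))
# New relationships can be invented on the fly, with no schema change.
g.add((EX.bondXYZ, EX.issuedBy, Literal("XYZ Corp")))

# Relationships are first-class, so a traversal like "who issued the underlying
# of each derivative?" is a natural SPARQL query.
q = """
    PREFIX ex: <http://example.org/finance/>
    SELECT ?derivative ?issuer WHERE {
        ?derivative ex:isBasedOn ?underlying .
        ?underlying ex:issuedBy ?issuer .
    }
"""
for derivative, issuer in g.query(q):
    print(derivative, issuer)
```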

Q9. What is the cost to analyze a terabyte of data?

David Gorbet: That depends on what technologies you’re using, and what SLAs are required on that data.
If you’re ingesting new data as you analyze, and you need to feed some of the results of the analysis back to the data in real time (for example, if you’re analyzing risk on derivatives trades before confirming them, and executing business rules based on that), then you need fast disk, a fair amount of compute power, replicas of your data for HA failover, and additional replicas for DR. Including compute, this could cost you about $25,000/TB.
If your data is read-only and your analysis does not require high availability (for example, a compliance application to search those aforementioned derivatives transactions), you can probably use cheaper, more tightly packed storage and less powerful compute, and get by with about $4,000/TB. If you’re doing mostly batch analytics and can use HDFS as your storage, you can do this for as low as $1,500/TB.
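
Those per-terabyte figures make it easy to sketch what tiering buys you. The blended-cost calculation below uses the prices quoted in the answer; the assumed split of data across tiers (10/30/60) is hypothetical:

```python
# $/TB figures from the answer; the share of data on each tier is an assumption.
tiers = {
    "real-time with HA/DR": (25000, 0.10),
    "read-mostly":          (4000,  0.30),
    "batch on HDFS":        (1500,  0.60),
}

total_tb = 100
blended_cost = sum(price * share * total_tb for price, share in tiers.values())
print(blended_cost)             # 460000.0 for 100 TB
print(blended_cost / total_tb)  # 4600.0 per TB blended, vs 25000 if everything stayed hot
```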

This wide disparity in prices is exactly why you need a technology stack that can provide real-time capability for data that needs it, but can also provide great TCO for the data that doesn’t. There aren’t many technologies that can work across all these data tiers, which is why so many organizations have to ETL their data out of their transactional system to an analytic or archive system to get the cost savings they need. The best solution is to have a technology that can work across all these storage tiers and can manage migration of data through its lifecycle across these tiers seamlessly.
Again, this is achievable today with the right technology choices.

———————————-
David Gorbet, Vice President, Engineering, MarkLogic.
David has two decades of experience bringing to market some of the highest-volume applications and enterprise software in the world. David has shipped dozens of releases of business and consumer applications, server products and services ranging from open source to large-scale online services for businesses, and has twice helped start and grow billion-dollar software products.

Prior to MarkLogic, David helped pioneer Microsoft’s business online services strategy by founding and leading the SharePoint Online team. In addition to SharePoint Online, David has held a number of positions at Microsoft and elsewhere, working on products including Microsoft Office, Extricity B2Bi server software, and numerous incubation products.

David holds a Bachelor of Applied Science degree in Systems Design Engineering with an additional major in Psychology from the University of Waterloo, and an MBA from the University of Washington Foster School of Business.


Events

David Gorbet will be speaking at MarkLogic World in San Francisco from April 7-10, 2014.

