
"Trends and Information on Big Data, New Data Management Technologies, and Innovation."


Dec 7 14

On Big Data Analytics. Interview with Anthony Bak

by Roberto V. Zicari

“The biggest challenge facing data analytics is how to turn complex data into actionable information. One way to think about complexity is that there are many stories happening simultaneously in the data – some relevant to the problem being solved but most irrelevant. The goal of Big Data Analytics is to find the relevant story, reducing complexity to actionable information.”–Anthony Bak

On Big Data Analytics, I have interviewed Anthony Bak, Data Scientist and Mathematician at Ayasdi.

RVZ

Q1. What are the most important challenges for Big Data Analytics?

Anthony Bak: The biggest challenge facing data analytics is how to turn complex data into actionable information. One way to think about complexity is that there are many stories happening simultaneously in the data – some relevant to the problem being solved but most irrelevant. The goal of Big Data Analytics is to find the relevant story, reducing complexity to actionable information. How do we sort through all the stories in an efficient manner?

Historically, organizations extracted value from data by building data infrastructure and employing large teams of highly trained Data Scientists who spend months, and sometimes years, asking questions of data to find breakthrough insights. The probability of discovering these insights is low because there are too many questions to ask and not enough data scientists to ask them.

Ayasdi’s platform uses Topological Data Analysis (TDA) to automatically find the relevant stories in complex data and operationalize them to solve difficult and expensive problems. We combine machine learning and statistics with topology, allowing for ground-breaking automation of the discovery process.

Q2. How can you “measure” the value you extract from Big Data in practice?

Anthony Bak: We work closely with our clients to find valuable problems to solve. Before we tackle a problem, we quantify both its value to the customer and the outcome that delivers that value.

Q3. You use something called Topological Data Analysis. What is it?

Anthony Bak: Topology is the branch of pure mathematics that studies the notion of shape.
We use topology as a framework combining statistics and machine learning to form geometric summaries of Big Data spaces. These summaries allow us to understand the important and relevant features of the data. We like to say that “Data has shape and shape has meaning”. Our goal is to extract shapes from the data and then understand their meaning.

While there is no complete taxonomy of all geometric features and their meaning, there are a few simple patterns that we see in many data sets: clusters, flares and loops.

Clusters are the most basic property of shape a data set can have. They represent natural segmentations of the data into distinct pieces, groups or classes. An example might be finding two clusters of doctors committing insurance fraud.
Having two groups suggests that there may be two types of fraud represented in the data. From the shape we extract meaning or insight about the problem.

That said, many problems don’t naturally split into clusters, and we have to use other geometric features of the data to get insight. We often see that there’s a core of data points that are all very similar, representing “normal” behavior, and coming off the core we see flares of points. Flares represent ways and degrees of deviation from the norm.
An example might be gene expression levels for cancer patients, where people in various flares have different survival rates.

Loops can represent periodic behavior in the data set. An example might be patient disease profiles (clinical and genetic information), where patients go from being healthy, through various stages of illness, and finally back to health.
The loop in the data is formed not by a single patient but by sampling many patients at various stages of disease. Understanding and characterizing the disease path potentially allows doctors to give better, more targeted treatment.

Finally, a given data set can exhibit all of these geometric features simultaneously as well as more complicated ones that we haven’t described here. Topological Data Analysis is the systematic discovery of geometric features.
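
To make the idea of a geometric summary concrete, here is a minimal Mapper-style sketch in Python (NumPy and scikit-learn). It is only an illustration of the general cover/cluster/connect construction that TDA tools build on, not Ayasdi’s implementation; the lens choice, interval count, overlap and clustering settings are assumptions for a toy data set.

# Minimal Mapper-style sketch (illustrative only, not Ayasdi's code).
# Project data through a "lens", cover the lens range with overlapping
# intervals, cluster the points in each interval, and connect clusters
# that share points. The resulting graph is a geometric summary of the data.
import numpy as np
from sklearn.cluster import DBSCAN

def mapper_graph(X, lens, n_intervals=10, overlap=0.3, eps=2.0, min_samples=3):
    lo, hi = lens.min(), lens.max()
    width = (hi - lo) / n_intervals
    nodes = []  # each node is a set of row indices (one cluster in one interval)
    for i in range(n_intervals):
        start = lo + i * width - overlap * width
        end = lo + (i + 1) * width + overlap * width
        idx = np.where((lens >= start) & (lens <= end))[0]
        if len(idx) == 0:
            continue
        labels = DBSCAN(eps=eps, min_samples=min_samples).fit_predict(X[idx])
        for label in set(labels) - {-1}:              # ignore DBSCAN noise
            nodes.append(set(idx[labels == label]))
    # Connect clusters (from overlapping intervals) that share data points.
    edges = [(a, b) for a in range(len(nodes)) for b in range(a + 1, len(nodes))
             if nodes[a] & nodes[b]]
    return nodes, edges

# Toy usage: two well-separated Gaussian blobs should appear as two groups.
rng = np.random.default_rng(0)
X = np.vstack([rng.normal(0, 1, (200, 5)), rng.normal(6, 1, (200, 5))])
nodes, edges = mapper_graph(X, lens=X[:, 0])          # simplest lens: first coordinate
print(len(nodes), "summary nodes,", len(edges), "connections")

In a summary graph like this, clusters show up as separate connected components, flares as branches coming off a dense core, and loops as cycles.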

Q4. The core algorithm you use is called “Mapper”, developed at Stanford in the Computational Topology group by Gunnar Carlsson and Gurjeet Singh. How has your company, Ayasdi, turned this idea into a product?

Anthony Bak: Gunnar Carlsson, co-founder and Stanford University mathematics professor, is one of the leaders in a branch of mathematics called topology. While topology has been studied for the last 300 years, it’s in just the last 15 years that Gunnar has pioneered the application of topology to understand large and complex sets of data.

Between 2001 and 2005, DARPA and the National Science Foundation sponsored Gunnar’s research into what he called Topological Data Analysis (TDA). Tony Tether, the director of DARPA at the time, has said that TDA was one of the most important projects DARPA was involved in during his eight years at the agency.
Tony told the New York Times, “The discovery techniques of topological data analysis are going to have a huge impact, and Gunnar Carlsson is at the forefront of this research.”

That led to Gunnar teaming up with a group of others to develop a commercial product that could aid the efforts of life sciences, national security, oil and gas and financial services organizations. Today, Ayasdi already has customers in a broad range of industries, including at least 3 of the top global pharmaceutical companies, at least 3 of the top oil and gas companies and several agencies and departments inside the U.S. Government.

Q5. Do you have some use cases you can share where Topological Data Analysis has been implemented?

Anthony Bak: There is a well-known, 11-year-old data set from a breast cancer research project conducted by the Netherlands Cancer Institute-Antoni van Leeuwenhoek Hospital. The research looked at 272 cancer patients covering 25,000 different genetic markers. Scientists around the world have analyzed this data over and over again. In essence, everyone believed that anything that could be discovered from this data had been discovered.

Within a matter of minutes, Ayasdi was able to identify new, previously undiscovered populations of breast cancer survivors. Ayasdi’s discovery was recently published in Nature.

Using connections and visualizations generated from the breast cancer study, oncologists can map their own patients’ data onto the existing data set to custom-tailor triage plans. In a separate study, Ayasdi helped discover previously unknown biomarkers for leukaemia.

You can find additional case studies here.

Q6. Query-Based Approach vs. Query-Free Approach: could you please elaborate on this and explain the trade-off?

Anthony Bak: Since the creation of SQL in the 1980s, data analysts have tried to find insights by asking questions and writing queries. This approach has two fundamental flaws. First, all queries are based on human assumptions and bias. Secondly, query results only reveal slices of data and do not show relationships between similar groups of data. While this method can uncover clues about how to solve problems, it is a game of chance that usually results in weeks, months, and years of iterative guesswork.

Ayasdi’s insight is that the shape of the data – its flares, clusters, loops – tells you about natural segmentations, groupings and relationships in the data. This information forms the basis of a hypothesis to query and investigate further. The analytical process no longer starts with coming up with a hypothesis and then testing it; instead, we let the data, through its geometry, tell us where to look and what questions to ask.
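
To illustrate, in a deliberately simplified way, what “letting the data tell us where to look” can mean in practice: segment the data first, then let each segment’s distinguishing features suggest the hypotheses to investigate. The sketch below is generic Python (pandas and scikit-learn) with made-up feature names; it is not Ayasdi’s product and uses ordinary k-means rather than TDA.

# Hedged sketch: segment first, then let the segments suggest the questions.
# Hypothetical insurance-claims features; not Ayasdi's product or methodology.
import numpy as np
import pandas as pd
from sklearn.cluster import KMeans
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(1)
claims = pd.DataFrame({
    "claims_per_month": rng.gamma(2.0, 5.0, 1000),
    "avg_claim_amount": rng.normal(300, 80, 1000),
    "distinct_procedures": rng.integers(1, 40, 1000),
})

X = StandardScaler().fit_transform(claims)
claims["segment"] = KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(X)

# For each segment, which features deviate most from the overall mean?
# Those deviations become the starting hypotheses, rather than a hand-written query.
overall = claims.drop(columns="segment").mean()
for seg, group in claims.groupby("segment"):
    deviation = (group.drop(columns="segment").mean() - overall) / overall
    top = deviation.abs().sort_values(ascending=False).head(2)
    print(f"segment {seg}: investigate {list(top.index)}")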

Q7 Anything else you wish to add?

Anthony Bak: Topological data analysis represents a fundamental new framework for thinking about, analyzing and solving complex data problems. While I have emphasized its geometric and topological properties it’s important to point out that TDA does not replace existing statistical and machine learning methods. 
Instead, it forms a framework that utilizes existing tools while gaining additional insight from the geometry.

I like to say that statistics and geometry form orthogonal toolsets for analyzing data; to get the best understanding of your data, you need to leverage both. TDA is the framework for doing just that.

———————
Anthony Bak is currently a Data Scientist and mathematician at Ayasdi. Prior to Ayasdi, Anthony was at Stanford University where he worked with Ayasdi co-founder Gunnar Carlsson on new methods and applications of Topological Data Analysis. He did his Ph.D. work in algebraic geometry with applications to string theory.

Resources

- Extracting insights from the shape of complex data using topology
P. Y. Lum, G. Singh, A. Lehman, T. Ishkanov, M. Vejdemo-Johansson, M. Alagappan, J. Carlsson & G. Carlsson
Nature, Scientific Reports 3, Article number: 1236 doi:10.1038/srep01236, 07 February 2013

- Topological Methods for the Analysis of High Dimensional Data Sets and 3D Object Recognition

Related Posts

- Predictive Analytics in Healthcare. Interview with Steve Nathan, ODBMS Industry Watch, August 26, 2014

Follow ODBMS.org on Twitter: @odbmsorg

##

Nov 18 14

On Mobile Data Management. Interview with Bob Wiederhold

by Roberto V. Zicari

“We see mobile rapidly emerging as a core requirement for data management. Any vendor who is serious about being a leader in the next generation database market has to have a mobile strategy.”
–Bob Wiederhold.

I have interviewed Bob Wiederhold, President and Chief Executive Officer of Couchbase.

RVZ

Q1. On June 26, you announced a $60 million Series E round of financing. What are Couchbase’s chances of becoming a major player in the database market (and not only in the NoSQL market)? And what is your strategy for achieving this?

Bob Wiederhold: Enterprises are moving from early NoSQL validation projects to mission critical implementations.
As NoSQL deployments evolve to support the core business, requirements for performance at scale and completeness increase. Couchbase Server is the most complete offering on the market today, delivering the performance, scalability and reliability that enterprises require.
Additionally, we see mobile rapidly emerging as a core requirement for data management. Any vendor who is serious about being a leader in the next generation database market has to have a mobile strategy.
At this point, we are the only NoSQL vendor offering an embedded mobile database and the sync needed to manage data between the cloud, the device and other devices. We believe that having the most complete, best performing operational NoSQL database along with a comprehensive mobile offering, uniquely positions us for leadership in the NoSQL market.

Q2. Why is Couchbase Lite so strategically important for you?

Bob Wiederhold: First, because the world is going mobile. That is indisputable. Mobile initiatives top the list of every IT department. As I said above, if you don’t have a mobile data management offering, you are not looking at the complete needs of the developer or the enterprise.
Second, let’s level set on Couchbase Lite. Couchbase Lite is our offering for an embedded mobile JSON database.
Our complete mobile offering, Couchbase Mobile, includes Couchbase Server – for data management in the cloud, and Sync Gateway for synchronization of data stored on the device with other devices, or the database in the cloud.
Today, because connectivity is unpredictable, data synchronization challenges force developers to choose either a totally online (data stored in the cloud) or totally offline (data stored on the device) data management strategy.
This approach limits functionality: when the network is unavailable, online apps may freeze and not work at all. People want access to their applications (travel, expense reports, multi-user collaboration and so on) whether they’re online or not.
Couchbase Mobile is the only NoSQL offering available that allows developers to build JSON applications that work whether an application is online or off, and manages the synchronization of the data between those applications and the cloud, or other devices. This is revolutionary for the mobile world and we are seeing tremendous interest from the mobile developer community.
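
As a rough illustration of the offline-first pattern described above (local writes regardless of connectivity, with changes queued and exchanged when a connection is available), here is a small Python sketch. It is a conceptual toy, not the Couchbase Lite or Sync Gateway API; the push/pull callables and the last-writer-wins merge are assumptions made for brevity.

# Conceptual offline-first sketch (NOT the Couchbase Lite / Sync Gateway API).
# Writes always land in a local store; changes are queued and exchanged with a
# remote replica whenever connectivity happens to be available.
import json, time, uuid

class LocalStore:
    def __init__(self):
        self.docs = {}       # local JSON documents, keyed by id
        self.pending = []    # changes not yet pushed to the remote replica

    def save(self, doc):
        doc_id = doc.get("_id") or str(uuid.uuid4())
        doc = {**doc, "_id": doc_id, "updated_at": time.time()}
        self.docs[doc_id] = doc          # the app keeps working while offline
        self.pending.append(doc)         # remember to sync later
        return doc_id

    def sync(self, push, pull):
        """push/pull are hypothetical callables talking to the remote replica."""
        for doc in self.pending:
            push(doc)                    # send local changes
        self.pending.clear()
        for doc in pull():               # merge remote changes
            current = self.docs.get(doc["_id"])
            # naive last-writer-wins; real systems track revisions per document
            if current is None or doc["updated_at"] > current["updated_at"]:
                self.docs[doc["_id"]] = doc

# Usage: documents are saved whether online or not, and synced when possible.
store = LocalStore()
store.save({"type": "expense_report", "amount": 42.50})
store.sync(push=lambda d: print("pushed", json.dumps(d)), pull=lambda: [])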

Q3. What can enterprises do with a NoSQL mobile database that they would not be able to do with a non-mobile database?

Bob Wiederhold: Offline access and syncing have been too time- and resource-intensive for mobile app developers. With Couchbase Mobile, developers don’t have to spend months, or years, building a solution that can store unstructured data on the device and sync that data with external sources – whether that is the cloud or another device. With Couchbase Mobile, developers can easily create mobile applications that are not tied to connectivity or limited by sync considerations. This empowers developers to build an entirely new class of enterprise applications that go far beyond what is available today.

Q4. What kinds of businesses and applications will benefit when people use a NoSQL database on their mobile devices? Can you give us some examples?

Bob Wiederhold: Nearly every business can benefit from the use of a complete mobile solution to build always available apps that work offline or online. One business example is our customer Infinite Campus.
Focused on educational transformation through the use of information technology, Infinite Campus is looking at Couchbase Lite as a solution that will enable students to complete their homework modules even when they don’t have access to a network outside of school. Instructional videos and homework assignments can be selectively pushed to students’ mobile devices when they are online at school.
Using Couchbase Lite, students can work online at school and then complete their homework assignments anywhere – on or offline. And the data seamlessly syncs across devices and between users, so teachers and students can participate in real-time Q&A chat sessions during lectures.

Q5. Do you have some customers who have gone into production with that?

Bob Wiederhold: The product is new, but we already have several customers that are live.
In addition to Microsoft, we have several companies around the world. You can check out one iOS app by Spraed, which uses Couchbase Server (running on AWS), Sync Gateway and Couchbase Lite.

Q6. Couchbase Server is a JSON document-based database. Why this design choice?

Bob Wiederhold: The world is changing. Businesses need to be agile and responsive.
Relational databases, with rigid schema design, don’t allow for fast change. JSON is the next-generation architecture that businesses are increasingly using for mission-critical applications, because the technology allows them to manage and react to all aspects of big data – the volume, variety and velocity of data, as well as large user populations – and to do so in a cloud-based landscape.

Q7. Do you have any plans to work with cloud providers?

Bob Wiederhold: We already work with many cloud providers. We have a great relationship with Amazon Web Services and many of our customers, including WebMD and Viber, run on AWS.
We also have partnerships and customers running on Windows Azure, GoGrid, and others. More and more organizations are moving infrastructure to the cloud, and we will continue expanding our ecosystem to give our customers the flexibility to choose the best deployment options for their businesses.

Q8. Do you see happening any convergence between operational data management and analytical data processing? And if yes, how?

Bob Wiederhold: Yes. Analytics can happen in real time or near real time in operational stores, as well as in batch mode. We have several customers who are deploying, or have deployed, complete solutions to integrate operational big data with real-time analytical processing. LivePerson has done some incredibly innovative work here. They have been very open about the work they are doing, and you can hear them tell their story here.

Q9. Do you have any plans to integrate your system with platforms for use in big data analytics?

Bob Wiederhold: Absolutely. We are integrated today with many platforms, including Hadoop via our Couchbase Hadoop connector, and have many customers using Couchbase Server with both real-time and batch-mode analytics platforms. See the Avira and LivePerson presentations for examples. We continue to work with big data ISVs to ensure our customers can easily integrate their systems with the analytics system of their choosing.

—————
Bob Wiederhold, President and Chief Executive Officer, Couchbase

Bob has more than 25 years of high technology experience. Until its acquisition by IBM in 2008, Bob served as chairman, CEO, and president of Transitive Corporation, the worldwide leader in cross-platform virtualization with over 20 million users. Previously, he was president and CEO of Tality Corporation, the worldwide leader in electronic design services, whose revenues grew to almost $200 million and which had 1,500 employees worldwide.
Bob held several executive general management positions at Cadence Design Systems, Inc., an electronic design automation company, which he joined in 1985 as an early stage start-up and helped to grow to more than $1.5 billion during his 13 years at the company. Bob also headed High Level Design Systems, a successful electronic design automation start-up that was acquired by Cadence in 1996. Bob has extensive board experience having served on both public (Certicom, HLDS) and private company boards (Snaketech, Tality, Transitive, FanfareGroup).

Resources

- Magic Quadrant for Operational Database Management Systems. 16 October 2014. Analyst(s): Donald Feinberg, Merv Adrian, Nick Heudecker, Gartner.

Related Posts

- Using NoSQL at BMW. Interview with Jutta Bremm and Peter Palm. ODBMS Industry Watch, September 29, 2014

- NoSQL for the Internet of Things. Interview with Mike Williams. ODBMS Industry Watch, June 5, 2014

Follow ODBMS.org on Twitter: @odbmsorg

##

Nov 2 14

On Hadoop RDBMS. Interview with Monte Zweben.

by Roberto V. Zicari

“HBase and Hadoop are the only technologies proven to scale to dozens of petabytes on commodity servers, currently being used by companies such as Facebook, Twitter, Adobe and Salesforce.com.”–Monte Zweben.

Is it possible to turn Hadoop into an RDBMS? On this topic, I have interviewed Monte Zweben, Co-Founder and Chief Executive Officer of Splice Machine.

RVZ

Q1. What are the main challenges of applications and operational analytics that support real-time, interactive queries on data updated in real-time for Big Data?

Monte Zweben: Let’s break down “real-time, interactive queries on data updated in real-time for Big Data”. “Real-time, interactive queries” means that results need to be returned in milliseconds to a few seconds.
For “Data updated in real-time” to happen, changes in data should be reflected in milliseconds. “Big Data” is often defined as dramatically increased volume, velocity, and variety of data. Of these three attributes, data volume typically dominates, because unlike the other attributes, its growth is virtually unbounded.

Traditional RDBMSs like MySQL or Oracle can support real-time, interactive queries on data updated in real time, but they struggle to handle Big Data. They can only scale up on larger servers that can cost hundreds of thousands, if not millions, of dollars per server.

Big Data technologies such as Hadoop can easily handle Big Data volumes with their ability to scale out on commodity hardware. However, with their batch analytics heritage, they often struggle to provide real-time, interactive queries. They also lack the ACID transactions needed to support data updated in real time.

So, real-time applications and operational analytics have had to choose between real-time, interactive queries on data updated in real time and Big Data volumes. With Splice Machine, these applications can have the best of both worlds: real-time interactive queries, the reliability of real-time updates with ACID transactions, and the ability to handle Big Data volumes with a 10x price/performance improvement over traditional RDBMSs.

Q2. You suggested that companies should replace their traditional RDBMSs. Why and when? Do you really think this is always possible? What about legacy systems?

Monte Zweben: Companies should consider replacing their traditional RDBMSs when they experience significant cost or scaling issues. Our informal surveys of customers indicate that up to half of traditional RDBMSs experience cost or scaling issues. The biggest barrier to migrating from a traditional RDBMS to a new database like Splice Machine is converting custom stored procedure code (e.g., PL/SQL). Operational analytics often have limited custom stored procedure code, so the migration process is generally straightforward.

Operational applications typically have thousands of lines of custom stored procedure code, but in extreme cases it can run into hundreds of thousands to millions of lines of code. There are commercially supported tools that will convert from PL/SQL to the Java needed for Splice Machine. We have typically seen them convert with 70-95% accuracy, but it will obviously depend on the complexity of the original code. Financially, migration makes sense for many companies to get an ongoing 10x price/performance improvement, but there are cases when it does not make sense because converting custom code is too expensive.

Q3. Is scale-out the solution to Big Data at scale? Why?

Monte Zweben: Scale-out is definitely the critical technology to making Big Data work at scale. Scale-out leverages inexpensive, commodity hardware to parallelize queries to easily achieve a 10x price/performance improvement over existing database technologies.

Q4. You have announced your real-time relational database management system. What is special about Splice Machine’s Hadoop RDBMS?

Monte Zweben: We are the only Hadoop RDBMS. There are obviously many RDBMSs, but we are the only one with scale-out technology from Hadoop. Hadoop is the only scale-out technology proven to scale to dozens of petabytes on commodity hardware at companies like Facebook. There are other SQL-on-Hadoop technologies, but none of them can support real-time ACID transactions.

Q5. Hadoop-connected SQL databases do not eliminate “silos”. How do you handle this?

Monte Zweben: We are not a database that has a connector to Hadoop. We are tightly integrated into Hadoop, using HBase and HDFS as our storage layer.

Q6. How did you manage to move Hadoop beyond its batch analytics heritage to power operational applications and real-time analytics?

Monte Zweben: At its core, Hadoop is a distributed file system (HDFS) where data cannot be updated or deleted. If you want to update or delete anything, you have to reload all the data (i.e., batch load). As a file system, it has very limited ability to seek specific data; instead, you use Java MapReduce programs to scan all of the data to find the data you need. It can easily take hours or even days for queries to return data (i.e., batch analytics). There is no way you could support a real-time application on top of HDFS and MapReduce.

By using HBase (a real-time key value store on top of HDFS), Splice Machine provides a full RDBMS on top of Hadoop.
You can now get real-time, interactive queries on real-time updated data on Hadoop, necessary to support operational applications and analytics.
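
The difference this makes can be sketched with a few lines of client code. The example below uses happybase, a commonly used Python client for HBase; the host, table, column family and row keys are hypothetical, and this is a generic HBase usage sketch rather than Splice Machine code.

# Why HBase enables real-time access on top of HDFS: instead of a batch
# MapReduce scan over files, a key-value put/get touches a single row directly.
import happybase

connection = happybase.Connection("hbase-host")    # assumed host name
orders = connection.table("orders")                # assumed table

# Real-time update: lands in one row; no batch reload of the data set.
orders.put(b"order:1001", {b"cf:status": b"shipped", b"cf:amount": b"42.50"})

# Real-time read: a point lookup by row key, typically returning in milliseconds.
row = orders.row(b"order:1001")
print(row.get(b"cf:status"))

# The batch-style alternative is a scan over a key range (or, with plain
# HDFS/MapReduce, over every file): fine for analytics, too slow for OLTP.
for key, data in orders.scan(row_prefix=b"order:"):
    pass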

Q7. How do you use Apache Derby™ and Apache HBase™/Hadoop?

Monte Zweben: Splice Machine marries two proven technology stacks: Apache Derby for ANSI SQL and HBase/Hadoop for scale-out technology. With over 15 years of development behind it, Apache Derby is a Java-based SQL database. Splice Machine chose Derby because it is a full-featured ANSI SQL database, lightweight (<3 MB), and easy to embed into the HBase/Hadoop stack.

HBase and Hadoop are the only technologies proven to scale to dozens of petabytes on commodity servers, currently being used by companies such as Facebook, Twitter, Adobe and Salesforce.com. Splice Machine chose HBase and Hadoop because of their proven auto-sharding, replication, and failover technology.

Q8. Why did you replace the storage engine in Apache Derby with HBase?

Monte Zweben: Apache Derby has a native shared-disk (i.e., non-distributed) storage layer. We replaced that storage layer with HBase to provide an auto-sharded, distributed computing storage layer.

Q9. Why did you redesign the planner, optimizer, and executor of Apache Derby?

Monte Zweben: We redesigned the planner, optimizer, and executor of Derby because Splice Machine has a distributed computing infrastructure instead of its old shared-disk storage. Distributed computing requires a functional re-architecting because computation must be distributed to where the data is, instead of moving the data to the computation.

Q10. What are the main benefits for developers and database architects who build applications?

Monte Zweben: There are two main benefits to Splice Machine for developers and database architects. First, no longer is data scaling a barrier to using massive amounts of data in an application; you no longer need to prune data or rewrite applications to do unnatural acts like manual sharding. Second, you can enjoy the scaling with all the critical features of an RDBMS – strong consistency, joins, secondary indexes for fast lookups, and reliable updates with transactions. Without those features, developers have to implement those functions for each application, a costly, time-consuming, and error-prone process.

———————
Monte Zweben, Co-Founder and Chief Executive Officer, Splice Machine

A technology industry veteran, Monte’s early career was spent with the NASA Ames Research Center as the Deputy Branch Chief of the Artificial Intelligence Branch, where he won the prestigious Space Act Award for his work on the Space Shuttle program. Monte then founded and was the Chairman and CEO of Red Pepper Software, a leading supply chain optimization company, which merged in 1996 with PeopleSoft, where he was VP and General Manager, Manufacturing Business Unit.

In 1998, Monte was the founder and CEO of Blue Martini Software – the leader in e-commerce and multi-channel systems for retailers. Blue Martini went public on NASDAQ in one of the most successful IPOs of 2000, and is now part of Red Prairie. Following Blue Martini, he was the chairman of SeeSaw Networks, a digital, place-based media company, and is the chairman of Clio Music, an advanced music research and development company. Monte is also the co-author of Intelligent Scheduling and has published articles in the Harvard Business Review and various computer science journals and conference proceedings.

Zweben currently serves on the Board of Directors of Rocket Fuel Inc. as well as the Dean’s Advisory Board for Carnegie Mellon’s School of Computer Science. Monte’s involvement with CMU, which has been a long-time leader in distributed computing and Big Data research, helped inspire the original concept behind Splice Machine.

Resources

ODBMS.org: Several Free Resources on Hadoop.

Related Posts

- AsterixDB: Better than Hadoop? Interview with Mike Carey. ODBMS INDUSTRY WATCH, October 22, 2014

- Hadoop at Yahoo. Interview with Mithun Radhakrishnan. ODBMS INDUSTRY WATCH, September 21, 2014

- On the Hadoop market. Interview with John Schroeder. ODBMS INDUSTRY WATCH, June 30, 2014

Follow ODBMS.org on Twitter: @odbmsorg

##

Oct 22 14

AsterixDB: Better than Hadoop? Interview with Mike Carey

by Roberto V. Zicari

“To distinguish AsterixDB from current Big Data analytics platforms – which query but don’t store or manage Big Data – we like to classify AsterixDB as being a “Big Data Management System” (BDMS, with an emphasis on the “M”)”–Mike Carey.

Mike Carey and his colleagues have been working on a new data management system for Big Data called AsterixDB.

The AsterixDB Big Data Management System (BDMS) is the result of approximately four years of R&D involving researchers at UC Irvine, UC Riverside, and Oracle Labs. The AsterixDB code base currently consists of over 250K lines of Java code that has been co-developed by project staff and students at UCI and UCR.

The AsterixDB project has been supported by the U.S. National Science Foundation as well as by several generous industrial gifts.

RVZ

Q1. Why build a new Big Data Management System?

Mike Carey: When we started this project in 2009, we were looking at a “split universe” – there were your traditional parallel data warehouses, based on expensive proprietary relational DBMSs, and then there was the emerging Hadoop platform, which was free but low-function in comparison and wasn’t based on the many lessons known to the database community about how to build platforms to efficiently query large volumes of data. We wanted to bridge those worlds, and handle “modern data” while we were at it, by taking into account the key lessons from both sides.

To distinguish AsterixDB from current Big Data analytics platforms – which query but don’t store or manage Big Data – we like to classify AsterixDB as being a “Big Data Management System” (BDMS, with an emphasis on the “M”). 
We felt that the Big Data world, once the initial Hadoop furor started to fade a little, would benefit from having a platform that could offer things like:

  • a flexible data model that could handle data scenarios ranging from “schema first” to “schema never”;
  • a full query language with at least the expressive power of SQL;
  • support for data storage, data management, and automatic indexing;
  • support for a wide range of query sizes, with query processing cost being proportional to the given query;
  • support for continuous data ingestion, hence the accumulation of Big Data;
  • the ability to scale up gracefully to manage and query very large volumes of data using commodity clusters; and,
  • built-in support for today’s common “Big Data data types”, such as textual, temporal, and simple spatial data.

So that’s what we set out to do.

Q2. What was wrong with the current Open Source Big Data Stack?

Mike Carey: First, we should mention that some reviewers back in 2009 thought we were crazy or stupid (or both) to not just be jumping on the Hadoop bandwagon – but we felt it was important, as academic researchers, to look beyond Hadoop and be asking the question “okay, but after Hadoop, then what?” 
We recognized that MapReduce was great for enabling developers to write massively parallel jobs against large volumes of data without having to “think parallel” – just focusing on one piece of data (map) or one key-sharing group of data (reduce) at a time. As a platform for “parallel programming for dummies”, it was (and still is) very enabling! It also made sense, for expedience, that people were starting to offer declarative languages like Pig and Hive, compiling them down into Hadoop MapReduce jobs to improve programmer productivity – raising the level much like what the database community did in moving to the relational model and query languages like SQL in the 70’s and 80’s.

One thing that we felt was wrong for sure in 2009 was that higher-level languages were being compiled into an assembly language with just two instructions, map and reduce. We knew from Ted Codd and relational history that more instructions – like the relational algebra’s operators – were important, and recognized that the data sorting that Hadoop always does between map and reduce wasn’t always needed.
Trying to simulate everything with just map and reduce on Hadoop made “get something better working fast” sense, but not longer-term technical sense. As for HDFS, what seemed “wrong” about it under Pig and Hive was its being based on giant byte stream files and not on “data objects”, which basically meant file scans for all queries and lack of indexing. We decided to ask “okay, suppose we’d known that Big Data analysts were going to mostly want higher-level languages – what would a Big Data platform look like if it were built ‘on purpose’ for such use, instead of having incrementally evolved from HDFS and Hadoop?”

Again, our idea was to try and bring together the best ideas from both the database world and the distributed systems world. (I guess you could say that we wanted to build a Big Data Reese’s Cup…)

Q3. AsterixDB has been designed to manage vast quantities of semi-structured data. How do you define semi-structured data?

Mike Carey: In the late 90’s and early 2000’s there was a bunch of work on that – on relaxing both the rigid/flat nature of the relational model as well as the requirement to have a separate, a priori specification of the schema (structure) of your data. We felt that this flexibility was one of the things – aside from its “free” price point – drawing people to the Hadoop ecosystem (and the key-value world) instead of the parallel data warehouse ecosystem.
In the Hadoop world you can start using your data right away, without spending 3 months in committee meetings to decide on your schema and indexes and getting DBA buy-in. To us, semi-structured means schema flexibility, so in AsterixDB, we let you decide how much of your schema you have to know and/or choose to reveal up front, and how much you want to leave to be self-describing and thus allow it to vary later. And it also means not requiring the world to be flat – so we allow nesting of records, sets, and lists. And it also means dealing with textual data “out of the box”, because there’s so much of that now in the Big Data world.

Q4. The motto of your project is “One Size Fits a Bunch”. You claim that AsterixDB can offer better functionality, manageability, and performance than gluing together multiple point solutions (e.g., Hadoop + Hive + MongoDB). Could you please elaborate on this?

Mike Carey: Sure. If you look at current Big Data IT infrastructures, you’ll see a lot of different tools and systems being tied together to meet an organization’s end-to-end data processing requirements. In between systems and steps you have the glue – scripts, workflows, and ETL-like data transformations – and if some of the data needs to be accessible faster than a file scan, it’s stored not just in HDFS, but also in a document store or a key-value store.
This just seems like too many moving parts. We felt we could build a system that could meet more (not all!) of today’s requirements, like the ones I listed in my answer to the first question.
If your data is in fewer places or can take a flight with fewer hops to get the answers, that’s going to be more manageable – you’ll have fewer copies to keep track of and fewer processes that might have hiccups to watch over. If you can get more done in one system, obviously that’s more functional. And in terms of performance, we’re not trying to out-perform the specialty systems – we’re just trying to match them on what each does well. If we can do that, you can use our new system without needing as many puzzle pieces and can do so without making a performance sacrifice.
We’ve recently finished up a first comparison of how we perform on tasks that systems like parallel relational systems, MongoDB, and Hive can do – and things look pretty good so far for AsterixDB in that regard.

Q5. AsterixDB has been combining ideas from three distinct areas — semi-structured data management, parallel databases, and data-intensive computing. Could you please elaborate on that?

Mike Carey: Our feeling was that each of these areas has some ideas that are really important for Big Data. Borrowing from semi-structured data ideas, but also more traditional databases, leads you to a place where you have flexibility that parallel databases by themselves do not. Borrowing from parallel databases leads to scale-out that semi-structured data work didn’t provide (since scaling is orthogonal to data model) and with query processing efficiencies that parallel databases offer through techniques like hash joins and indexing – which MapReduce-based data-intensive computing platforms like Hadoop and its language layers don’t give you. Borrowing from the MapReduce world leads to the open-source “pricing” and flexibility of Hadoop-based tools, and argues for the ability to process some of your queries directly over HDFS data (which we call “external data” in AsterixDB, and do also support in addition to managed data).

Q6. How does the AsterixDB Data Model compare with the data models of NoSQL data stores, such as document databases like MongoDB and CouchBase, simple key/value stores like Riak and Redis, and column-based stores like HBase and Cassandra?

Mike Carey: AsterixDB’s data model is flexible – we have a notion of “open” versus “closed” data types – it’s a simple idea but it’s unique as far as we know. When you define a data type for records to be stored in an AsterixDB dataset, you can choose to pre-define any or all of the fields and types that objects to be stored in it will have – and if you mark a given type as being “open” (or let the system default it to “open”), you can store objects there that have those fields (and types) as well as any/all other fields that your data instances happen to have at insertion time.
Or, if you prefer, you can mark a type used by a dataset as “closed”, in which case AsterixDB will make sure that all inserted objects will have exactly the structure that your type definition specifies – nothing more and nothing less.
(We do allow fields to be marked as optional, i.e., nullable, if you want to say something about their type without mandating their presence.)

What this gives you is a choice!  If you want to have the total, last-minute flexibility of MongoDB or Couchbase, with your data being self-describing, we support that – you don’t have to predefine your schema if you use data types that are totally open. (The only thing we insist on, at the moment, is that every type must have a key field or fields – we use keys when sharding datasets across a cluster.)

Structurally, our data model was JSON-inspired – it’s essentially a schema language for a JSON superset – so we’re very synergistic with MongoDB or Couchbase data in that regard. 
On the other end of the spectrum, if you’re still a relational bigot, you’re welcome to make all of your data types be flat – don’t use features like nested records, lists, or bags in your record definitions – and mark them all as “closed” so that your data matches your schema. With AsterixDB, we can go all the way from traditional relational to “don’t ask, don’t tell”. As for systems with BigTable-like “data models” – I’d personally shy away from calling those “data models”.
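
The open-versus-closed distinction is easy to mimic in a few lines of ordinary code, which may help make it concrete. The snippet below is a toy Python illustration of the concept only; it is not AsterixDB’s ADM syntax or implementation, and the type and field names are invented.

# Toy illustration of "open" vs "closed" types (Python, not AsterixDB ADM/AQL).
# A closed type rejects undeclared fields; an open type requires the declared
# fields but lets any extra, self-describing fields ride along.
def validate(record, declared_fields, open_type=True):
    problems = [f for f, t in declared_fields.items()
                if f not in record or not isinstance(record[f], t)]
    if problems:
        raise ValueError(f"missing or mistyped fields: {problems}")
    extras = set(record) - set(declared_fields)
    if extras and not open_type:
        raise ValueError(f"closed type does not allow extra fields: {extras}")
    return record

user_type = {"id": int, "name": str}     # declared fields (id acting as the key)

validate({"id": 1, "name": "Ann", "lang": "en"}, user_type, open_type=True)   # ok
validate({"id": 2, "name": "Bo"}, user_type, open_type=False)                 # ok
# validate({"id": 3, "name": "Cy", "lang": "fr"}, user_type, open_type=False)
# would raise: closed type does not allow extra fields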

Q7. How do you handle horizontal scaling? And vertical scaling?

Mike Carey: We scale out horizontally using the same sort of divide-and-conquer techniques that have been used in commercial parallel relational DBMSs for years now, and more recently in Hadoop as well. That is, we horizontally partition both data (for storage) and queries (when processed) across the nodes of commodity clusters. Basically, our innards look very much like those of systems such as Teradata, Parallel DB2, or PDW from Microsoft – we use join methods like parallel hybrid hash joins, and we pay attention to how data is currently partitioned to avoid unnecessary repartitioning – but we have a data model that’s way more flexible. And we’re open source and free….
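
To make the partitioning idea concrete, here is a minimal Python sketch of hash partitioning and a co-partitioned join: both datasets are sharded by a hash of the same key, so each node can join its local partitions without any repartitioning. This is purely illustrative, not AsterixDB/Hyracks code, and the datasets and node count are invented.

# Minimal sketch of hash partitioning plus a node-local (co-partitioned) join.
# Illustrative only; not AsterixDB/Hyracks code.
NUM_NODES = 4

def partition(records, key):
    shards = [[] for _ in range(NUM_NODES)]
    for r in records:
        shards[hash(r[key]) % NUM_NODES].append(r)   # route record by key hash
    return shards

users = [{"uid": i, "name": f"user{i}"} for i in range(10)]
msgs = [{"uid": i % 10, "text": f"msg{i}"} for i in range(30)]

user_shards = partition(users, "uid")
msg_shards = partition(msgs, "uid")

# Because both sides were partitioned on "uid", matching records are guaranteed
# to live in the same shard, so each node can join without moving data.
for node in range(NUM_NODES):
    index = {u["uid"]: u for u in user_shards[node]}
    joined = [(index[m["uid"]]["name"], m["text"])
              for m in msg_shards[node] if m["uid"] in index]
    print(f"node {node}: {len(joined)} joined rows")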

We scale vertically (within one node) in two ways. First of all, we aren’t memory-dependent in the way that many of the current Big Data analytics solutions are; it’s not the case that you have to buy a big enough cluster so that your data, or at least your intermediate results, can be memory-resident.
Instead, our physical operators (for joins, sorting, aggregation, etc.) all spill to disk if needed – so you can operate on Big Data partitions without getting “out of memory” errors. The other way is that we allow nodes to hold multiple partitions of data; that way, one can also use multi-core nodes effectively.

Q8. What performance figures do you have for AsterixDB?

Mike Carey: As I mentioned earlier, we’ve completed a set of initial performance tests on a small cluster at UCI with 40 cores and 40 disks, and the results of those tests can be found in a recently published AsterixDB overview paper that’s hanging on our project web site’s publication page (http://asterixdb.ics.uci.edu/publications.html).
We have a couple of other performance studies in flight now as well, and we’ll be hanging more information about those studies in the same place on our web site when they’re ready for human consumption. There’s also a deeper dive paper on the AsterixDB storage manager that has some performance results regarding the details of scaling, indexing, and so on; that’s available on our web site too. The quick answer to “how does AsterixDB perform” is that we’re already quite competitive with other systems that have narrower feature sets – which we’re pretty proud of.

Q9. You mentioned support for continuous data ingestion. How does that work?

Mike Carey: We have a special feature for that in AsterixDB – we have a built-in notion of Data Feeds that are designed to simplify the lives of users who want to use our system for warehousing of continuously arriving data.
We provide Data Feed adaptors to enable outside data sources to be defined and plugged in to AsterixDB, and then one can “connect” a Data Feed to an AsterixDB data set and the data will start to flow in. As the data comes in, we can optionally dispatch a user-defined function on each item to do any initial information extraction/annotation that you want.  Internally, this creates a long-running job that our system monitors – if data starts coming too fast, we offer various policies to cope with it, ranging from discarding data to sampling data to adding more UDF computation tasks (if that’s the bottleneck). More information about this is available in the Data Feeds tech report on our web site, and we’ll soon be documenting this feature in the downloadable version of AsterixDB. (Right now it’s there but “hidden”, as we have been testing it first on a set of willing UCI student guinea pigs.)
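
The shape of such a feed can be sketched in a few lines: an adaptor yields items, an optional per-item function enriches them, and a policy decides what to do when items arrive faster than they can be absorbed. The following is a single-threaded conceptual toy in Python, not AsterixDB’s Data Feeds implementation; the adaptor, the enrichment function and the sampling policy are all invented for illustration.

# Conceptual ingestion-loop sketch (not AsterixDB's Data Feeds implementation).
# In a real system a separate consumer would drain the buffer continuously;
# here the point is only the adaptor -> per-item UDF -> overload-policy structure.
import queue, random

def feed_adaptor(n):
    """Hypothetical external source; yields raw items."""
    for i in range(n):
        yield {"id": i, "text": f"raw item {i}"}

def enrich(item):
    """Per-item user-defined function, e.g. tagging or information extraction."""
    item["length"] = len(item["text"])
    return item

buffer = queue.Queue(maxsize=100)   # bounded buffer: where overload shows up
stored = []

for item in feed_adaptor(1000):
    try:
        buffer.put_nowait(enrich(item))
    except queue.Full:
        # Policy when data arrives too fast: sample (keep roughly 10%).
        # Alternatives include discarding outright or adding more workers.
        if random.random() < 0.1:
            buffer.get_nowait()
            buffer.put_nowait(enrich(item))

while not buffer.empty():
    stored.append(buffer.get_nowait())
print(len(stored), "items ingested into the dataset")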

Q10. What is special about the AsterixDB Query Language? Why not use SQL?

Mike Carey: When we set out to define the query language for AsterixDB, we decided to define our own new language – since it seemed like everybody else was doing that at the time (witness Pig, Jaql, HiveQL, etc.) – one aimed at our data model. 
SQL doesn’t handle nested or open data very well, so extending ANSI/ISO SQL seemed like a non-starter – that was also based on some experience working on SQL3 in the late 90’s. (Take a look at Oracle’s nested tables, for example.) Based on our team’s backgrounds in XML querying, we actually started with XQuery – it was developed by a team of really smart people from the SQL world (including Don Chamberlin, the father of SQL) as well as from the XML world and the functional programming world. We took XQuery and then started throwing overboard the stuff that wasn’t needed for JSON or that seemed like a poor feature that had been added for XPath compatibility.
What remained was AQL, and we think it’s a pretty nice language for semistructured data handling. We periodically do toy with the notion of adding a SQL-like re-skinning of AQL to make SQL users feel more at home – and we may well do that in the future – but that would be different than “real SQL”. (The N1QL effort at Couchbase is doing something along those lines, language-wise, as an example. The SQL++ design from UCSD is another good example there.)

Q11. What level of concurrency and recovery guarantees does AsterixDB offer?

Mike Carey: We offer transaction support that’s akin to that of current NoSQL stores. That is, we promise record-level ACIDity – so inserting or deleting a given record will happen as an atomic, durable action. However, we don’t offer general-purpose distributed transactions. We support an arbitrary number of secondary indexes on data sets, and we’ll keep all the indexes on a data set transactionally consistent – that we can do because secondary index entries for a given record live in the same data partition as the record itself, so those transactions are purely local.

Q12. How does AsterixDB compare with Hadoop? What about Hadoop Map/Reduce compatibility?

Mike Carey: I think we’ve already covered most of that – Hadoop MapReduce is an answer to low-level “parallel programming for dummies”, and it’s great for that – and languages on top like Pig Latin and HiveQL are better programming abstractions for “data tasks” but have runtimes that could be much better. We started over, much as the recent flurry of Big Data analytics platforms are now doing (e.g., Impala, Spark, and friends), but with a focus on scaling to memory-challenging data sizes. We do have a MapReduce compatibility layer that goes along with Hyracks – the name of our internal dataflow runtime layer – but that compatibility layer is not related to (or connected to) the AsterixDB system.

Q13. How does AsterixDB relate to Hadapt?

Mike Carey: I’m not familiar with Hadapt, per se, but I read the HadoopDB work that fed into it. 
We’re architecturally very different – we’re not Hadoop-based at all – I’d say that HadoopDB was more of an expedient hybrid coupling of Hadoop and databases, to get some of the indexing and local query efficiency of an existing database engine quickly in the Hadoop world. We were thinking longer term, starting from first principles, about what a next-generation BDMS might look like. AsterixDB is what we came up with.

Q14. How does AsterixDB relate to Spark?

Mike Carey: Spark is aimed at fast Big Data analytics – its data is coming from HDFS, and the task at hand is to scan and slice and dice and process that data really fast. Things like Shark and SparkSQL give users SQL query power over the scanned data, but Spark in general is really catching fire, it appears, due to its applicability to Big Machine Learning tasks. In contrast, we’re doing Big Data Management – we store and index and query Big Data. It would be a very interesting/useful exercise for us to explore how to make AsterixDB another source where Spark computations can get input data from and send their results to, as we’re not targeting the more complex, in-memory computations that Spark aims to support.

Q15. How can others contribute to the project?

Mike Carey: We would love to see this start happening – and we’re finally feeling more ready for that, and even have some NSF funding to make AsterixDB something that others in the Big Data community can utilize and share. 
(Note that our system is Apache-style open source licensed, so there are no “gotchas” lurking there.)
Some possibilities are:

(1) Others can start to use AsterixDB to do real exploratory Big Data projects, or to teach about Big Data (or even just semistructured data) management. Each time we’ve worked with trial users we’ve gained some insights into our feature set, our query optimizations, and so on – so this would help contribute by driving us to become better and better over time.

(2) Folks who are studying specific techniques for dealing with modern data – e.g., new structures for indexing spatio-temporal-textual data – might consider using AsterixDB as a place to try out their new ideas.
(This is not for the meek, of course, as right now effective contributors need to be good at reading and understanding open source software without the benefit of a plethora of internal design documents or other hints.) We also have some internal wish lists of features we wish we had time to work on – some of which are even doable from “outside”, e.g., we’d like to have a much nicer browser-based workbench for users to use when interacting with and managing an AsterixDB cluster.

(3) Students or other open source software enthusiasts who download and try our software and get excited about it – who then might want to become an extension of our team – should contact us and ask about doing so. (Try it first, though!)  We would love to have more skilled hands helping with fixing bugs, polishing features, and making the system better – it’s tough to build robust software in a university setting, and we would especially welcome contributors from companies.

Thanks very much for this opportunity to share what we’ve been doing!

————————
Michael J. Carey is a Bren Professor of Information and Computer Sciences at UC Irvine.
Before joining UCI in 2008, Carey worked at BEA Systems for seven years and led the development of BEA’s AquaLogic Data Services Platform product for virtual data integration. He also spent a dozen years teaching at the University of Wisconsin-Madison, five years at the IBM Almaden Research Center working on object-relational databases, and a year and a half at e-commerce platform startup Propel Software during the infamous 2000-2001 Internet bubble. Carey is an ACM Fellow, a member of the National Academy of Engineering, and a recipient of the ACM SIGMOD E.F. Codd Innovations Award. His current interests all center around data-intensive computing and scalable data management (a.k.a. Big Data).

Resources

- AsterixDB Big Data Management System (BDMS): Downloads, Documentation, Asterix Publications.

Related Posts

- Hadoop at Yahoo. Interview with Mithun Radhakrishnan. ODBMS Industry Watch, September 21, 2014

- On the Hadoop market. Interview with John Schroeder. ODBMS Industry Watch, June 30, 2014

Follow ODBMS.org on Twitter: @odbmsorg
##

Oct 12 14

Big Data Management at American Express. Interview with Sastry Durvasula and Kevin Murray.

by Roberto V. Zicari

“The Hadoop platform indeed provides the ability to efficiently process large-scale data at a price point we haven’t been able to justify with traditional technology. That said, not every technology process requires Hadoop; therefore, we have to be smart about which processes we deploy on Hadoop and which are a better fit for traditional technology (for example, RDBMS).”–Kevin Murray.

I wanted to learn how American Express is taking advantage of big data analytics.
I have interviewed Sastry Durvasula, Vice President – Technology, American Express, and Kevin Murray, Vice President – Technology, American Express.

RVZ

Q1. With the increasing demand for mobile and digital capabilities, how are American Express’ customer expectations changing?

SASTRY DURVASULA: American Express customers expect us to know them, to understand and anticipate their preferences and personalize our offerings to meet their specific needs. As the world becomes increasingly mobile, our Card Members expect to be able to engage with us whenever, wherever and using whatever device or channel they prefer.
In addition, merchants, small businesses and corporations also want increased value, insights and relevance from our global network.

Q2. Could you explain what is American Express’ big data strategy?

SD: American Express seeks to leverage big data to deliver innovative products in the payments and commerce space that provide value to our customers. This is underpinned by best-in-class engineering and decision science.

From a technical perspective, we are advancing an enterprise-wide big data platform that leverages open source technologies like Hadoop, integrating it with our analytical and operational capabilities across the various business lines. This platform also powers strategic partnerships and real-time experiences through emerging digital channels. Examples include Amex Offers, which connects our Card Members and merchants through relevant and personalized digital offers; an innovative partnership with Trip Advisor to unlock exclusive benefits; insights and tools for our B2B partners and small businesses; and advanced credit and fraud risk management.

Additionally, as always, we seek to leverage data responsibly and in a privacy-controlled environment. Trust and security are hallmarks of our brand. As we leverage big data to create new products and services, these two values remain at the forefront.

Q3. What is the “value” you derive by analysing big data for American Express?

SD: Within American Express, our Technology and Risk & Information Management organizations partner with our lines of business to create new opportunities to drive commerce and serve customers across geographies with the help of big data. Big data is one of our most important tools in being the company we want to be – one that identifies solutions to customers’ needs and helps us deliver what customers want today and what they may want in the future.

Q4. What metrics do you use to monitor big data analytics at American Express?

SD: Big data investments are no different than any other investments in terms of the requirement for quantitative and qualitative ROI metrics with pre- and post-measurements that assess the projects’ value for revenue generation, cost avoidance and customer satisfaction. There is also the recognition that some of the investments, especially in the big data arena, are strategic and longer term in nature, and the value generated should be looked at from that perspective.

Additionally, we are constantly focused on benchmarking the performance of our platform with industry standards, like minute-sort and tera-sort, as well as our proprietary demand management metrics.

Q5. Could you explain how you implemented your big data infrastructure platform at Amex?

KEVIN MURRAY: We started small and expanded as our use cases grew over time, about once or twice a year.
We make it a practice to reassess the hardware and software state within the industry before each major expansion to determine whether any external changes should alter the deployment path we have chosen.

Q6. How did you select the components for your big data infrastructure platform, choosing among the various competing compute and storage solutions available today?

KM: Our research told us that low-cost commodity servers with local storage were the common deployment stack across the industry. We made an assessment of industry offerings and evaluated them against our objectives to determine a good balance of cost, capabilities and time to market.

Q7. How did you unleash big data across your enterprise and put it to work in a sustainable and agile environment?

SD: We engineered our enterprise-wide big data platform to foster R&D and rapid development of use cases, while delivering highly available production applications. This allows us to be adaptable and agile, scaling up or redeploying, as needed, to meet market and business demands. With the Risk and Information Management team, we established Big Data Labs comprising top-notch decision scientists and engineers to help democratize big data, leveraging self-service tools, APIs and common libraries of algorithms.

Q8. What are the most significant challenges you have encountered so far?

SD: An ongoing challenge is balancing our big data investment between immediate needs and research or innovations that will drive the next generation of capabilities. You can’t focus solely on one or the other but have to find a balance.

Another key challenge is ensuring we are focused on driving outcomes that are meaningful to customers – that are responsive to their current and anticipated needs.

Q9. What did you learn along the way?

KM: The Hadoop platform indeed provides the ability to efficiently process large-scale data at a price point we haven’t been able to justify with traditional technology. That said, not every technology process requires Hadoop; therefore, we have to be smart about which processes we deploy on Hadoop and which are a better fit for traditional technology (for example, RDBMS). Some components of the ecosystem are mature and work well, and others require some engineering to get to an enterprise-ready state. In the end, it’s an exciting journey to offer new innovation to our business.

Q10. Anything else you wish to add?

KM: The big data industry is evolving at lightning speed with new products and services coming to market every day. I think this is being driven by the enterprise’s appetite for something new and innovative that leverages the power of compute, network and storage advancements in the marketplace, combined with a groundswell of talent in the data science domain, pushing academic ideas into practical business use cases. The result is a wealth of new offerings in the marketplace – from ideas and early startups to large-scale mission-critical solutions. This is providing choice to enterprises like we’ve never seen before, and we are focused on maximizing this advantage to bring groundbreaking products and opportunities to life.

———————————-
Sastry Durvasula, Vice President – Technology, American Express
Sastry Durvasula is Vice President and Global Technology Head of Information Management and Digital Capabilities within the Technology organization at American Express. In this role, Sastry leads IT strategy and transformational development to power the company’s data-driven capabilities and digital products globally. His team also delivers enterprise-wide analytics and business intelligence platforms, and supports critical risk, fraud and regulatory demands. Most recently, Sastry and his team led the launch of the company’s big data platform and transformation of its enterprise data warehouse, which are powering the next generation of information, analytics and digital capabilities. His team also led the development of the company’s API strategy, as well as the Sync platform to deliver innovative products, drive social commerce and launch external partnerships.

Kevin Murray, Vice President – Technology, American Express
Kevin Murray is Vice President of Information Management Infrastructure & Integration within the Technology organization at American Express. Throughout his 25+ year career, he has brought emerging technologies into large enterprises, and most recently launched the big data infrastructure platform at American Express. His team architects and implements a wide range of information management capabilities to leverage the power of increasing compute and storage solutions available today.

Related Posts

-Hadoop at Yahoo. Interview with Mithun Radhakrishnan. ODBMS Industry Watch, 2014-09-21

-On Big Data benchmarks. Interview with Francois Raab and Yanpei Chen. ODBMS Industry Watch, 2014-08-14

Resources

Presenting at Strata/Hadoop World NY
Big Data: A Journey of Innovation
Thursday, October 16, 2014, at 1:45-2:25 p.m. Eastern
Room: 1 CO3/1 CO4

The power of big data has become the catalyst for American Express to accelerate transformation for the digital age, drive innovative products, and create new commerce opportunities in a meaningful and responsible way. With the increasing demand for mobile and digital capabilities, the customer expectation for real-time information and differentiated experiences is rapidly changing. Big data offers a solution that enables the organization to use its proprietary closed-loop network to bring together consumers and merchants around the world, adding value to each in a way that is individualized and unique.

During their presentation, Sastry Durvasula and Kevin Murray will discuss American Express’ ongoing big data journey of transformation and innovation. How did the company unleash big data across its global network and put it to work in a sustainable and agile environment? How is it delivering offers using digital channels relevant to their Card Members and partners? What have they learned along the way? Sastry and Kevin will address these questions and share their experiences and insights on the company’s big data strategy in the digital ecosystem.

Follow ODBMS.org and ODBMS Industry Watch on Twitter: @odbmsorg
##

Sep 29 14

Using NoSQL at BMW. Interview with Jutta Bremm and Peter Palm.

by Roberto V. Zicari

“We need high performance databases for a wide range of challenges and analyses that arise from a variety of different systems and processes.”–Jutta Bremm, BMW

BMW is using a NoSQL database, CortexDB, for the configuration of test vehicles. I have interviewed Jutta Bremm, IT Project Leader at BMW, and Peter Palm, CVO at Cortex.

RVZ

Q1. What is your role, and which IT projects are you responsible for at BMW?

Jutta Bremm: I am an IT Project Leader at BMW, responsible for IT projects with a volume of more than 10 million euros per year.

Q2. What are the main technical challenges you have at BMW?

Jutta Bremm: We need high performance databases for a wide range of challenges and analyses that arise from a variety of different systems and processes.

These don’t only include recursive, parameterized explosions for bills of materials, but also the provision of standardized tools to the business departments. That way, they can run their own queries more often and are not so dependent on IT to do it for them.

Q3. You define CortexDB as a schema-less multi-model database. What does it mean in practice? What kind of applications is it useful for?

Peter Palm: In CortexDB, datasets are stored as independent entities (cf. objects). To achieve this, the system transforms all content into a new type of index structure. This ensures that every item of content and every field “knows” the context in which it is being used. As a result, the database isn’t searched. Instead, queries are run on information that is already known and the results are combined using simple procedures based on set theory.

This is why there’s no predefined schema for the datasets – only for the index of all fields and the content.
This is what differentiates CortexDB from all other databases, which require the configuration of at least one index even though the datasets themselves are stored in schema-less mode.

The innovative index structure means that no administrative adaptation or optimization of the index is necessary.
Nor is there any requirement for an index for a specific application – and that enables users to query all the content whenever they want and combine queries with each other too. That makes it very flexible for them to query any field and easily make any necessary development changes to in-house applications.

From the server’s perspective, the fields and content, as well as the interpretation of dataset structure and utilization, are not that important. The application working with the data creates a data structure that can be changed at any time (this is known as schema-less). For CortexDB, all that’s relevant is the content-based structure, which can be used in a generalized way and modified any time. This design gives customers a significant advantage when working with recursive data structures.

This is why CortexDB is particularly well suited to tasks whose definitive structure cannot be fixed at the beginning of the project, as well as for systems that change dynamically. The content-based architecture and the innovative index also deliver significant benefits for BI systems, as ad hoc analyses can be run and adapted whenever required.

In addition, users can add a validity period (“valid from…”) to any item of content. This enables them to view the evolution of particular data over time (known as historization). This evolutionary information is ideal for storing data that change frequently, such as smart metering and insurance information. For each field in a dataset, users see not only the information that was valid at the time of the transaction, but also the validity date from which the information was/is/will be valid. This is what we call a temporal database.

These benefits are complemented by the fact that individual fields can be used alone or in combination with others and repeated within a dataset. This – together with the use of validity dates – is what we call a “multi-value” database.

The terms “multi-model”, “multi-value” and “schema-less” also reflect the fact that the benefits of the database functions mentioned above apply to other NoSQL databases too, but users can extend them with new functions. In principle, any other database can be seen as a subset of CortexDB:

Database type: Key/Value Store
Function: One dataset = one key with one value (a value or value list) => a single, large index of keys
How it works in CortexDB: Every value and every field is indexed automatically and can be freely combined with others by using an occurrence list

Database type: Document Store
Function: One dataset combines several fields using a common ID (often json objects)
How it works in CortexDB: One ID combines fields that belong together in a dataset. Datasets can be output as json objects via an API.

Database type: GraphDB
Function: Links to other datasets are saved as meta information and can be used via proprietary graph queries.
How it works in CortexDB: Links are stored as actual data in a dataset and can be edited using additional fields. Fields can be repeated as often as required.

Database type: Big Table
Function: Multi-dimensional tables that use timestamps to define the validity of information. Its datasets can have a variety of attributes.
How it works in CortexDB: The use of a validity date in addition to a transaction date delivers a temporal database. Additional content can be added regardless of the dataset description.

Database type: Object oriented
Function: A class model defines the objects that need to be monitored persistently.
How it works in CortexDB: With the Cortex UniPlex application, users can define dataset types. Compared with classes, these define the maximum attributes of a dataset. Nevertheless, users can add more fields at any time, even if they have not been defined for UniPlex.

Q4. Can you please describe the use cases where you use CortexDB at BMW?

Jutta Bremm: The current use case for which we’re working with CortexDB is the explosion of bills of material for the configuration of test vehicles.

The construction of test vehicles must be planned and timed just as carefully as with mass production. To make the process smoother, we conduct reviews before starting construction to ensure that the bills of materials include the right parts and are therefore complete and free of any errors and conflicts.

One thing I’d like to point out here is that every vehicle comprises 15,000 parts, so there are between 10^30 and 10^60 configuration possibilities! It’s easy to see why this is no simple task. This high variance is due to the number of different models, engine types, displacements, optional extras, interior fittings and colors. As a result, a development BOM can only be stored in a highly compressed format.

To obtain an individual car from all this, the BOM must be “exploded” recursively. Multiple parameters have an effect on this, including validities (deadlines for parts, products, optional extras, markets etc.), construction stipulations (“this part can only be installed together with a navigation device and a 3-liter engine”) and structures (“this part is comprised of several smaller parts”).

Unlike conventional solutions, for which an explosion function is complex and expensive, the interpretation of the compressed BOM is very easy for CortexDB due to its bidirectional linking technology.
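
As a rough illustration of what such a recursive, validity-filtered explosion involves, here is a toy sketch; the part names, dates and option rules are invented and the real BOM data and rule language are far richer than this:

# Illustrative only: a toy recursive BOM explosion with validity dates and
# simple construction stipulations. The real data model and rule language
# behind a development BOM are far richer than this sketch.
from datetime import date

# part -> list of (child part, valid_from, valid_to, required_options)
bom = {
    "vehicle":  [("chassis", date(2014, 1, 1), date(2099, 1, 1), set()),
                 ("nav_unit", date(2014, 6, 1), date(2099, 1, 1), {"navigation", "3.0l"})],
    "chassis":  [("bolt_m8", date(2014, 1, 1), date(2099, 1, 1), set())],
    "nav_unit": [("display", date(2014, 6, 1), date(2099, 1, 1), set())],
}

def explode(part, config, on_date, level=0):
    """Recursively resolve a compressed BOM for one concrete configuration."""
    print("  " * level + part)
    for child, valid_from, valid_to, needs in bom.get(part, []):
        if valid_from <= on_date <= valid_to and needs <= config:
            explode(child, config, on_date, level + 1)

explode("vehicle", config={"navigation", "3.0l"}, on_date=date(2015, 3, 1))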

Q5. Why did you select CortexDB and not a classical relational database system? Did you compare CortexDB with other database management systems?

Jutta Bremm: We were looking for a product that would be easy to use, as well as simple and flexible to configure, for our users in product data management. We also wanted the highest possible level of functionality included as standard.

We looked at 4 products that appeared to be suitable for use by the departments for analysis and evaluation. The essential functions for product data management – explosion and the documentation of components used – were only available as standard with CORTEX. For all other products, we were looking at customer-specific extensions that would have cost several hundred thousand euros.

Q6. How do you store complex data structures (such as for example graphs) in CortexDB?

Peter Palm: CortexDB sees graphs as a derivative of certain database functions.

Firstly, it uses the “internal reference” field type (link). This is a data field in which the UUID of a target dataset is stored. That alone enables the use of simple links.

Second, users can choose to define fields as “repeating fields”. That means that the same field can also be used within a dataset. This is useful when a contact has more than one email address or phone number, and for links to individual parts in a BOM.

Repeating fields defined in this way can be grouped together to produce “repeating field groups”. Content items that belong together are thus stored as an information block. An example of this is bank account details that comprise the bank’s name, the sort code and the account number.

The use of repeating field groups, in which validity values are added to linked fields, enables complex data structures within a single dataset.

In addition, every dataset “knows” which other dataset is pointing to it. Thanks to this bidirectional information, a simple link only has to be administered in one dataset. Administering it in both datasets is only necessary if the two ends of an edge hold conflicting views (e.g. “my friend considers me an enemy”).

In addition, result sets can be combined with partial sets resulting from links when running queries and making selections. This limits the results to those that include certain details about their link structures.
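
A hedged sketch of these linking constructs (the field and dataset names below are invented, not CortexDB’s actual notation) might look like this:

# A sketch of the linking ideas described above (field names invented):
# "internal reference" fields hold the UUID of a target dataset, repeating
# field groups hold blocks of related values, and every dataset keeps a
# back-reference list so links can be followed in both directions.
datasets = {
    "uuid-body-123": {
        "name": ["body shell"],
        "linked_by": ["uuid-car-001"],              # bidirectional back-reference
    },
    "uuid-car-001": {
        "name": ["test vehicle 7"],
        "part_link": ["uuid-body-123"],             # internal reference (repeating field)
        # a repeating field group: each entry is one block of related values
        "account_group": [
            {"bank": "Bank A", "sort_code": "100", "account_no": "42"},
        ],
        "linked_by": [],
    },
}

def follow(ds_id, field):
    """Follow link fields from one dataset to the datasets they point to."""
    return [datasets[target] for target in datasets[ds_id].get(field, [])]

print(follow("uuid-car-001", "part_link")[0]["name"])   # ['body shell']
print(follow("uuid-body-123", "linked_by")[0]["name"])  # ['test vehicle 7']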

Q7. How do you perform data analytics with CortexDB?

Peter Palm: The content in every field “knows” the field context it is being used in and how often (“occurrence list” or “field index”). By combining partial sets (as in set theory), result sets are determined extremely fast, eliminating the need for read access to individual datasets.

CortexDB comes with an application that lets users freely configure queries, reports and graphical output. There is also an application API (data service) that enables these elements to be used within in-house applications or interfaces.

The solution also identifies correlations itself using algorithms, even if they are connected via graphs. Unlike data warehouse systems, this lets users do more than just test estimates or ideas – it determines a result on its own and delivers it to the user for further analysis or for modification of the algorithm.

Q8. Do you have some performance metrics for the analysis of recursively structured BOMs (bills of material) for your vehicles?

Jutta Bremm: Internal tests on BOM explosion with conventional relational databases showed that it took up to 120 seconds. Compare that with CortexDB, which delivers the result of the same explosion in 50 milliseconds.

Q9. How do you handle data quality control?

Jutta Bremm: We require 100% data quality (consistency at all times) and CortexDB delivers that.

Q10. What are the main business benefits of using CortexDB for these use cases?

Jutta Bremm: The agile modeling, the flexible adaptation options and the level of functionality delivered as standard shorten the duration of a project and reduce the costs compared to the other products we tested (see Q5).

Qx. Anything else you wish to add?
Jutta Bremm, Peter Palm: By using the temporal capabilities (time of transaction and time of validity), users can easily see which individual value in a dataset was/is/will be valid and from when.
In addition, the server-side JavaScript is used to calculate ad hoc results from the recursive structure, eliminating the need for these to be calculated and saved in the database beforehand.

——————–
Jutta Bremm, IT Project Manager, BMW.
Jutta has been an IT Project Leader at BMW in product data management since 1987.
Before that, starting in 1978, she was involved in IT projects at Siemens, Wacker Chemie and the Sparkassenverband.

Peter Palm, Chief Visionary Officer (CVO) at Cortex.
Peter started CortexDB development in 1997.
He holds a Master’s degree in electronic engineering.
Areas of expertise: computer hardware development, chip design (standard cell and gate array, at an independent chip design center), operating system development, and CRM development since 1986.

Resources

- ODBMS.org: Resources related to Cortex.

Related Posts

-NoSQL for the Internet of Things. Interview with Mike Williams. ODBMS Industry Watch, June 5, 2014

- On making information accessible. Interview with David Leeming. ODBMS Industry Watch, July 30, 2014

- On SQL and NoSQL. Interview with Dave Rosenthal. ODBMS Industry Watch, March 18, 2014

Follow ODBMS.org on Twitter: @odbmsorg

##

Sep 21 14

Hadoop at Yahoo. Interview with Mithun Radhakrishnan

by Roberto V. Zicari

“The main challenge when working with “big data” in Yahoo has always been our definition of “big”. :] There are several thousands of feeds on Yahoo’s Hadoop clusters, with daily, hourly and up-to-the-minute data frequencies, spanning Petabytes of data”.–Mithun Radhakrishnan

I have interviewed one of our experts, Mithun Radhakrishnan, member of the Yahoo Hive team.

RVZ

Q1. You work on Apache Hive, in the Yahoo Hadoop team. What are the most current projects you are working on?

Mithun Radhakrishnan: I work on the Hive team at Yahoo. Currently, we are migrating our Hadoop clusters from Hadoop 0.23 (initial release of YARN) to Hadoop 2.5. My team has been focusing on making sure that Hive 0.12 is performant on Hadoop 2.5, as well as rolling out Hive 0.13 to Yahoo’s Grid infrastructure. We have also been busy trying to enhance the performance of Hive queries, as well as of the Hive metastore, to work effectively at Yahoo’s large scale.

Q2. What are the most important challenges you are facing for the deployment, scaling and performance of Hive-related services at Yahoo?

Mithun Radhakrishnan: The main challenge when working with “big data” in Yahoo has always been our definition of “big”. :] There are several thousands of feeds on Yahoo’s Hadoop clusters, with daily, hourly and up-to-the-minute data frequencies, spanning Petabytes of data. Each feed would correspond to a Hive table, with the timestamp (date, hour, minute) being just one of several levels of partition-keys. Some of our more popular feeds add hundreds of thousands of partitions daily, and span millions overall. We’re working on optimizations in Hive’s metadata-storage, to scale to these high levels.
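
To make the partitioning scheme concrete, here is a hypothetical sketch of what a feed-backed, multi-level partitioned Hive table might look like; the table, column and partition names are invented for illustration and are not Yahoo’s actual schema:

# Hypothetical HiveQL for a feed-backed table with multi-level partition keys,
# generated from Python. Table, column and partition names are invented; the
# point is only that the timestamp is just one of several partition keys.
ddl = """
CREATE TABLE IF NOT EXISTS page_views (
  user_id   STRING,
  url       STRING,
  referrer  STRING
)
PARTITIONED BY (dt STRING, hour STRING, property STRING)
STORED AS ORC
"""

def add_partition_stmt(dt, hour, prop):
    """Build the statement that registers one new partition for the feed."""
    return (f"ALTER TABLE page_views ADD IF NOT EXISTS PARTITION "
            f"(dt='{dt}', hour='{hour}', property='{prop}')")

print(add_partition_stmt("2014-09-21", "05", "mail"))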

Another recent challenge has been the increased adoption of Business Intelligence and Data visualization tools (such as Tableau and MicroStrategy), connected directly to Grid data over HiveServer2. Such use imposes expectations not only on Hive query performance, but also on data transport as well as the metastore.

And finally, the hardware on which Hadoop runs at Yahoo is heterogeneous, accumulated over many years of usage at Yahoo. While our newer clusters use bleeding-edge hardware with gobs of memory, some of our clusters are several years old.
At our scale, we don’t have the luxury of completely replacing our hardware every year. We need our Grid software (Hadoop, Hive, Pig, etc.) to be performant on a variety of processor/memory/disk configurations.

Q3. What kind of Hive-related services did you implement at Yahoo?

Mithun Radhakrishnan: Yahoo has traditionally been an Apache Pig shop, but recently, we’ve seen an increase in the number of Hive jobs. This may be attributed to increased SQL-based analytics, proliferation of Business Intelligence tools, and some use of Hive for data transformations.

At Yahoo, we use HCatalog (i.e. Hive’s metadata server) for interoperability between Pig, Hive and MapReduce. An HCatalog Server runs as a separate service, serving metadata about various datasets.
Users consume this data using Hive directly, or using Pig and MapReduce (via HCatalog wrappers).

The data lifecycle (ingestion, replication and retirement) is managed via the Grid Data Management (GDM) suite, which was a pre-cursor to the Apache Falcon project. GDM is tightly integrated with HCatalog, and deals with data-registrations and discovery with HCatalog.

To enable data analysis and visualization tools for analysts, we deploy HiveServer2 instances. This allows direct JDBC/ODBC based connections to Grid data, to drastically cut down analysis and decision time, as well as unnecessary intermediate copies in a separate data warehouse.
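
For readers unfamiliar with HiveServer2, a minimal client sketch looks roughly like the following; it uses the PyHive library, which speaks HiveServer2’s Thrift protocol, and the host, credentials, database and table names are placeholders rather than Yahoo’s actual deployment:

# One way to reach HiveServer2 programmatically (here via the PyHive client).
# Host, database and table names are placeholders, not a real deployment.
from pyhive import hive

conn = hive.connect(host="hiveserver2.example.com", port=10000,
                    username="analyst", database="default")
cursor = conn.cursor()
cursor.execute("SELECT dt, COUNT(*) FROM page_views GROUP BY dt LIMIT 10")
for row in cursor.fetchall():
    print(row)
conn.close()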

A large number of users employ Oozie jobs to produce/consume Hive data, using Oozie’s “Hive Actions“. The Yahoo Hive and Oozie teams have integrated the two systems to reduce latencies in data processing pipelines.

Q4, What is Y!Grid ? And what is it useful for?

Mithun Radhakrishnan: Y!Grid is Yahoo’s Grid of Hadoop Clusters that’s used for all the “big data” processing that happens in Yahoo today. It currently consists of 16 clusters in multiple datacenters, spanning 32,500 nodes, and accounts for almost a million Hadoop jobs every day. Around 10% of those are Hive jobs.

No one else makes as much use out of Hadoop every single day as Yahoo does. Some of the notable use cases of Hadoop at Yahoo include:
- Content Personalization for increasing engagement by presenting personalized content to users based on their profile and current activity
- Ad Targeting and Optimization for serving the right ad to the right customer by targeting billions of impressions every day based on recent user activities
- New Revenue Streams from native ads and mobile search monetization through better serving, budgeting, reporting and analytics
- Data Processing Pipelines for aggregating various dimensions of event-level traffic data (page, ad, link views, link clicks, etc.) across billions of audience, search, and advertising events every day
- Mail Anti-spam and Membership Anti-abuse for blocking billions of spam emails and hundreds of thousands of abusive accounts per day through machine learning algorithms
- Search Assist and Analytics for improving the Yahoo Search experience by processing billions of web pages

Q5. What are the strengths and weakness of Hive in your experience?

Mithun Radhakrishnan: Hive combines the immense computing power of Hadoop with the accessibility and expressiveness of SQL. Its main strengths include:
- Scale: Hive scales easily to multi-terabyte datasets, and isn’t shackled by memory constraints.
- SQL: Hive allows business logic to be expressed in SQL. This lowers the bar of entry for usage, allowing data analysts with little Hadoop experience to use their expertise with Hadoop data. (Performance tuning is, admittedly, a different kettle of fish. ;])
- Standard: Apache Hive supports analytics through Tableau, MicroStrategy and Microsoft Excel, and has supported this for the longest time.
- Strong community: The dev community in Hive is brilliant, vibrant and active (as a glance at the Git log would reveal. ;]) We’ve recently seen the introduction of an Apache Tez backend, vectorization support, optimized file formats like ORC, as well as the promise of very interesting things to come (such as the new Cost Based Optimizer and an Apache Spark-based back-end).

Which is not to say that everything’s perfect:

- M/R: Until recently, Hive’s physical plans could only target MapReduce, which caused multi-stage queries to run quite slowly. Hive 0.13 now supports the expression of physical plans as arbitrary DAGs, using Apache Tez. This dramatically boosts performance, as our benchmarks have shown.
- Standard SQL: HiveQL isn’t quite SQL-92 compliant yet, although it’s tending in the right direction. Industry-standard benchmarks like TPC-H and TPC-DS typically need rewriting to run on Hive. To borrow a simile from Rowan Atkinson: it is sort of like Andrew Lloyd Webber rearranging the score of Evita to suit the vocal range of Britney Spears. :]
- Metastore performance and data throughput in HiveServer2 still have room for improvement.

Q6. You are an Apache HCatalog committer. What are your most important contributions? Who is currently using HCatalog and for what?

Mithun Radhakrishnan: The Apache HCatalog project has been merged with the main Apache Hive project now.
My work with HCatalog has primarily revolved around integration with other projects. Specifically:
I worked on the HCatalog notification system, to send JMS-compliant notifications in response to changes to a dataset’s metadata. In Yahoo, we use this specifically with Oozie, to kick off Oozie workflows as soon as their dependency dataset-partitions are published in HCatalog. This reduces workflow launch latencies and end-to-end pipeline execution times, while also reducing NameNode pressure caused by polling.
I’ve worked (and am still working) on integration with data ingestion services like GDM. My focus at the moment is on metastore performance, and replication of tables/partitions across HCatalog instances.

HCatalog is an integral part of data processing pipelines at Yahoo, given its integration with GDM and Oozie. Outside of Yahoo, HCatalog is also used at Twitter and LinkedIn, as far as I’m aware. I’m sure there are other firms as well.

HCatalog is also used externally by several projects such as Apache Falcon, Apache Oozie, etc.

Q7. You have been benchmarking various versions of Hive. What are the main results you have obtained?

Mithun Radhakrishnan: I’ve had the opportunity to benchmark Apache Hive 0.10 through Hive 0.13, across various scales of input data, multiple data formats and tuning parameters. We’ve observed that the query performance has improved steadily, with each major release. But the jump in Hive 0.13 has been quite phenomenal. The switch to a more expressive physical execution engine in Apache Tez, coupled with vectorization, ORC files and table/column statistics has really paid dividends.

For the Yahoo Hadoop team, the main result from the benchmark was that Apache Hive 0.13 supports a “high dynamic range” of data scale: it is performant enough at the 100GB scale to approach interactivity, while simultaneously also scaling to 10+ TB of data. Given that the system scales over such a wide range, and that Yahoo already deploys Hive in production, we find little reason to deploy any other frameworks for SQL-based analytics on Y!Grid.

Q8. How did you define the workloads for your benchmark of Hive?

Mithun Radhakrishnan: When we started off, we considered creating a Yahoo-specific benchmark: a set of Hive scripts and accompanying datasets to represent the Yahoo workload. The problem was that there was a variety of datasets, and several Hive users, running different kinds of workloads.

In the end, we opted to use the TPC-H benchmarks instead. These are industry standard, more or less representative of the jobs we run at Yahoo. Hortonworks was already running a large subset of TPC-DS benchmarks on Hive. We decided that TPC-H would allow for complementary coverage. We did partition the data and transform it in the way that we would have with production data.

At the time, the comparisons most people were trying to make were between Shark (on Apache Spark) and Hive.
Shark engineers had posted results from running a port of TPC-h, transliterated to Hive’s SQL dialect. We figured we’d get an apples-to-apples comparison by running those scripts against Hive.

Q9. Did you compare Hive with other Big Data software platforms?

Mithun Radhakrishnan: The objective of the benchmark was primarily to track Apache Hive’s progressive performance gains relative to prior versions. However, I did compare Hive 0.12 and 0.13’s performance against Shark 0.7.1 and Shark 0.8 (which was trunk at the time).

The results were mixed. At the 100GB scale, I did see Shark perform admirably. But I ran into problems with Shark at scale: a large majority of queries simply didn’t complete on Shark at the 10+TB scale, and it appeared that a lot of time was lost in shuffling data between consecutive stages. Coupled with the fact that Shark was only compatible with Hive Metastore v0.9, didn’t deduce the number of reducers per job, and lacked support for security and interoperability with our existing production systems, Apache Hive looked the better fit for Y!Grid.

I haven’t had the opportunity to compare Hive against other systems yet. I do hope to, as soon as I can find the time, but my day-job keeps me pretty busy. :]

Qx Anything else you wish to add?

Mithun Radhakrishnan: Lots of people fret over query performance. Performance is important, but one must think holistically about the data and workload that needs to be processed, hardware choices available at various price points, holistic long-term TCOs of operating the system, current and future use cases, support, etc. Everyone’s situation would be a bit different and something to take into account when thinking about a SQL-on-Hadoop solution.

————
My name is Mithun Radhakrishnan. I work on Apache Hive, in the Yahoo Hadoop team. My team is responsible for the deployment, scaling and performance of Hive-related services (including HCatalog and HiveServer2) on the Y!Grid, the largest production Hadoop Clusters in existence today.

I’ve been working on Hadoop-related projects in Yahoo since 2009, including the Grid Data Management System (pre-cursor to Apache Falcon), HCatalog and Hive. I’m an Apache HCatalog committer and Hive contributor. Prior to working at Yahoo, I was a firmware developer at Hewlett-Packard, writing hardware self-diagnostic and healing firmware for HP’s big-iron boxen (Integrity Servers, running Intel Itaniums).

I’m currently working broadly on getting the Hive Metastore to perform at Yahoo-scale.

I’ve recently had the pleasure of benchmarking various versions of Hive (0.10-13), with different settings, file-formats, etc., to gauge progressive performance gains. I’ll be presenting my findings at Strata 2014.

Resources

-Hive on Apache Tez: Benchmarked at Yahoo! Scale. Mithun Radhakrishnan (Yahoo! Inc.). Talk at Strata+Hadoop Conference. 2:35pm Thursday, 10/16/2014

Related Posts

-On Big Data benchmarks. Interview with Francois Raab and Yanpei Chen. ODBMS Industry Watch, August 14, 2014

-On the Hadoop market. Interview with John Schroeder. ODBMS Industry Watch, June 30, 2014

- On Spring for Apache Hadoop. Interview with Thomas Risberg. ODBMS Industry Watch, May 28, 2014

Follow ODBMS.org on Twitter: @odbmsorg
##

Sep 4 14

The Global Alliance for Genomics and Health. Interview with David Haussler

by Roberto V. Zicari

“A main challenge facing clinical and genomic data sharing efforts is the lack of harmonized methods and interoperable approaches that would enable such sharing. This barrier is one of the main motives for the formation of the Global Alliance for Genomics and Health.”
–David Haussler

I have interviewed David Haussler, director of the Center for Biomolecular Science & Engineering at the University of California, Santa Cruz. David is one of eight organizing committee members of the Global Alliance for Genomics and Health.

RVZ

Q1. What is the Global Alliance for Genomics and Health?

David Haussler: The Global Alliance for Genomics and Health is a partnership of more than 180 of the world’s leading stakeholders working together to create a common framework of harmonized approaches to enable the responsible and effective sharing of genomic and clinical data. The Global Alliance is made up of a diverse, international group of organizations working in healthcare, biomedical research, disease and patient advocacy, life science, and information technology, who come together with the goal of accelerating progress in medicine and human health.

Q2. What are the main objectives of the Data Working Group?

David Haussler: The Data Working Group is focused on the interoperability and scalability of formats and interfaces for genomic information. The main near-term objective of the Data Working Group is to establish a role as the international coordinating body and frontrunner for organizing, developing and aligning the computer formats and application programming interfaces (APIs) used to represent and exchange genomic data on individuals.

This includes stewardship of existing file formats used to store genomic information (BAM and VCF files) and engaging the community in devising forward-looking data models and APIs for representing, submitting, exchanging, and querying genomic data.
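
As a point of reference for readers, both of these file formats can be read with standard open-source tooling; a minimal sketch using pysam (one widely used Python toolkit, with placeholder file names and coordinates) might look like this:

# A small sketch of reading the two file formats mentioned above with pysam.
# File names, contigs and coordinates are placeholders.
import pysam

# BAM: aligned sequencing reads
with pysam.AlignmentFile("sample.bam", "rb") as bam:
    for read in bam.fetch("chr1", 100000, 100100):        # needs a .bai index
        print(read.query_name, read.reference_start)

# VCF: called variants
with pysam.VariantFile("sample.vcf.gz") as vcf:
    for rec in vcf.fetch("chr1", 100000, 100100):         # needs a .tbi index
        print(rec.chrom, rec.pos, rec.ref, rec.alts)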

Q3. What are the main challenges (technical and non-technical) in representing and exchanging genomic data on individuals?

David Haussler: A main challenge facing clinical and genomic data sharing efforts is the lack of harmonized methods and interoperable approaches that would enable such sharing. This barrier is one of the main motives for the formation of the Global Alliance for Genomics and Health.

Currently, the ad hoc use of different data formats and technologies in different systems, the lack of alignment between approaches to ethics and national legislation across jurisdictions, and the challenges of devising secure systems for controlled sharing of data put the world on track to create Balkanized data sets from which aggregated learning will not be possible.

It is the hope of the Global Alliance that by addressing these technical, regulatory and other barriers at the outset, we will reverse the current course and enable medical progress through large-scale data aggregation and analysis.

Q4. What do you mean with “responsible” data sharing?

David Haussler: The meaning of responsible data sharing comes down to respect for the privacy and the data sharing preferences of participants. One of the core missions of the Global Alliance is to promote the highest standards for ethics and ensure that participants have a choice to securely share their genomic and clinical data as much as they want to, including not at all.

Aligning with this mission, two of the four initial Working Groups are focused on aspects of this responsible sharing: the Security Working Group and the Regulatory and Ethics Working Group.

The Regulatory and Ethics Working Group is in the process of drafting an International Code of Conduct, which will support the establishment of a set of ethical principles and practices for research seeking to share genomic and clinical data. The Security Working Group aims to support a technology environment that provides assurance to patients, researchers, clinicians, and other stakeholders that data are shared, annotated, and interpreted only by those with appropriate authorization to do so. All work done by the Global Alliance, including in the Data Working Group, is closely tied to ensure that any data sharing is done in a manner that respects privacy and security, while still retaining essential attributes to enable effective analysis.

Q5. What are the plans of the Data Working group to overcome such challenges?

David Haussler: Initially, the Data Working Group will take a role in overseeing the current BAM, CRAM, and VCF format standards to provide a governance and support structure for these efforts.

In the near-term, we will work with the international community to develop formal data models, APIs, and reference implementations of those APIs for representing, submitting, exchanging, querying, and analyzing genomic data in a scalable and potentially distributed fashion. This work will be consistent with the security model developed by the Alliance’s Security Working Group, the clinical data framework developed by the Alliance’s Clinical Working Group, and the International Code of Conduct developed by the Alliance’s Regulatory and Ethics Working Group.

The Data Working Group, in conjunction with partner organisations, has also contributed to the startup of a project known as “Beacon”, which fosters the development of ‘beacons’: any institution or site that provides a simple yes or no in response to a query regarding the presence of a specific human genetic variant in their genetic data. This open web service is designed both to be technically simple, so that it is easy to implement, and to not return information that could be construed as violating anyone’s privacy, so that it is available as a public, unrestricted web resource.

Q6. Why new APIs are needed? and what are the key areas in which these new APIs will be used?

David Haussler: We need to switch from file formats to APIs so that new architectures can be employed for storage and access to genomics data as we scale to thousands and eventually millions of genomes. APIs allow third parties to write code with standardized methods for utilization of genomic data that do not require download or parsing of large files and that are broadly compatible across many institutional systems.
Specifically, APIs are needed for and will be used in these four key areas:

Reference variants. This API represents a reference genome structure consisting of typical human chromosomal DNA sequences and well-established human polymorphisms including larger structural variations. It defines mechanisms for mapping other information to the reference, including individual genomes, RNA data, and annotation. It should support mapping of DNA or RNA reads from a BAM file or equivalent, individual genome variants as described in a VCF file or equivalent, and various types of reference genome annotation as found in a genome browser or in one of the existing human genetic variation databases.

Read data. This API represents collections of primary data collected from sequencing machines, covering functions currently supported by FASTQ and BAM file formats, and including a query interface over groups of samples. It addresses issues of efficient interaction with large databases, the relationship of reads to a reference genome, lossy or loss-free data compression, and error correction.

Expression, methylation, and other epigenetic data. It is also necessary to have APIs that represent gene expression and the epigenetic state of the DNA or chromatin in a tissue sample. We plan to build an API specifically for gene expression, and establish a framework in which other external groups can create APIs for other types of epigenetic and functional data. These APIs will interact with the reference variant and read data APIs.

Metadata. Metadata is general information about a sample, such as tissue type, including how, when, and where information was extracted from that sample, such as the name of the sequencing center. We intend that there be a single sample metadata schema that is shared by all data models, used universally across expert working groups so that there is maximum compatibility. Optional fields will allow customization as necessary so that it does not force the specification of too much information for any given API or project.
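
To illustrate the idea of a single shared sample-metadata schema with optional extension fields, here is a hedged sketch in Python; the field names are invented for illustration and are not the Alliance’s actual schema:

# A hedged illustration of the "single sample metadata schema" idea: a small
# set of required fields shared by all data models, plus optional fields for
# per-project customization. Field names here are invented, not a real schema.
from dataclasses import dataclass, field
from typing import Dict, Optional

@dataclass
class SampleMetadata:
    sample_id: str                       # universal, required fields ...
    tissue_type: str
    collection_date: str
    sequencing_center: str
    # ... and optional, project-specific extensions
    description: Optional[str] = None
    extra: Dict[str, str] = field(default_factory=dict)

s = SampleMetadata("S-001", "liver", "2014-04-17", "Example Sequencing Center",
                   extra={"library_prep": "PCR-free"})
print(s.sample_id, s.extra["library_prep"])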

Q7. How do you plan to create a “shared” data representation, storage, and analysis of genomic data?

David Haussler: The Global Alliance intends to enable the sharing of genomic and clinical data, but will not itself store, analyze, or interpret data. By undertaking work such as the development of APIs for representing, submitting, exchanging, and querying genomic data, we seek to create a common framework of interoperable approaches, lifting up best practices and creating new methods where none exist, that will enable more effective, responsible sharing of genomic and clinical data and facilitate large-scale research by entities throughout the world. The Global Alliance also seeks to catalyze data sharing projects that drive and demonstrate the value of data sharing, and to convene stakeholders from different sectors and localities to share information, establish best practices, and enable interoperability across the broadest possible group.

Q8. Why did InterSystems join the Global Alliance for Genomics and Health, and what will its contribution be?

David Haussler: To answer this, I point you towards the comments of Paul Grabscheid, Vice President of Strategy at InterSystems, when the company joined the Global Alliance: http://www.intersystems.com/who-we-are/newsroom/news-item/intersystems-joins-global-alliance-for-genomics-and-health/.

On membership generally, since the Global Alliance’s initial formation with 70 partners in June of 2013, the group has brought on many more highly esteemed research and health institutions with broadened international representation, including partners from over 40 leading life science and information technology companies, world leaders in cloud computing, biotechnology, and healthcare generally, and additional respected disease and patient advocacy groups.

Q9. What are the progress and deliverables so far?

David Haussler: The Data Working Group has formed its first four task teams:
(1) File Formats Task Team,
(2) Reference Variation Task Team,
(3) Read Store Task Team, and
(4) Metadata Task Team.
Work from each Task Team is addressed below and is available at https://github.com/ga4gh unless otherwise noted:

File Formats. The developers of the current VCF, BAM, and CRAM file formats have been engaged in a File Formats Task Team led by Ewan Birney of the EBI to govern, maintain and extend these formats. A pre-existing official specification and software development site has been endorsed at https://github.com/samtools/hts-specs and will be used by the Task Team to address suggestions from the developer community for file format modifications.
Reference Variation. The Reference Variation Task Team, co-led by Gil McVean of Oxford University and Benedict Paten of UC Santa Cruz, held its organizing meeting in Hinxton, UK on March 3, 2014.
The team aims to compare existing reference structures such as the GRC reference genome and the dbSNP database alongside newer graph-based approaches, with the near-term goal of delivering one or more new or enhanced reference structures with pilot implementations.

Read Store. The Read Store Task Team, led by Dave Patterson of UC Berkeley, involves members from various companies, government agencies, and academic institutions. It has compared in detail APIs from NCBI, EBI, Google, SMART/HL7 FHIR, and UC Berkeley, has established a publicly readable mailing list with discussions, designed and released an initial v0.1 API, and is currently working on the v0.5 API. All work is open and issues/comments may be raised by any member of the public through mechanisms provided by the GitHub open source software development environment.
Metadata. The Metadata Task Team, led by Helen Parkinson of EBI and Tanya Barrett of NCBI, held its organizing meeting on April 17, 2014. In the near term, the team aims to create a single sample metadata schema that is shared by all data models, used universally across expert working groups so that there is maximum compatibility.

In addition to these task teams, to develop APIs in the context of major ongoing research projects, the Data Working Group currently interacts with three projects from outside the Global Alliance: the ICGC/TCGA Pan-Cancer Whole Genome Analysis project, Matchmaker Exchange, and Beacon.

Q10. What is the Beacon project, and how does it relate to your work at the Global Alliance for Genomics and Health?

David Haussler: In order to root the activities of the Global Alliance in real-world problems and to demonstrate the value of interoperable approaches to data sharing, the Alliance supports specific projects, of which the Beacon project is one. Ongoing engagement between these projects and Working Groups is intended to encourage a focus on the needs of projects currently advancing science and medicine, and crosscutting engagement of the Working Groups with one another and with stakeholders in the community.

David Haussler: The Beacon project, led by Jim Ostell of the NCBI, was created to test the willingness of international sites to share genetic data in the simplest of all technical contexts. It is defined as a simple public web service that any institution can implement.
A site offering this service is called a “beacon.” This open web service is designed both to be technically simple (so that it is easy to implement) and to not return information that could be construed as violating anyone’s privacy (so that there is no good excuse for not implementing it as a public, unrestricted web resource).

A goal of the Data Working Group is to foster the development of more than a dozen independent “beacons” in the near-term, and in collaboration with the other Alliance Working Groups, to gain initial direct experience with the barriers to international genetic data sharing through this Beacon project.
There are currently 4 beacons running at the following locations: UC Berkeley (http://beacon.eecs.berkeley.edu), NCBI (http://www.ncbi.nlm.nih.gov/projects/genome/beacon/), UC Santa Cruz (http://hgwdev-max.cse.ucsc.edu/cgi-bin/beacon/query), and EMBL-EBI (http://www.ebi.ac.uk/eva/beacon).
This is still very much early days. Both the interface and the rules for engagement with beacons are rapidly evolving.
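
To give a flavor of how simple a beacon query is meant to be, here is an illustrative client sketch; since the interface was still evolving at the time, the endpoint and parameter names below are hypothetical rather than any beacon’s actual API:

# The endpoint and parameter names are purely illustrative -- the point is
# only that a beacon answers a simple allele-presence question with yes/no.
import requests

def query_beacon(base_url, chromosome, position, allele, reference="GRCh37"):
    """Ask a (hypothetical) beacon whether it has observed a given variant."""
    resp = requests.get(base_url, params={
        "chrom": chromosome,
        "pos": position,
        "allele": allele,
        "ref": reference,
    }, timeout=10)
    resp.raise_for_status()
    return resp.text          # typically some form of "yes" / "no"

# Example (placeholder URL):
# print(query_beacon("https://beacon.example.org/query", "1", 100000, "A"))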

——————————–
David Haussler 
David Haussler’s research lies at the interface of mathematics, computer science, and molecular biology. He develops new statistical and algorithmic methods to explore the molecular function and evolution of the human genome, integrating cross-species comparative and high-throughput genomics data to study gene structure, function, and regulation. He is credited with pioneering the use of hidden Markov models (HMMs), stochastic context-free grammars, and the discriminative kernel method for analyzing DNA, RNA, and protein sequences. He was the first to apply the latter methods to the genome-wide search for gene expression biomarkers in cancer, now a major effort of his laboratory.

As a collaborator on the international Human Genome Project, his team posted the first publicly available computational assembly of the human genome sequence on the Internet on July 7, 2000. Following this, his team developed the UCSC Genome Browser, a web-based tool that is used extensively in biomedical research and serves as the platform for several large-scale genomics projects, including NHGRI’s ENCODE project to use omics methods to explore the function of every base in the human genome, NIH’s Mammalian Gene Collection, NHGRI’s 1000 genomes project to explore human genetic variation,  and NCI’s Cancer Genome Atlas (TCGA) project to explore the genomic changes in cancer.

His group’s informatics work on cancer genomics, including the UCSC Cancer Genomics Browser, provides a complete analysis pipeline from raw DNA reads through the detection and interpretation of mutations and altered gene expression in tumor samples. His group collaborates with researchers at medical centers nationally, including members of the Stand Up To Cancer “Dream Teams” and the Cancer Genome Atlas, to discover molecular causes of cancer and pioneer a new personalized, genomics-based approach to cancer treatment.

The UCSC Cancer Genomics Hub (CGHub), a product of the Haussler lab, is a secure repository for storing, cataloging, and accessing cancer genome sequences, alignments, and mutation information for 25 cancer types from TCGA, the Therapeutically Applicable Research to Generate Effective Treatments (TARGET) project, and other related projects. The current planned capacity of this data center is five petabytes. The CGHub will serve as a platform to aggregate other large-scale cancer genomics information, growing to provide the statistical power to attack the complexity of cancer.

He co-founded the Genome 10K Project to assemble a genomic zoo—a collection of DNA sequences representing the genomes of 10,000 vertebrate species—to capture genetic diversity as a resource for the life sciences and for worldwide conservation efforts.

Haussler is an organizing member of the Global Alliance for Genomics and Health, through which research, health care, and disease advocacy organizations have taken the first steps to standardize and enable secure sharing of genomic and clinical data.

Haussler received his PhD in computer science from the University of Colorado at Boulder. He is a member of the National Academy of Sciences and the American Academy of Arts and Sciences and a fellow of AAAS and AAAI. He has won a number of awards, including the 2011 Weldon Memorial Prize from University of Oxford, the 2009 ASHG Curt Stern Award in Human Genetics, the 2008 Senior Scientist Accomplishment Award from the International Society for Computational Biology, the 2005 Dickson Prize for Science from Carnegie Mellon University, and the 2003 ACM/AAAI Allen Newell Award in Artificial Intelligence.

Resources

- Global Alliance Working Groups Summaries and Proposals for Initial Deliverables (.PDF) June 10, 2014

Related Posts

- Big Data for Genomic Sequencing. Interview with Thibault de Malliard. ODBMS Industry Watch, March 25, 2013

Follow ODBMS.org on Twitter: @odbmsorg

Aug 26 14

Predictive Analytics in Healthcare. Interview with Steve Nathan

by Roberto V. Zicari

“Analysis of big data can identify the subtle differences that explain why similar-seeming patients have different outcomes, and predictive decision support can help physicians guide more patients on the path to recovery”–Steve Nathan.

Why using predictive analytics in healthcare? On this topic, I have interviewed Steve Nathan, CEO at Amara Health Analytics.

RVZ

Q1. Amara provides real-time predictive analytics to support clinicians in the early detection of critical disease states. Can you tell us a bit more of what are these predictive analytics and which data sets do you use?

Steve Nathan: Our system runs on hospital data, including labs, pharmacy, real-time vitals, ADT, EMR, and clinical narrative text. We have developed domain-specific natural language processing (NLP) and machine learning to find predictive signal that is beyond the reach of traditional approaches to clinical analytics.

Q2. How large are the data sets you analyse?

Steve Nathan: For an average 500 bed hospital we analyse about 100 million data items in real-time annually. We expect this volume of data to increase dramatically as more centers deploy hospital-wide automated real-time streaming of data from patient monitors into the EMR.

Q3. Why early detection is important?

Steve Nathan: Let’s look at the example of sepsis, which is the body’s systemic toxic response to an infection. Clinicians know how to treat sepsis once identified, but it’s often difficult to identify early because it can mimic other conditions and there are a multitude of clinical variables involved, some of which are subjective in nature.
At the later stages of sepsis, mortality increases almost 8% with every hour of delayed treatment. So technology that can assist clinicians in early identification is important.

Q4. What is your back-end platform for real-time data processing and analytics?

Steve Nathan: The major components of the backend are Mirth (open source HL7 parsing/transform), a data aggregator, a real-time data streaming engine, Jess rules engine, the core analytics engine (including NLP pipeline), and Cassandra NoSQL database from DataStax.

Q5 What are the main technical challenges you encounter when you analyse big data sets that are composed of previous patient histories and medical records?

Steve Nathan: Input data comes from a wide variety of centers and is wildly heterogeneous. Issues range from proprietary health IT systems and incompatible usage of available standards like HL7, to wide variations in clinical language used by physicians and nurses in their notes, to varying patterns of diagnostic testing by different providers. And of course every patient is unique. So much of our system is dedicated to producing a standardized timeline for each patient in which every variable has a consistent interpretation, no matter where it originally came from. We then use these patient timelines as input for machine learning, and there are some unique challenges there in developing predictive models that are appropriate for use in real-time decision support.
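
To illustrate the idea of a standardized patient timeline (this is not Amara’s actual pipeline; the variable names and unit conversions are invented), a toy normalization step might look like this:

# A toy sketch of the "standardized patient timeline" idea: heterogeneous
# events from different source systems are mapped onto one time-ordered
# timeline per patient with consistent variable names and units.
# The mappings below are invented for illustration.
from datetime import datetime

UNIT_CONVERSIONS = {
    ("temperature", "F"): lambda v: (v - 32) * 5.0 / 9.0,   # normalize to Celsius
    ("temperature", "C"): lambda v: v,
}

def to_timeline(raw_events):
    """raw_events: dicts with patient_id, timestamp, name, value, unit."""
    timeline = {}
    for e in raw_events:
        convert = UNIT_CONVERSIONS.get((e["name"], e.get("unit", "")), lambda v: v)
        timeline.setdefault(e["patient_id"], []).append(
            (datetime.fromisoformat(e["timestamp"]), e["name"], convert(e["value"])))
    for events in timeline.values():
        events.sort()                     # consistent, time-ordered view per patient
    return timeline

raw = [{"patient_id": "p1", "timestamp": "2014-08-26T10:00:00",
        "name": "temperature", "value": 101.3, "unit": "F"}]
print(to_timeline(raw)["p1"][0])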

Q6. Why did you choose a NoSQL database for this task?

Steve Nathan: The primary motivation was flexibility of data representation. We wanted to be able to change schemas dynamically. Also, DataStax’s integration of Cassandra with Solr was very important for us because we do a lot text mining as part of our overall approach.
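
As a rough illustration of the kind of flexible representation described here (not Amara’s actual schema; the keyspace and table are invented, and the Solr text-search integration is not shown), the DataStax Python driver can be used along these lines:

# A minimal sketch with the DataStax Python driver; names are invented.
# Flexible attributes are modelled here with a map column; schemas can also
# be evolved dynamically with ALTER statements.
from datetime import datetime
from cassandra.cluster import Cluster

cluster = Cluster(["127.0.0.1"])
session = cluster.connect()
session.execute("""
    CREATE KEYSPACE IF NOT EXISTS demo
    WITH replication = {'class': 'SimpleStrategy', 'replication_factor': 1}
""")
session.execute("""
    CREATE TABLE IF NOT EXISTS demo.patient_events (
        patient_id text, event_time timestamp, attributes map<text, text>,
        PRIMARY KEY (patient_id, event_time)
    )
""")
session.execute(
    "INSERT INTO demo.patient_events (patient_id, event_time, attributes) "
    "VALUES (%s, %s, %s)",
    ("p1", datetime(2014, 8, 26, 10, 0), {"note": "possible sepsis indicators"}))
cluster.shutdown()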

Q7. What are the lessons learned so far?

Steve Nathan: Processing huge amounts of historical patient data in simulations to refine predictive models is very challenging to do with good performance. We must be able to run many years worth of archived data through the system in a small number of hours, so the system architecture must be designed with this processing mode in mind.
It is important to consider early on how to upgrade the live system with no down time while avoiding the possibility of losing any data.

Q8. What about data protection issues?

Steve Nathan: We comply with HIPAA regulations and go to great lengths to protect data. This includes work processes and policies for all employees, contractors, and business associates. And, importantly, it includes the architecture and deployment of our systems – for example all data is transferred via VPN, and every VPN connection is terminated in a VM that contains a single protected dataset, rather than terminating at a DC router.

Q9. Which relationships exist between Amara and UC San Diego?

Steve Nathan: There is no formal relationship between Amara and UCSD. Amara’s co-founder and chairman, Dr. Ramamohan Paturi, is a professor of Computer Science at UCSD.

Qx. Anything else you wish to add?

Steve Nathan: Analysis of big data can identify the subtle differences that explain why similar-seeming patients have different outcomes, and predictive decision support can help physicians guide more patients on the path to recovery.

—————————————-
Steve Nathan, CEO, Amara Health Analytics

Steve Nathan has over 25 years in the enterprise software industry. As CEO of Parity Computing beginning in 2008, Steve led the company’s product expansion with a pioneering analytics platform for enhancing biomedical research productivity. He then led the formation of Amara Health Analytics, the concept and strategy for the Clinical Vigilance™ product line, and the spinout of Amara as an independent company.

Steve has held key leadership positions at recognized technology innovators Sun Microsystems and Cray Research, as well as at start-ups Celerity Computing, Alignent Software, and Exist Global. As General Manager at Sun Microsystems, he had P&L responsibility for messaging, portal, and web infrastructure in the iPlanet business unit, where he grew the annual revenue of this product line from $15M to $150M in three years.

Steve holds B.S. degrees from the University of California at Riverside in Computer Science, Mathematics, and Psychology; and he is a 1999 graduate of the Stanford University Business Executive Program.

Resources

- U.S. Department of Health & Human Services

- DataStax Enterprise Reference Architecture, White Paper, DataStax Corporation, January 2014

Related Posts

- Big Data: three questions to DataStax. ODBMS Industry Watch, April 7, 2014

Follow ODBMS.org and ODBMS Industry Watch on Twitter:
@odbmsorg

##

Aug 14 14

On Big Data benchmarks. Interview with Francois Raab and Yanpei Chen.

by Roberto V. Zicari

“It’s unlikely that a big data benchmark will gain wide recognition until a clear “playing field” has emerged and focused the competitive pressure.” –Francois Raab

On the topic of constructing big data benchmarks I have interviewed Francois Raab and Yanpei Chen. Francois is the original author of the TPC-C Benchmark. He is currently the President of InfoSizing, Inc.
Yanpei is a member of the Performance Engineering Team at Cloudera.

RVZ

Q1. There have been a number of attempts at constructing big data benchmarks. None of them has yet gained wide recognition and usage. Why?

Yanpei: Many big data benchmarks are just like big data systems – new, and with room to improve and grow.
In more detail, big data systems:
- rapidly evolve, so it’s important to define performance in ways that matter for end customers.
- consist of many interdependent components, so it’s difficult to measure performance in a reliable fashion.
- service diverse business needs using diverse implementations, so benchmarks need to accommodate different system implementations.

Francois: It’s unlikely that a big data benchmark will gain wide recognition until a clear “playing field” has emerged and focused the competitive pressure. There are 3 phases in the evolution of a new technology. First, the technology is introduced and applied to a wide array of solutions without a proven return on investment. Next, a “killer app” emerges from the early adopters and its rapid growth draws all the vendors into competing on a common playing field. Lastly, some technologies emerge as clear winners in the race and the market starts to consolidate around a few dominant vendors. Big data has not entered the second phase yet.

Q2. Is it possible to build a truly representative big data benchmark?

Yanpei: Absolutely!
To me, the rise of “big data” in part comes from our increased ability to instrument, measure, and ultimately derive value from large scale systems – technology systems, financial systems, medical systems, or physical systems touching day-to-day life. Big data systems, as a special case of technology systems, also deal with ever increasing instrumentation and measurement. Over time, I am absolutely confident that we will increase our understanding of big data systems, and with it, improve the quality of our big data benchmarks.

Cloudera’s broad customer base gives us visibility into big data deployments across telecom, banking, retail, manufacturing, media, government, healthcare, and many other industry sectors. We’re in a great position to identify representative use cases.

Francois: A benchmark is a somewhat abstract (i.e. simplified) model of a real life scenario. The question we face today is to identify a scenario that Fortune 500 companies would widely recognize as relevant to their operations and vital to their competitive survival. Once that critical mass has been reached it will quickly spread to the entire commercial data processing landscape and a successful big data benchmark will be built based on that scenario.

Q3. How would you define a Big Data Benchmark ?

Yanpei: The key properties of good big data benchmarks are a re-cast of the same properties for benchmarks of more established systems.

A good big data benchmark should be representative of real-life use cases; it should generate performance insights immediately relevant to diverse and evolving big data use cases. The benchmark should also be scalable; it should stress big data systems today, as well as the vastly improved systems of the future. The benchmark should be portable, meaning it should accommodate systems with different implementations that achieve the same end goal. The benchmark should also be verifiable, in that the results can be checked by independent auditors if needed, and end users can reproduce the winning configurations and results on their own systems.

Q4. Can you give some examples of Successful Benchmarks ?

Yanpei: My co-author Francois was a lead contributor to TPC-C, a very successful benchmark for online transactional processing (OLTP). He can share other examples.

Francois: The success of a benchmark can be measured by its number of published results and by its longevity over shifts in the underlying technologies. By that measure TPC-C and TPC-H are leading the field. While it can be argued that they have lost relevance over their two-decade lifetimes, they still encapsulate critical elements at the core of the application domains they represent (transaction processing and decision support).

Q5. One of the main purposes of a benchmark is to evaluate and contrast the merits of various implementations of the same set of requirements. How do you do this with Big Data?

Yanpei: You construct benchmarks that are portable. In other words, you specify implementation-independent requirements.

Best illustrated by example – TPC-C. TPC-C specifies five operations – New Order, Payment, Delivery, Order-Status, and Stock-Level. It also describes the interdependencies between these operations. For example, every New Order will be accompanied by a Payment, but only one in ten New Orders will trigger an Order-Status. TPC-C describes the load that the system under test should handle – many concurrent operations arriving in randomized order with randomized inter-arrival times, but at controlled relative frequencies. TPC-C also specifies the initial content of all the datasets, as well as how the content grows over the execution of the benchmark. This is an implementation-independent set of requirements – “handle these operations on these data sets.” The underlying system could be a relational database, or a key-value store like HBase.
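
As a rough illustration of what such an implementation-independent load specification can look like, here is a minimal Python sketch of a driver that issues a TPC-C-like mix of operations against an abstract backend. The operation frequencies and pacing below are simplified placeholders of our own, not the actual TPC-C parameters:

    import random
    import time
    from abc import ABC, abstractmethod

    # Hypothetical relative frequencies inspired by TPC-C's transaction mix;
    # the real specification fixes the exact percentages and pacing rules.
    OPERATION_MIX = {
        "new_order": 0.45,
        "payment": 0.43,
        "delivery": 0.04,
        "order_status": 0.04,
        "stock_level": 0.04,
    }

    class Backend(ABC):
        """Any system under test: a relational database, HBase, and so on."""
        @abstractmethod
        def execute(self, operation: str) -> None: ...

    def run_load(backend: Backend, num_requests: int, mean_gap_s: float = 0.01) -> None:
        """Issue operations at controlled relative frequencies with
        randomized (exponential) inter-arrival times."""
        ops = list(OPERATION_MIX)
        weights = list(OPERATION_MIX.values())
        for _ in range(num_requests):
            op = random.choices(ops, weights=weights, k=1)[0]
            backend.execute(op)   # "what" is fixed; "how" is left to the backend
            time.sleep(random.expovariate(1.0 / mean_gap_s))

The point of the sketch is only that the driver fixes the operations, their mix, and their arrival pattern, while each backend is free to implement the operations however it likes.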

Francois: Benchmarks can be defined in one of two ways: by creating a kit to be deployed on technology-specific platforms, or by specifying a set of technology-agnostic requirements to be implemented at will. Because big data first emerged from the MapReduce paradigm, we have seen a number of technology-centric benchmarks (also called component benchmarks) that put a narrow focus on one or more components of a predefined solution. But we should soon expect to see a big data application emerge as the new must-have in commercial data centers.

Q6. In a recent position paper you argued for building future big data benchmarks using what you call a “functional workload model”. What is it?

Francois: We introduced a couple of terms in that position paper to highlight the core concepts underlying representative, scalable, portable, and verifiable big data benchmarks.

The “functional workload model” is a way to specify such benchmarks. It contains three things – the “functions of abstraction”, the load pattern serviced by the system, and the data sets being acted upon.

“Functions of abstraction” describe “what is being computed” without specifying “how the computation should be done.”
The intent is an abstract, functional description that allows the benchmark to be portable across systems of different compute paradigms. “What is being computed” should be justified by empirical evidence, either system traces or industry-wide surveys, with emphasis on identifying the common computation goals.

The load pattern describes “what is the serviced load” without specifying “how it is serviced.” It outlines the execution frequency, distribution, arrival rate, bursts and averages over time of each individual function of abstraction.

The data sets describe “what is the data and the relationships within the data” without specifying “how it is represented.” They are specified in terms of the structure and interdependence of data elements, the initial size and contents, how the data evolves over the course of the workload execution, and how it is expected to scale with system size and load volume.
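
One way to picture how the three pieces fit together is as a declarative specification that an implementation-specific driver would later translate into runnable code. The Python sketch below is purely illustrative; the class and field names are ours, not terminology from the position paper:

    from dataclasses import dataclass

    @dataclass
    class FunctionOfAbstraction:
        # "What is being computed", e.g. "sort records by key", not how.
        name: str
        description: str

    @dataclass
    class LoadPattern:
        # "What is the serviced load": per-function frequencies and arrival process.
        relative_frequency: dict   # function name -> share of total requests
        arrival_process: str       # e.g. "exponential inter-arrival, mean 10 ms"

    @dataclass
    class DataSetSpec:
        # "What is the data": structure, initial contents, growth, scaling rule.
        initial_size_gb: float
        schema_description: str
        growth_rule: str

    @dataclass
    class FunctionalWorkloadModel:
        functions: list            # list of FunctionOfAbstraction
        load: LoadPattern
        data: DataSetSpec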

These concepts help us routinely identify shortcomings in haphazardly specified benchmarks. For example, some of the most often-cited big data benchmarks contain artificial functions of abstraction that do not match any common use cases. Others omit a multi-job, multi-query load pattern altogether, or represent the data sets in unrealistic formats that inflate performance advantages.

Q7. Why did you select TPC-C as a starting point for your work?

Yanpei: Because TPC-C already has a functional workload model within its specification. And because Francois wrote TPC-C.

Francois: The functional workload model is the underlying structure on which TPC-C was built. Subsequent TPC benchmarks, like TPC-H and TPC-E, were also built based on a functional workload model.

Q8. How does your functional workload model compare with TPC-C?

Yanpei: TPC-C already uses the functional workload concept.

Q9. For your functions of abstraction concept to be useful, it must be applicable to different types of big data systems. Two important examples are relational databases and MapReduce. How do you do that? How does your work compare with other MapReduce-specific benchmarks?

Yanpei: Best illustrated by example.

Suppose we discover that sorting data is a common operation in real-life production use cases. We would then define “sort” as a function of abstraction. We would define it in the same fashion as the official Sort Benchmark – the input data is of size X, format Y, and the system is asked to produce output sorted by order Z.

A relational database implementation could do, say, “insert into TABLE … ” followed by “select * from TABLE ordered by COLUMN”. A MapReduce implementation would use the IdentityMapper and IdentityReducer, and rely on the implicit shuffle-sort in MapReduce.
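
To make the contrast concrete, here is a minimal sketch of the same “sort” function of abstraction realized two ways, in Python with the standard sqlite3 module standing in for a relational engine; the second function only mimics the identity-map / shuffle-sort / identity-reduce flow of a MapReduce job rather than running on a real cluster:

    import sqlite3

    records = [(3, "c"), (1, "a"), (2, "b")]          # toy (key, value) input

    def sort_with_sql(rows):
        # Relational realization: load the data, then let the engine sort it.
        conn = sqlite3.connect(":memory:")
        conn.execute("CREATE TABLE t (k INTEGER, v TEXT)")
        conn.executemany("INSERT INTO t VALUES (?, ?)", rows)
        return conn.execute("SELECT k, v FROM t ORDER BY k").fetchall()

    def sort_with_mapreduce(rows):
        # MapReduce-style realization: identity map, shuffle-sort by key, identity reduce.
        mapped = [(k, v) for k, v in rows]                 # IdentityMapper
        shuffled = sorted(mapped, key=lambda kv: kv[0])    # framework's shuffle-sort
        return [(k, v) for k, v in shuffled]               # IdentityReducer

    assert sort_with_sql(records) == sort_with_mapreduce(records)

Both realizations satisfy the same function of abstraction – sorted output for a given input – which is exactly what lets the benchmark compare systems built on different compute paradigms.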

This is obvious for sort, because the sort operation has traditionally been defined in a system-independent way. In contrast, many of the existing MapReduce and relational database performance measurement tools are specified in ways that do not translate across different types of systems. The many SQL-on-Hadoop systems are fast removing that boundary. The functions of abstraction concept allows us to understand use cases at a level above any SQL-only or Hadoop-only specification.

Q10. What are in your opinion the Emerging Big Data Application Domains?

Francois: Everyone wants to figure out which application domain will become the big data killer app. Today, no commercial data center can live without on-line transaction processing or without decision support systems.
Which big data application will become indispensable tomorrow? That is the million dollar question! Once we know that, a standard big data benchmark will soon follow.

Yanpei: The maturation of the Hadoop platform has been relentless. Its role has changed as the platform has gotten more secure, more reliable, more powerful, and (especially) more real-time. It’s no longer a system used for just big batch jobs. Instead, it has become the first place that data lands. It scales and it can store anything – no data need be discarded. It’s used to pre-process data before delivering it to an enterprise data warehouse, a document repository, an analytic engine, a CRM or ERP application, or other specialized system. Most significantly, it has begun to take over some of the work previously done by those traditional platforms, because it can do real-time search and analysis on the data directly, in place, and without further Extract-Transform-Load (ETL).

This leads to the emergence of the enterprise data hub (EDH), a new architecture that complements existing investments and helps put data at the center of an organization’s business. An enterprise data hub allows any amount and type of data to be stored for as long as it is needed and accessed in any way needed.
Additional necessary attributes of an EDH include the following. It’s Secure and Compliant, offering perimeter security and encryption, plus fine-grained (row- and column-level), role-based access controls over data, just like a data warehouse. It’s Governed, enabling users to do data discovery, data auditing, and data lineage, so they understand what data is in their EDH and how it is used. It’s Unified and Manageable, providing native high availability, fault tolerance, self-healing storage, automated replication, and disaster recovery, as well as advanced workload management capabilities that enable multiple specialist systems to analyze the same data set. And it’s Open, ensuring that customers are not locked into any particular vendor’s license agreement, can choose which tools to use with their EDH, and cannot have their data or applications held hostage.

The emergence of EDHs poses both challenges and opportunities for defining big data benchmarks. As Francois alluded to, the representative scenarios typically involve application domains whose performance has traditionally been measured separately, as is the case for on-line transaction processing and decision support systems. How to define and measure performance for such concurrent application domains presents both a challenge and an opportunity.
Further, to compare different EDHs, it becomes necessary to quantify characteristics that were previously yes/no checks – which is the more secure EDH? The better governed? The more unified and manageable? The more open? How to quantify such characteristics will stretch our performance thinking and measurement methodology into new territory.

Q11. Future Work ?

Yanpei: We have a strong Performance Engineering Team at Cloudera. We insist on systematic, fair, and repeatable tests both for our internal performance assessment and competitive studies. We are also engaged with community efforts to define big data benchmarks. Look for our future posts on the Cloudera Developer Blog!

—————–

Francois Raab is a recognized, award-winning expert in the field of performance engineering, benchmark design and system testing. He is the original author of the TPC-C Benchmark, the most successful industry-standard measure of OLTP performance. He was also co-author of “The Benchmark Handbook” (pub. Morgan Kaufmann). Francois is accredited as a Certified Benchmark Auditor by the Transaction Processing Performance Council. His consulting services are retained by most major system vendors as well as Fortune 500 IT organizations. With over 30 years of experience in the field of databases and commercial data processing, Francois is a leading member of the performance measurement, system sizing and technology evaluation community. He is currently the President of InfoSizing, Inc.

Yanpei Chen is a member of the Performance Engineering Team at Cloudera, where he works on internal and competitive performance measurement and optimization. His work touches upon multiple interconnected computation frameworks, including Cloudera Search, Cloudera Impala, Apache Hadoop, Apache HBase, and Apache Hive. He is the lead author of the Statistical Workload Injector for MapReduce (SWIM), an open-source tool that allows users to synthesize and replay MapReduce production workloads. SWIM has become a standard MapReduce performance measurement tool used to certify many Cloudera partners. He received his doctorate at the UC Berkeley AMP Lab, where he worked on performance-driven, large-scale system design and evaluation.

Resources

- New (August 18, 2014): TPCx-HS: First Vendor-Neutral, Industry Standard Big Data Benchmark.
The Transaction Processing Performance Council (TPC) announced the immediate availability of TPCx-HS, developed to provide verifiable performance, price/performance, availability, and optional energy consumption metrics of big data systems.

- From TPC-C to Big Data Benchmarks: A Functional Workload Model. Yanpei Chen, Francois Raab, Randy H. Katz, July 1, 2012

- Workload-Driven Design and Evaluation of Large-Scale Data-Centric Systems. Yanpei Chen. Spring 2012

- Statistical Workload Injector for MapReduce (SWIM). Yanpei Chen, Sara Alspaugh, Archana Ganapathi, Rean Griffith, Randy Katz

- The Fifth Workshop on Big Data Benchmarking (5th WBDB) August 5-6, 2014, Potsdam, Germany: Program and Videos of all talks.

Related Posts

- Benchmarking XML Databases: New TPoX Benchmark Results Available. ODBMS Industry Watch. September 19, 2011

- Measuring the scalability of SQL and NoSQL systems. ODBMS Industry Watch. May 30, 2011

Follow ODBMS.org and ODBMS Industry Watch on Twitter: @odbmsorg
##