
On Column Stores. Interview with Shilpa Lawande

by Roberto V. Zicari on July 14, 2014

“A true columnar store is not only about the way you store data, but the engine and the optimizations that are enabled by the column store”–Shilpa Lawande.

On the subject of column stores, and the important features in the new release of the HP Vertica Analytics Platform, I have interviewed Shilpa Lawande, VP Engineering & Customer Experience at HP Vertica.

RVZ

Q1. Back in 2011 I did an interview with you [1]; at the time, Vertica had just been acquired by HP. What is new in the current version of Vertica?

Shilpa Lawande: We’ve come a long way since 2011 and our innovation engine is going strong!
From “Bulldozer” to “Crane” and now “Dragline,” we’ve built on our columnar-compressed, MPP shared-nothing core, expanded security and manageability, dramatically expanded data ingestion capabilities, and, most exciting of all, added a host of advanced analytics functions and extensibility APIs to the HP Vertica Analytics Platform itself. One key innovation is our ability to ingest and auto-schematize semi-structured data using HP Vertica Flex Zone, which removes much of the friction in the analytic life-cycle from exploration to production.
We’ve also grown a vibrant community of practitioners and an ecosystem of complementary tools, including Hadoop.

“Dragline,” the next release of the HP Vertica Analytics Platform, addresses the needs of the most demanding, analytic-driven organizations by providing many new features, including:

    • Project Maverick’s Live Aggregate Projections, which speed up queries that rely on resource-intensive aggregate functions like SUM, MIN/MAX, and COUNT.
    • Dynamic mixed workload management, which identifies and adapts to varying query complexities (simple and ad-hoc queries as well as long-running advanced queries) and dynamically assigns the appropriate amount of resources to ensure the needs of all data consumers are met.
    • HP Vertica Pulse, an in-database sentiment analysis tool that scores short data posts, including social data such as Twitter feeds or product reviews, to gauge the most popular topics of interest, analyze how sentiment changes over time, and identify advocates and detractors.
    • HP Vertica Place, which stores and analyzes geospatial data in real time, including locations, networks and regions.
    • An expanded SQL-on-Hadoop offering that gives users the freedom to pick their data formats and where to store them, including HDFS, while still benefiting from the power of the Vertica analytic engine.

Of course, there’s a lot more to the “Dragline” release, but these are the highlights.
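
The first of these, live aggregate projections, can be sketched in a few lines of plain Python (a conceptual toy, not Vertica code or its actual mechanism): the aggregate is maintained incrementally at load time, so a query reads a small pre-computed result instead of scanning raw rows.

```python
from collections import defaultdict

class LiveAggregate:
    """Toy model of a live aggregate projection: SUM(amount) GROUP BY region."""
    def __init__(self):
        self.totals = defaultdict(float)  # pre-computed aggregate, kept current

    def load(self, rows):
        # The aggregate is updated as data is loaded, not at query time.
        for region, amount in rows:
            self.totals[region] += amount

    def query_sum(self, region):
        # O(1) lookup instead of scanning every loaded row.
        return self.totals[region]

agg = LiveAggregate()
agg.load([("east", 10.0), ("west", 5.0), ("east", 2.5)])
print(agg.query_sum("east"))  # 12.5
```

The trade-off, as in any materialized aggregate, is extra work at load time in exchange for much cheaper queries.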

Q2. Vertica is referred to as an analytics platform. How does it differentiate with respect to conventional relational database systems (RDBMSes)?

Shilpa Lawande: Good question. First, let me clear up the misconception that column stores are not relational. Vertica is a relational database, an RDBMS: it speaks tables and columns and standard SQL and ODBC, and, like your favorite RDBMS, talks to a variety of BI tools. Now, there are many variations in the database market, from low-cost solutions that lack advanced analytics to high-end solutions that can’t handle big data.
HP Vertica is the only one purpose-built for big data analytics; most conventional RDBMSs were purpose-built for OLTP and then retrofitted for analytics. Vertica’s core architecture, with columnar storage, a columnar engine, aggressive use of data compression, our scale-out architecture, and, most importantly, our unique hybrid load architecture, enables what we call real-time analytics, which gives us the edge over the competition.
You can keep loading your data throughout the day — not in batch at night — and you can query the data as it comes in, without any specialized indexes, materialized views, or other pre-processing. And we have a huge and ever-growing library of features and functions to explore and perform analytics on big data–both structured and semi-structured. All of these core capabilities add up to a powerful analytics platform–far beyond a conventional relational database.

Q3. Vertica is column-based. Could you please explain the main technological differences with respect to a conventional relational database system?

Shilpa Lawande: It’s about performance. A conventional RDBMS is bottlenecked by disk I/O.
The reason for this is that with a traditional database, data is stored on disks in a row-wise manner, so even if the query needs only a few columns, the entire row must be retrieved from disk. In analytic workloads, often there are hundreds of columns in the data and only a few are used in the query, so row-oriented databases simply don’t scale as the data sets get large.
Vendors who offer this type of database often require that you create indexes and materialized views to retrieve the relevant data in a reasonable amount of time. With columnar storage, you store data for each column separately, so that you can grab just the columns you need to answer the query. This can speed query times immensely, where hour-long queries can happen in minutes or seconds. Furthermore, Vertica stores and processes the data sorted, which enables us to do all manner of interesting optimizations to queries that further boost performance.
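
As an illustration only (toy Python, not how any real column store lays out bytes on disk), the difference between the two layouts looks like this:

```python
# The same three-row table, stored two ways.
rows = [  # row-wise: each record's fields are contiguous
    (1, "alice", 34, "boston"),
    (2, "bob",   28, "nyc"),
    (3, "carol", 41, "boston"),
]

columns = {  # column-wise: each column's values are contiguous
    "id":   [1, 2, 3],
    "name": ["alice", "bob", "carol"],
    "age":  [34, 28, 41],
    "city": ["boston", "nyc", "boston"],
}

# SELECT AVG(age): a row store must read whole rows to reach the age field,
# while a column store reads only the "age" column.
avg_from_rows = sum(r[2] for r in rows) / len(rows)
avg_from_cols = sum(columns["age"]) / len(columns["age"])
assert avg_from_rows == avg_from_cols
print(avg_from_cols)
```

With hundreds of columns and only a handful touched by a query, the I/O saved by the column-wise layout is what makes the difference at scale.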

Some of the traditional database vendors out there claim they now have a columnar store, but a true columnar store is not only about the way you store data, but the engine and the optimizations that are enabled by the column store.
For instance, an optimization called late materialization allows Vertica to delay retrieval of columns as late as possible in query processing, so that minimal I/O and data movement is done until absolutely necessary. Vertica is the only engine that is truly columnar; everything else out there is a retrofit of a general-purpose engine that can read some kind of a columnar format.
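
A rough sketch of the idea in Python (purely illustrative; Vertica’s internals are far more sophisticated): the predicate is evaluated on one column first, producing row positions, and the other columns are fetched only at those positions.

```python
# Columns of a toy table, stored separately.
city  = ["boston", "nyc", "boston", "sf"]
name  = ["alice", "bob", "carol", "dave"]
sales = [100, 250, 75, 300]

# Step 1: evaluate the predicate on the filter column alone.
positions = [i for i, c in enumerate(city) if c == "boston"]

# Step 2: "materialize" the remaining columns late, only at matching
# positions, so no data is read or moved for non-matching rows.
result = [(name[i], sales[i]) for i in positions]
print(result)  # [('alice', 100), ('carol', 75)]
```

An early-materializing engine would instead stitch full rows together before filtering, paying for columns it ends up discarding.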

Q4. What is so special about Vertica’s data compression?

Shilpa Lawande: Vertica’s ability to store data in columns allows us to take advantage of similarities within the data. This gives us not only a reduction in the disk footprint needed to store data, but also an I/O performance boost: compressed data takes less time to load. But, even more importantly, we use various encoding techniques on the data itself that enable us to process the data without expanding it first.
We have over a dozen schemes for how we store the data to optimize its storage, retrieval, and processing.
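
Run-length encoding (RLE) is one commonly cited example of such a scheme (this toy Python sketch is illustrative, not Vertica’s implementation): when a sorted column holds long runs of repeated values, an aggregate can be computed directly on the runs without expanding them back into individual values.

```python
def rle_encode(values):
    """Run-length encode a list into [value, count] runs."""
    runs = []
    for v in values:
        if runs and runs[-1][0] == v:
            runs[-1][1] += 1  # extend the current run
        else:
            runs.append([v, 1])  # start a new run
    return runs

column = [5, 5, 5, 5, 2, 2, 9]   # sorted/clustered data compresses well
runs = rle_encode(column)        # [[5, 4], [2, 2], [9, 1]]

# SUM computed on the encoded runs, without decompressing 7 values.
total = sum(value * count for value, count in runs)
print(total)  # 33
```

The encoded form here is shorter than the raw column, and the sum touches three runs instead of seven values; on real sorted columns with millions of repeats, both savings are dramatic.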

Q5. Vertica is designed for massively parallel processing (MPP). What is it?

Shilpa Lawande: Vertica is a database designed to run on a cluster of industry-standard hardware.
There are no special-purpose hardware components. The database is based on a shared-nothing architecture, where many nodes each store part of the database and do part of the work in processing queries. We optimize processing to minimize data traffic over the network. We have built-in high availability to handle node failures. We also have a sophisticated elasticity mechanism that allows us to efficiently add and remove nodes from the cluster. This enables us to scale out to very large data sizes and handle very large data problems. In other words, it is massively parallel processing!
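
The shared-nothing pattern can be sketched conceptually in Python (a toy model, not Vertica code): each "node" owns a shard of the data and aggregates it locally, so only small partial results cross the network to the coordinator.

```python
# 100 values to aggregate, spread across a 4-node cluster.
data = list(range(1, 101))

# Shard the data across 4 nodes (a real system would hash- or
# range-partition at load time; the slicing here is just a stand-in).
shards = [data[i::4] for i in range(4)]

# Each node computes its partial sum independently, over its own storage.
partials = [sum(shard) for shard in shards]

# The coordinator combines the partials: 4 numbers move over the
# network instead of 100 rows.
total = sum(partials)
print(total)  # 5050
```

Adding a node means adding another shard and another partial, which is why the architecture scales out rather than up.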

Q6. In the past, columnar databases were said to be slow to load. Is it still true now?

Shilpa Lawande: This may have been true with older, unsophisticated columnar databases. We have customers loading over 35 TB of data per hour into Vertica, so I think we’ve put that one squarely to rest.

Q7. Who are the users ready to try column-based “data slicers”? And for what kind of use cases?

Shilpa Lawande: Vertica is a technology broadly applicable in many industries and in many business situations. Here are just a few of them.

Data Warehouse Modernization – the customer has an underperforming data warehouse solution in place and wants to replace or augment their current analytics with a solution that will scale and deliver faster analytics at an overall lower TCO, requiring substantially fewer hardware resources.

Hadoop Acceleration – the customer has bought into Hadoop for a data lake solution and would like a more expressive and faster SQL-on-Hadoop solution or an analytic platform that can offer real-time analytics for production use.

Predictive analytics – the customer has some kind of machine data (clickstream logs, call detail records, security event data, network performance data, etc.) collected over long periods of time and would like to get value out of this data via predictive analytics. Use cases include website personalization, network performance optimization, security threat forensics, quality control, predictive maintenance, etc.

Q8. What are the typical indicators which are used to measure how well systems are running and analyzing data in the enterprise? In other words, how “good” is the value derived from analyzing (Big) Data?

Shilpa Lawande: There are many, many advantages and places to derive value from big data.
First, just having the ability to answer your daily analytics questions faster can be a huge boost for the organization. For example, we had one brick-and-mortar retailer who wanted to brief sales associates and managers daily on the hottest-selling products, inventory levels, and other store trends. With their legacy analytics system, they could not deliver these insights fast enough to have them on hand. With Vertica, they now provide very detailed (and, I might add, graphically pleasing) analytics across all of their stores, right in the hands of the store manager via a tablet device. These analytics have boosted sales performance and efficiency across the chain. The user experience they get wouldn’t be possible without the speed of Vertica.

But what is most exciting to me is when Vertica is used to save lives and the environment. We have a client in the medical field who has used Vertica analytics to better detect infections in newborn infants by leveraging the data they have from the NICU. It’s difficult to detect infections in newborns because they don’t often run a fever, nor can they explain how they feel. The estimate is that this big data analytics has saved the lives of hundreds of newborn babies in the first year of use. Another example is the HP Earth Insights project, which used Vertica to create an early warning system to identify species threatened by destruction of tropical forests around the world.
This project, done in cooperation with Conservation International, is making an amazing difference to scientists and helping inform and influence policy decisions around our environment.

There are a LOT of great use cases like these coming out of the Vertica community.

Q9. What are the main technical challenges when analyzing data at speed?

Shilpa Lawande: In an analytics system, you tend to have a lot going on at the same time. There are data loads, both in batch and trickle loads. There is daily and regular analytics for generating daily reports. There may be data discovery where users are trying to find value in data. Of course, there are dashboards that executives rely upon to stay up to date. Finally, you may have ad-hoc queries that come in and try to take away resources. So perhaps the biggest challenge is dealing with all of these workloads and coming up with the most efficient way to manage it all.
We’ve invested a lot of resources in this area and the fruit of that labor is very much evident in the “Dragline” release.

Q10. Do you have some concrete example of use cases where HP Vertica is used to analyze data at speed?

Shilpa Lawande: Yes, we have many, see here.

Q11. How does HP Vertica differ from other analytical platforms offered by competitors such as IBM and Teradata, and from in-memory databases such as SAP HANA?

Shilpa Lawande: Vertica offers everything that’s good about legacy data warehouse technologies like the ability to use your favorite visualization tools, standard SQL, and advanced analytic functionality.

In general, the legacy databases you mentioned are pretty good at handling analysis of business data, but they are still playing catch-up when it comes to big data – the volume, variety, and velocity. A row store simply cannot deliver the analytical performance and scale of an MPP columnar platform like Vertica.

In-memory databases are a good acceleration solution for some classes of business analytics, but, again, when it comes to very large data problems, the economics of putting all the data in memory simply do not work. That said, Vertica itself has an in-memory component which is at the core of our high-speed loading architecture, so I believe we have the best of both worlds – ability to use memory where it matters and still support petabyte scales!

——————–
Shilpa Lawande, VP Software Engineering and Customer Experience, HP Vertica

Shilpa Lawande has been an integral part of the Vertica engineering team from its inception to its acquisition by HP in 2011. Shilpa brings over 15 years of experience in databases, data warehousing and grid computing to HP/Vertica.
Besides being responsible for Vertica’s Engineering team, Shilpa also manages the Customer Experience organization for Vertica including Customer Support, Training and Professional Services. Prior to Vertica, she was a key member of the Oracle Server Technologies group where she worked directly on several data warehousing and self-managing features in the Oracle Database.
Shilpa is a co-inventor on several patents on query optimization, materialized views and automatic index tuning for databases. She has also co-authored two books on data warehousing using the Oracle database as well as a book on Enterprise Grid Computing. She has been named to the 2012 Women to Watch list by Mass High Tech and awarded HP Software Business Unit Leader of the year in 2012.
Shilpa has a Masters in Computer Science from the University of Wisconsin-Madison and a Bachelors in Computer Science and Engineering from the Indian Institute of Technology, Mumbai.

Related Posts

[1] On Big Data: Interview with Shilpa Lawande, VP of Engineering at Vertica, ODBMS Industry Watch, November 16, 2011




Comments
  1. Interesting interview. I have following questions to Shilpa:
    1. Are there benchmarks between Vertica and PostgreSQL?
    2. Vertica white papers report they have added float and varchar to C-Store’s integer. What about the dozen other data types which exist in RDBMSs like PostgreSQL?
    3. How does Vertica cover SQL syntax? What about window functions or CTE?
    4. Which transaction isolation levels are supported by Vertica?
    5. You (Vertica) took code from #PostgreSQL. What are you giving back to the open source community?

  2. Sorry for the delay Stefan and thank you for the questions.

    We do not spend a lot of time on benchmarks because, as you’d probably agree, it is possible for any system to be made to look better than any other system in a canned benchmark. Our value is demonstrated by the customer successes we have on their real-life business scenarios. Specifically, we’ve never considered comparing ourselves to PostgreSQL per se, because we don’t think it is solving the same kind of problem as Vertica. Postgres is a great open-source database for OLTP workloads, but comparing it to Vertica would be apples vs. oranges. There is a key principle we take to heart: one size doesn’t fit all; use the right tool for the job. We do not claim to be a general-purpose RDBMS and we do not aim to solve the OLTP problem at all.

    When Mike Stonebraker founded Vertica in 2005, you might have expected him to base his new analytics database on sharding Postgres, which he had invented over 20 years earlier. Instead, Vertica was built mostly from scratch. We did borrow the SQL parser, semantic analysis, and some client code from Postgres (why reinvent the wheel?), but much of the database engine, anything that impacts performance for analytic workloads, is written from scratch: the catalog, optimizer, execution engine, transactions, recovery, etc. We’ve also deviated quite a ways from Postgres to make the behavior compliant with SQL standards.

    Our documentation is public and details all of our functionality, including data types, transaction isolation levels, and SQL support: http://www.vertica.com/documentation/hp-vertica-analytics-platform-7-0-x-product-documentation/

    Regarding giving back to the community, we offer a Community Edition of Vertica, a completely free version usable up to 1 TB on 3 nodes. Anyone can download this version and use it forever. With this version, you get all of the great new features we’ve developed over the years. Many users take advantage of this offer and use Vertica’s fast analytics without paying us a dime. It’s available at https://my.vertica.com/community/. We also share a number of extensions to Vertica with the community as open source, including components to connect to Vertica from Hadoop and Pig.

    The Vertica user community also supports us. Our new marketplace (http://vertica.com/marketplace) is full of add-ons to Vertica that help with data ingestion, visualization, and specialized analytics, to name a few. These add-ons greatly enhance the value of the community version of HP Vertica, as well as the Enterprise version, and many are also free to use.
