{"id":3357,"date":"2014-07-14T07:39:29","date_gmt":"2014-07-14T07:39:29","guid":{"rendered":"http:\/\/www.odbms.org\/blog\/?p=3357"},"modified":"2014-07-14T07:39:29","modified_gmt":"2014-07-14T07:39:29","slug":"interview-shilpa-lawande-2","status":"publish","type":"post","link":"https:\/\/www.odbms.org\/blog\/2014\/07\/interview-shilpa-lawande-2\/","title":{"rendered":"On Column Stores. Interview with Shilpa Lawande"},"content":{"rendered":"<blockquote><p><strong><em>&#8220;A true columnar store is not only about the way you store data, but the engine and the optimizations that are enabled by the column store&#8221;<\/em>&#8211;Shilpa Lawande<\/strong>.<\/p><\/blockquote>\n<p>On the subject of column stores, and what are the important features in the new release of the HP Vertica Analytics Platform, I have interviewed <strong>Shilpa Lawande<\/strong>, VP Engineering &amp; Customer Experience at <a onclick=\"javascript:pageTracker._trackPageview('\/outgoing\/www.vertica.com');\"  href=\"http:\/\/www.vertica.com\" target=\"_blank\">HP Vertica<\/a>.<\/p>\n<p>RVZ<\/p>\n<p><strong>Q1. Back in <a onclick=\"javascript:pageTracker._trackPageview('\/outgoing\/www.odbms.org\/blog\/2011\/11\/on-big-data-interview-with-shilpa-lawande-vp-of-engineering-at-vertica\/');\"  href=\"http:\/\/www.odbms.org\/blog\/2011\/11\/on-big-data-interview-with-shilpa-lawande-vp-of-engineering-at-vertica\/\" target=\"_blank\">2011 I did an interview with you<\/a> [1] at the time Vertica was just acquired by HP. 
What is new in the current version of Vertica?<\/strong><\/p>\n<p><strong>Shilpa Lawande<\/strong>: We\u2019ve come a long way since 2011 and our innovation engine is going strong!<br \/>\nFrom \u201c<a onclick=\"javascript:pageTracker._trackPageview('\/outgoing\/www.vertica.com\/vertica-6-1-bulldozer\/');\"  href=\"http:\/\/www.vertica.com\/vertica-6-1-bulldozer\/\" target=\"_blank\">Bulldozer<\/a>\u201d to \u201c<a onclick=\"javascript:pageTracker._trackPageview('\/outgoing\/www.vertica.com\/hp-vertica-analytics-platform-7-crane\/');\"  href=\"http:\/\/www.vertica.com\/hp-vertica-analytics-platform-7-crane\/\" target=\"_blank\">Crane<\/a>\u201d and now \u201c<a onclick=\"javascript:pageTracker._trackPageview('\/outgoing\/www.vertica.com\/content\/hp-vertica-dragline-2\/');\"  href=\"http:\/\/www.vertica.com\/content\/hp-vertica-dragline-2\/\" target=\"_blank\">Dragline<\/a>,\u201d we\u2019ve built on our columnar-compressed, MPP shared-nothing core, expanded security and manageability, dramatically expanded data ingestion capabilities, and, most exciting of all, added a host of advanced analytics functions and extensibility APIs to the <a onclick=\"javascript:pageTracker._trackPageview('\/outgoing\/www.vertica.com\/hp-vertica-products\/the-analytics-platform-2\/');\"  href=\"http:\/\/www.vertica.com\/hp-vertica-products\/the-analytics-platform-2\/\" target=\"_blank\">HP Vertica Analytics Platform <\/a>itself. 
One key innovation is our ability to ingest and auto-schematize semi-structured data using HP Vertica Flex Zone, which takes away much of the friction in the analytic life-cycle from exploration to production.<br \/>\nWe\u2019ve also grown a vibrant community of practitioners and an ecosystem of complementary tools, including Hadoop.<\/p>\n<p><a onclick=\"javascript:pageTracker._trackPageview('\/outgoing\/www.vertica.com\/2014\/05\/28\/introducing-hp-vertica-dragline\/');\"  href=\"http:\/\/www.vertica.com\/2014\/05\/28\/introducing-hp-vertica-dragline\/\" target=\"_blank\">Dragline<\/a>, our next release of the HP Vertica Analytics Platform, addresses the needs of the most demanding, analytic-driven organizations by providing many new features, including:<\/p>\n<ul>\n<ul>\n<li><a onclick=\"javascript:pageTracker._trackPageview('\/outgoing\/www.vertica.com\/2014\/07\/01\/live-aggregate-projections-with-hp-vertica\/');\"  href=\"http:\/\/www.vertica.com\/2014\/07\/01\/live-aggregate-projections-with-hp-vertica\/\" target=\"_blank\">Project Maverick\u2019s Live Aggregate Projections<\/a> to speed up queries that rely on resource-intensive aggregate functions like SUM, MIN\/MAX, and COUNT.<\/li>\n<li>Dynamic mixed workload management, which identifies and adapts to varying query complexities \u2014 simple and ad-hoc queries as well as long-running advanced queries \u2014 and dynamically assigns the appropriate amount of resources to meet the needs of all data consumers.<\/li>\n<li>HP Vertica Pulse, an in-database sentiment analysis tool that scores short data posts, including social data such as Twitter feeds or product reviews, to gauge the most popular topics of interest, analyze how sentiment changes over time, and identify advocates and detractors.<\/li>\n<li>HP Vertica Place, which stores and analyzes geospatial data in real time, including locations, networks and regions.<\/li>\n<li>An expanded SQL-on-Hadoop offering that 
gives users the freedom to pick their data formats and where to store them, including HDFS, while still benefiting from the power of the Vertica analytic engine. Of course, there\u2019s a lot more to the \u201cDragline\u201d release, but these are the highlights.<\/li>\n<\/ul>\n<\/ul>\n<p><strong>Q2. Vertica is referred to as an analytics platform. How does it differ from conventional relational database systems (RDBMSes)?<\/strong><\/p>\n<p><strong>Shilpa Lawande<\/strong>: Good question. First, let me clear up the misconception that column stores are not relational \u2013 Vertica is a relational database, an RDBMS \u2013 it speaks tables and columns and standard SQL and ODBC, and, like your favorite RDBMS, talks to a variety of BI tools. Now, there are many variations in the database market, from low-cost solutions that lack advanced analytics to high-end solutions that can\u2019t handle big data.<br \/>\nHP Vertica is the only one purpose-built for big data analytics \u2013 most conventional RDBMSes were purpose-built for OLTP and then retrofitted for analytics. Vertica\u2019s core architecture with columnar storage, a columnar engine, aggressive use of data compression, our scale-out architecture, and, most importantly, our unique hybrid load architecture enables what we call real-time analytics, which gives us the edge over the competition.<br \/>\nYou can keep loading your data throughout the day &#8212; not in batch at night &#8212; and you can query the data as it comes in, without any specialized indexes, materialized views, or other pre-processing. And we have a huge and ever-growing library of features and functions to explore and perform analytics on big data&#8211;both structured and semi-structured. All of these core capabilities add up to a powerful analytics platform&#8211;far beyond a conventional relational database.<\/p>\n<p><strong>Q3. Vertica is column-based. 
Could you please explain the main technological differences with respect to a conventional relational database system?<\/strong><\/p>\n<p><strong>Shilpa Lawande<\/strong>: It\u2019s about performance. A conventional RDBMS is bottlenecked by disk I\/O.<br \/>\nThe reason for this is that with a traditional database, data is stored on disk in a row-wise manner, so even if the query needs only a few columns, the entire row must be retrieved from disk. In analytic workloads, there are often hundreds of columns in the data and only a few are used in the query, so row-oriented databases simply don\u2019t scale as the data sets get large.<br \/>\nVendors who offer this type of database often require that you create indexes and materialized views to retrieve the relevant data in a reasonable amount of time. With columnar storage, you store the data for each column separately, so that you can grab just the columns you need to answer the query. This can speed query times immensely, where hour-long queries can happen in minutes or seconds. Furthermore, Vertica stores and processes the data sorted, which enables us to do all manner of interesting optimizations to queries that further boost performance.<\/p>\n<p>Some of the traditional database vendors out there claim they now have a columnar store, but a true columnar store is not only about the way you store data, but the engine and the optimizations that are enabled by the column store.<br \/>\nFor instance, an optimization called late materialization allows Vertica to delay retrieval of columns as late as possible in query processing so that minimal I\/O and data movement is done until absolutely necessary. Vertica is the only engine that is truly columnar; everything else out there is a retrofit of a general-purpose engine that can read some kind of a columnar format.<\/p>\n<p><strong>Q4. 
What is so special about Vertica\u2019s data compression?<\/strong><\/p>\n<p><strong>Shilpa Lawande<\/strong>: The capability of Vertica to store data in columns allows us to take advantage of similar traits in the data. This gives us not only a footprint reduction in the disk needed to store data, but also an I\/O performance boost &#8212; compressed data takes a shorter time to load. But, even more importantly, we use various encoding techniques on the data itself that enable us to process the data without expanding it first.<br \/>\nWe have over a dozen schemes for how we store the data to optimize its storage, retrieval, and processing.<\/p>\n<p><strong>Q5. Vertica is designed for massively parallel processing (MPP). What is it?<\/strong><\/p>\n<p><strong>Shilpa Lawande<\/strong>: Vertica is a database designed to run on a cluster of industry-standard hardware.<br \/>\nThere are no special-purpose hardware components. The database is based on a shared-nothing architecture, where many nodes each store part of the database and do part of the work in processing queries. We optimize the processing so as to minimize data traffic over the network. We have built-in high availability to handle node failures. We also have a sophisticated elasticity mechanism that allows us to efficiently add and remove nodes from the cluster. This enables us to scale out to very large data sizes and handle very large data problems. In other words, it is massively parallel processing!<\/p>\n<p><strong>Q6. In the past, columnar databases were said to be slow to load. Is it still true now?<\/strong><\/p>\n<p><strong>Shilpa Lawande<\/strong>: This may have been true with older, unsophisticated columnar databases. We have customers loading over 35 TB of data per hour into Vertica, so I think we\u2019ve put that one squarely to rest.<\/p>\n<p><strong>Q7. Who are the users ready to try column-based &#8220;data slicers&#8221;? 
And for what kind of use cases?<\/strong><\/p>\n<p><strong>Shilpa Lawande<\/strong>: Vertica is a technology broadly applicable in many industries and in many business situations. Here are just a few of them.<\/p>\n<p><em>Data Warehouse Modernization<\/em> \u2013 the customer has an underperforming data warehouse solution in place and wants to replace or augment their current analytics with a solution that will scale and deliver faster analytics at an overall lower TCO, requiring substantially fewer hardware resources.<\/p>\n<p><em>Hadoop Acceleration<\/em> \u2013 the customer has bought into Hadoop for a data lake solution and would like a more expressive and faster SQL-on-Hadoop solution, or an analytic platform that can offer real-time analytics for production use.<\/p>\n<p><em>Predictive analytics<\/em> \u2013 the customer has some kind of machine data, clickstream logs, call detail records, security event data, network performance data, etc. over long periods of time and would like to get value out of this data via predictive analytics. Use cases include website personalization, network performance optimization, security threat forensics, quality control, predictive maintenance, etc.<\/p>\n<p><strong>Q8. What are the typical indicators used to measure how well systems are running and analyzing data in the enterprise? In other words, how &#8220;good&#8221; is the value derived from analyzing (Big) Data?<\/strong><\/p>\n<p><strong>Shilpa Lawande<\/strong>: There are many, many advantages and places to derive value from big data.<br \/>\nFirst, just having the ability to answer your daily analytics faster can be a huge boost for the organization. For example, we had one brick-and-mortar retailer who wanted to brief sales associates and managers daily on what the hottest-selling products were, who had inventory, and other store trends. 
With their legacy analytics system, they could not deliver these analytics fast enough to have them on hand. With Vertica, they now provide very detailed (and, I might add, graphically pleasing) analytics across all of their stores, right in the hands of the store manager via a tablet device. These analytics have boosted sales performance and efficiency across the chain. The user experience they get wouldn\u2019t be possible without the speed of Vertica.<\/p>\n<p>But what is most exciting to me is when Vertica is used to save lives and the environment. We have a client in the medical field who has used Vertica analytics to better detect infections in newborn infants by leveraging the data they have from the <a onclick=\"javascript:pageTracker._trackPageview('\/outgoing\/en.wikipedia.org\/wiki\/Neonatal_intensive_care_unit');\"  href=\"http:\/\/en.wikipedia.org\/wiki\/Neonatal_intensive_care_unit\" target=\"_blank\">NICU<\/a>. It\u2019s difficult to detect infections in newborns because they don\u2019t often run a fever, nor can they explain how they feel. The estimate is that these big data analytics have saved the lives of hundreds of newborn babies in the first year of use. 
Another example is the <a onclick=\"javascript:pageTracker._trackPageview('\/outgoing\/www8.hp.com\/h71061\/index.html');\"  href=\"http:\/\/www8.hp.com\/h71061\/index.html\" target=\"_blank\">HP Earth Insights project<\/a>, which used Vertica to create an early warning system to identify species threatened by destruction of tropical forests around the world.<br \/>\nThis project, done in cooperation with <a onclick=\"javascript:pageTracker._trackPageview('\/outgoing\/www.conservation.org');\"  href=\"http:\/\/www.conservation.org\" target=\"_blank\">Conservation International<\/a>, is making an amazing difference to scientists and helping inform and influence policy decisions around our environment.<\/p>\n<p>There are a LOT of great use cases like these coming out of the Vertica community.<\/p>\n<p><strong>Q9. What are the main technical challenges when analyzing data at speed?<\/strong><\/p>\n<p><strong>Shilpa Lawande<\/strong>: In an analytics system, you tend to have a lot going on at the same time. There are data loads, both batch and trickle. There are regular analytics for generating daily reports. There may be data discovery, where users are trying to find value in data. Of course, there are dashboards that executives rely upon to stay up to date. Finally, you may have ad-hoc queries that come in and try to take away resources. So perhaps the biggest challenge is dealing with all of these workloads and coming up with the most efficient way to manage it all.<br \/>\nWe\u2019ve invested a lot of resources in this area and the fruit of that labor is very much evident in the \u201cDragline\u201d release.<\/p>\n<p><strong>Q10. 
Do you have some concrete examples of use cases where HP Vertica is used to analyze data at speed?<\/strong><\/p>\n<p><strong>Shilpa Lawande<\/strong>: Yes, we have many; see <a onclick=\"javascript:pageTracker._trackPageview('\/outgoing\/www.vertica.com\/customers\/case-studies\/');\"  href=\"http:\/\/www.vertica.com\/customers\/case-studies\/\" target=\"_blank\">here<\/a>.<\/p>\n<p><strong>Q11. How does HP Vertica differ from the analytical platforms offered by competitors such as IBM and Teradata, and from in-memory databases such as SAP HANA?<\/strong><\/p>\n<p><strong>Shilpa Lawande<\/strong>: Vertica offers everything that\u2019s good about legacy data warehouse technologies, like the ability to use your favorite visualization tools, standard SQL, and advanced analytic functionality.<\/p>\n<p>In general, the legacy databases you mentioned are pretty good at handling analysis of business data, but they are still playing catch-up when it comes to big data \u2013 the volume, variety, and velocity. A row store simply cannot deliver the analytical performance and scale of an MPP columnar platform like Vertica.<\/p>\n<p>In-memory databases are a good acceleration solution for some classes of business analytics, but, again, when it comes to very large data problems, the economics of putting all the data in memory simply do not work. 
That said, Vertica itself has an in-memory component at the core of our high-speed loading architecture, so I believe we have the best of both worlds \u2013 the ability to use memory where it matters and still support petabyte scales!<\/p>\n<p>&#8212;&#8212;&#8212;&#8212;&#8212;&#8212;&#8211;<br \/>\n<a onclick=\"javascript:pageTracker._trackPageview('\/outgoing\/www.odbms.org\/blog\/wp-content\/uploads\/2014\/07\/ShilpaLawande_web.jpg');\"  href=\"http:\/\/www.odbms.org\/blog\/wp-content\/uploads\/2014\/07\/ShilpaLawande_web.jpg\"><img decoding=\"async\" loading=\"lazy\" class=\"alignnone size-full wp-image-3359\" alt=\"ShilpaLawande_web\" src=\"http:\/\/www.odbms.org\/blog\/wp-content\/uploads\/2014\/07\/ShilpaLawande_web.jpg\" width=\"93\" height=\"100\" \/><\/a><br \/>\n<strong>Shilpa Lawande<\/strong>, VP Software Engineering and Customer Experience, HP\/Vertica<\/p>\n<p><em>Shilpa Lawande has been an integral part of the Vertica engineering team from its inception to its acquisition by HP in 2011. Shilpa brings over 15 years of experience in databases, data warehousing and grid computing to HP\/Vertica.<br \/>\nBesides being responsible for Vertica&#8217;s Engineering team, Shilpa also manages the Customer Experience organization for Vertica, including Customer Support, Training and Professional Services. Prior to Vertica, she was a key member of the Oracle Server Technologies group, where she worked directly on several data warehousing and self-managing features in the Oracle Database.<br \/>\nShilpa is a co-inventor on several patents on query optimization, materialized views and automatic index tuning for databases. She has also co-authored two books on data warehousing using the Oracle database as well as a book on Enterprise Grid Computing. 
She has been named to the 2012 Women to Watch list by Mass High Tech and was awarded HP Software Business Unit Leader of the Year in 2012.<br \/>\nShilpa has a Master&#8217;s in Computer Science from the University of Wisconsin-Madison and a Bachelor&#8217;s in Computer Science and Engineering from the Indian Institute of Technology, Mumbai.<\/em><\/p>\n<p><strong>Related Posts<\/strong><\/p>\n<p>[1] <a onclick=\"javascript:pageTracker._trackPageview('\/outgoing\/www.odbms.org\/blog\/2011\/11\/on-big-data-interview-with-shilpa-lawande-vp-of-engineering-at-vertica\/');\"  href=\"http:\/\/www.odbms.org\/blog\/2011\/11\/on-big-data-interview-with-shilpa-lawande-vp-of-engineering-at-vertica\/\" target=\"_blank\">On Big Data: Interview with Shilpa Lawande, VP of Engineering at Vertica, ODBMS Industry Watch, November 16, 2011<\/a><\/p>\n<p><strong>Resources<\/strong><\/p>\n<p>&#8211;<a onclick=\"javascript:pageTracker._trackPageview('\/outgoing\/www.odbms.org\/2014\/06\/hp-vertica-analytics-platform-manage-massive-volumes-smart-meter-data\/');\"  href=\"http:\/\/www.odbms.org\/2014\/06\/hp-vertica-analytics-platform-manage-massive-volumes-smart-meter-data\/\" target=\"_blank\">Using the HP Vertica Analytics Platform to Manage Massive Volumes of Smart Meter Data. HP Technical white paper 2014.<\/a><\/p>\n<p>&#8211; <a onclick=\"javascript:pageTracker._trackPageview('\/outgoing\/www.odbms.org\/2014\/06\/knowing-game-electronic-game-companies-big-data-retention-monetization\/');\"  href=\"http:\/\/www.odbms.org\/2014\/06\/knowing-game-electronic-game-companies-big-data-retention-monetization\/\" target=\"_blank\">Knowing your game. How electronic game companies use Big Data for retention and monetization. 
HP Business white paper June 2014.<\/a><\/p>\n<p>&#8211; <a onclick=\"javascript:pageTracker._trackPageview('\/outgoing\/www.odbms.org\/2014\/04\/big-data-applications-analytics\/');\"  href=\"http:\/\/www.odbms.org\/2014\/04\/big-data-applications-analytics\/\" target=\"_blank\">Big Data Applications and Analytics. Dr. Geoffrey Fox, Indiana University. Lecture Notes<\/a><\/p>\n<p>&#8211; <a onclick=\"javascript:pageTracker._trackPageview('\/outgoing\/www.odbms.org\/2014\/05\/applied-predictive-analytics-principles-techniques-professional-data-analyst-2\/');\"  href=\"http:\/\/www.odbms.org\/2014\/05\/applied-predictive-analytics-principles-techniques-professional-data-analyst-2\/\" target=\"_blank\">Applied Predictive Analytics: Principles and Techniques for the Professional Data Analyst. Dean Abbott, Wiley May 2014<\/a><\/p>\n<p><strong>Follow ODBMS.org on Twitter: <a onclick=\"javascript:pageTracker._trackPageview('\/outgoing\/twitter.com\/odbmsorg');\"  href=\"https:\/\/twitter.com\/odbmsorg\" target=\"_blank\">@odbmsorg<\/a><\/strong><\/p>\n<!-- AddThis Advanced Settings generic via filter on the_content --><!-- AddThis Share Buttons generic via filter on the_content -->","protected":false},"excerpt":{"rendered":"<p>&#8220;A true columnar store is not only about the way you store data, but the engine and the optimizations that are enabled by the column store&#8221;&#8211;Shilpa Lawande. 
On the subject of column stores, and what are the important features in the new release of the HP Vertica Analytics Platform, I have interviewed Shilpa Lawande, VP [&hellip;]<!-- AddThis Advanced Settings generic via filter on get_the_excerpt --><!-- AddThis Share Buttons generic via filter on get_the_excerpt --><\/p>\n","protected":false},"author":1,"featured_media":0,"comment_status":"open","ping_status":"open","sticky":false,"template":"","format":"standard","meta":[],"categories":[1],"tags":[35,66,692,260,658,276,490,499,531,549],"_links":{"self":[{"href":"https:\/\/www.odbms.org\/blog\/wp-json\/wp\/v2\/posts\/3357"}],"collection":[{"href":"https:\/\/www.odbms.org\/blog\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/www.odbms.org\/blog\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/www.odbms.org\/blog\/wp-json\/wp\/v2\/users\/1"}],"replies":[{"embeddable":true,"href":"https:\/\/www.odbms.org\/blog\/wp-json\/wp\/v2\/comments?post=3357"}],"version-history":[{"count":20,"href":"https:\/\/www.odbms.org\/blog\/wp-json\/wp\/v2\/posts\/3357\/revisions"}],"predecessor-version":[{"id":3379,"href":"https:\/\/www.odbms.org\/blog\/wp-json\/wp\/v2\/posts\/3357\/revisions\/3379"}],"wp:attachment":[{"href":"https:\/\/www.odbms.org\/blog\/wp-json\/wp\/v2\/media?parent=3357"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/www.odbms.org\/blog\/wp-json\/wp\/v2\/categories?post=3357"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/www.odbms.org\/blog\/wp-json\/wp\/v2\/tags?post=3357"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}