ODBMS Industry Watch – Trends and Information on Big Data, New Data Management Technologies, Data Science and Innovation (http://www.odbms.org/blog)

Database Challenges and Innovations. Interview with Jim Starkey
August 31, 2016

“Isn’t it ironic that in 2016 a non-skilled user can find a web page from Google’s untold petabytes of data in millisecond time, but a highly trained SQL expert can’t do the same thing in a relational database one billionth the size?”–Jim Starkey.

I have interviewed Jim Starkey, a database legend. Jim’s career as an entrepreneur, architect, and innovator spans more than three decades of database history.

RVZ

Q1. In your opinion, what are the most significant advances in databases in the last few years?

Jim Starkey: I’d have to say the “atom programming model” where a database is layered on a substrate of peer-to-peer replicating distributed objects rather than disk files. The atom programming model enables scalability, redundancy, high availability, and distribution not available in traditional, disk-based database architectures.

Q2. What was your original motivation to invent the NuoDB Emergent Architecture?

Jim Starkey: It all grew out of a long Sunday morning shower. I knew that the performance limits of single-computer database systems were in sight, so distributing the load was the only possible solution, but existing distributed systems required that a new node copy a complete database or partition before it could do useful work. I started thinking of ways to attack this problem and came up with the idea of peer to peer replicating distributed objects that could be serialized for network delivery and persisted to disk. It was a pretty neat idea. I came out much later with the core architecture nearly complete and very wrinkled (we have an awesome domestic hot water system).

Q3. In your career as an entrepreneur and architect what was the most significant innovation you did?

Jim Starkey: Oh, clearly multi-generational concurrency control (MVCC). The problem I was trying to solve was allowing ad hoc access to a production database for a 4GL product I was working on at the time, but the ramifications go far beyond that. MVCC is the core technology that makes true distributed database systems possible. Transaction serialization is like Newtonian physics – all observers share a single universal reference frame. MVCC is like special relativity, where each observer views the universe from his or her reference frame. The views appear different but are, in fact, consistent.
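To make the reference-frame analogy concrete, here is a minimal, purely illustrative Java sketch of MVCC-style snapshot visibility: every write appends a version stamped with its committing transaction ID, and each reader sees only the newest version committed at or before its own snapshot. This is an editor's simplification for illustration, not the design of InterBase, NuoDB, or any other product discussed here.

```java
import java.util.ArrayList;
import java.util.List;

// Toy illustration of multi-version concurrency control: every write appends
// a new version; a reader sees only the latest version committed at or before
// its snapshot transaction ID, so concurrent readers never block writers.
public class MvccSketch {

    // One committed version of a row value.
    record Version(long committedTxn, String value) {}

    // The version chain for a single row, oldest first.
    static final List<Version> versions = new ArrayList<>();

    // A writer commits a new version stamped with its transaction ID.
    static void write(long txnId, String value) {
        versions.add(new Version(txnId, value));
    }

    // A reader with snapshot ID s sees the newest version with committedTxn <= s.
    static String read(long snapshotTxn) {
        String visible = null;
        for (Version v : versions) {
            if (v.committedTxn() <= snapshotTxn) {
                visible = v.value();
            }
        }
        return visible;
    }

    public static void main(String[] args) {
        write(10, "balance=100");
        write(20, "balance=80");

        // Two concurrent readers, each in its own "reference frame":
        System.out.println(read(15)); // sees balance=100
        System.out.println(read(25)); // sees balance=80
    }
}
```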

Q4. Proprietary vs. open source software: what are the pros and cons?

Jim Starkey: It’s complicated. I’ve had feet in both camps for 15 years. But let’s draw a distinction between open source and open development. Open development – where anyone can contribute – is pretty good at delivering implementations of established technologies, but it’s very difficult to push the state of the art in that environment. Innovation, in my experience, requires focus, vision, and consistency that are hard to maintain in open development. If you have a controlled development environment, the question of open source versus propriety is tactics, not philosophy. Yes, there’s an argument that having the source available gives users guarantees they don’t get from proprietary software, but with something as complicated as a database, most users aren’t going to try to master the sources. But having source available lowers the perceived risk of new technologies, which is a big plus.

Q5. You led the Falcon project – a transactional storage engine for the MySQL server – through the acquisition of MySQL by Sun Microsystems. What impact did this project have in the database space?

Jim Starkey: In all honesty, I’d have to say that Falcon’s most important contribution was its competition with InnoDB. In the end, that competition made InnoDB three times faster. Falcon, multi-version in memory using the disk for backfill, was interesting, but no matter how we cut it, it was limited by the performance of the machine it ran on. It was fast, but no single node database can be fast enough.

Q6. What are the most challenging issues in databases right now?

Jim Starkey: I think it’s time to step back and reexamine the assumptions that have accreted around database technology – data model, API, access language, data semantics, and implementation architectures. The “relational model”, for example, is based on what Codd called relations and we call tables, but otherwise has nothing to do with his mathematical model. That model, based on set theory, requires automatic duplicate elimination. To the best of my knowledge, nobody ever implemented Codd’s model, but we still have tables which bear a scary resemblance to decks of punch cards. Are they necessary? Or do they just get in the way?
Isn’t it ironic that in 2016 a non-skilled user can find a web page from Google’s untold petabytes of data in millisecond time, but a highly trained SQL expert can’t do the same thing in a relational database one billionth the size? SQL has no provision for flexible text search, no provision for multi-column, multi-table search, and no mechanics in the APIs to handle the results if it could do them. And this is just one of a dozen problems that SQL databases can’t handle. SQL was a really good technical fit for the computers, memory, and disks of the 1980s, but is it the right answer now?

Q7. How do you see the database market evolving?

Jim Starkey: I’m afraid my crystal ball isn’t that good. Blobs, another of my creations, spread throughout the industry in two years. MVCC took 25 years to become ubiquitous. I have a good idea of where I think it should go, but little expectation of how or when it will.

Qx. Anything else you wish to add?

Jim Starkey: Let me say a few things about my current project, AmorphousDB, an implementation of the Amorphous Data Model (meaning, no data model at all). AmorphousDB is my modest effort to question everything database.
The best way to think about Amorphous is to envision a relational database and mentally erase the boxes around the tables so all records free float in the same space – including data and metadata. Then, if you’re uncomfortable, add back a “record type” attribute and associated syntactic sugar, so table-type semantics are available, but optional. Then abandon punch card data semantics and view all data as abstract and subject to search. Eliminate the fourteen different types of numbers and strings, leaving simply numbers and strings, but add useful types like URLs, email addresses, and money. Index everything unless told not to. Finally, imagine an API that fits on a single sheet of paper (OK, 9 point font, both sides) and an implementation that can span hundreds of nodes. That’s AmorphousDB.
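Purely as an illustration of “an API that fits on a single sheet of paper”, here is a hypothetical Java interface in that spirit. Every name and method below is invented for illustration; this is not AmorphousDB’s actual API, which is not published in this interview.

```java
import java.util.List;
import java.util.Map;

// Hypothetical, invented interface in the spirit of the description above:
// free-floating records, an optional "record type" attribute, everything
// indexed and searchable. These names are illustrative only and are not
// AmorphousDB's real API.
public interface AmorphousStoreSketch {

    // Store a record: an arbitrary bag of attributes, no schema required.
    // An optional "recordType" attribute restores table-like semantics.
    String put(Map<String, Object> attributes);

    // Fetch a record by its identifier.
    Map<String, Object> get(String recordId);

    // Search across all records and attributes; every attribute is indexed
    // unless explicitly excluded when it was stored.
    List<Map<String, Object>> search(String query);

    // Remove a record.
    void delete(String recordId);
}
```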

————
Jim Starkey invented the NuoDB Emergent Architecture, and developed the initial implementation of the product. He founded NuoDB [formerly NimbusDB] in 2008, and retired at the end of 2012, shortly before the NuoDB product launch.

Jim’s career as an entrepreneur, architect, and innovator spans more than three decades of database history, from the Datacomputer project on the fledgling ARPAnet to his most recent startup, NuoDB, Inc. Throughout that period, he has been responsible for many database innovations, from the date data type to the BLOB to multi-version concurrency control (MVCC). Starkey has extensive experience in proprietary and open source software.

Starkey joined Digital Equipment Corporation in 1975, where he created the Datatrieve family of products, the DEC Standard Relational Interface architecture, and the first of the Rdb products, Rdb/ELN. Starkey was also software architect for DEC’s database machine group.

Leaving DEC in 1984, Starkey founded Interbase Software to develop relational database software for the engineering workstation market. Interbase was a technical leader in the database industry, producing the first commercial implementations of heterogeneous networking, blobs, triggers, two-phase commit, database events, and more. Ashton-Tate acquired Interbase Software in 1991, and was, in turn, acquired by Borland International a few months later. The Interbase database engine was released as open source by Borland in 2000 and became the basis for the Firebird open source database project.

In 2000, Starkey founded Netfrastructure, Inc., to build a unified platform for distributable, high quality Web applications. The Netfrastructure platform included a relational database engine, an integrated search engine, an integrated Java virtual machine, and a high performance page generator.

MySQL AB acquired Netfrastructure, Inc. in 2006 to serve as the kernel of a wholly owned transactional storage engine for the MySQL server, later known as Falcon. Starkey led the Falcon project through the acquisition of MySQL by Sun Microsystems.

Jim has a degree in Mathematics from the University of Wisconsin.
For amusement, Jim codes on weekends, while sailing, but not while flying his plane.

——————

Resources

NuoDB Emergent Architecture (.PDF)

On Database Resilience. Interview with Seth Proctor, ODBMS Industry Watch, March 17, 2015

Related Posts

– Challenges and Opportunities of The Internet of Things. Interview with Steve Cellini, ODBMS Industry Watch, October 7, 2015

– Hands-On with NuoDB and Docker. By MJ Michaels, NuoDB. ODBMS.org, October 27, 2015

– How leading Operational DBMSs rank popularity wise? By Michael Waclawiczek, ODBMS.org, January 27, 2016

– A Glimpse into U-SQL. By Stephen Dillon, Schneider Electric, ODBMS.org, December 7, 2015

– Gartner Magic Quadrant for Operational DBMS 2015

Follow us on Twitter: @odbmsorg

##

Using NoSQL for Ireland’s Online Tax Research Database.
May 2, 2016

“When the Institute began to look for a new platform, it became apparent that a relational database was not the best solution to effectively manage and deliver our XML content.”–Martin Lambe.

The Irish Tax Institute is the leading representative and educational body for Ireland’s AITI Chartered Tax Advisers (CTA) and is the only professional body exclusively dedicated to tax. One of their services is TaxFind – Ireland’s leading online tax research database – offering search across 200,000 pages of tax content, over 8,000 pages of Irish tax legislation, Irish Tax Institute tax technical papers, over 25 leading tax commentary publications, and thousands of Irish Tax Review articles.

I did a joint interview with Martin Lambe, CEO of the Irish Tax Institute, and Sam Herbert, Client Services Director at 67 Bricks.
The main topics of the interview are the data challenges they currently face and the implementation of TaxFind using MarkLogic.

RVZ

Q1. What are the main data challenges you currently have at the Irish Tax Institute?

Martin Lambe: The Irish Tax Institute moved its publication workflow to an XML-based process in 2009 and we have a large archive of valuable tax information contained in quite complex XML format. The main challenge was to find a solution that could store the repository of data (XML and other formats) and provide a simple search interface that directs users very quickly to the most relevant result. The “findability” of relevant content is crucial.

Q2. What is the TaxFind research database?

Martin Lambe: The Irish Tax Institute is the main provider of tax information in Ireland and TaxFind is the Institute’s online tax research database. TaxFind offers subscribers access to Irish tax legislation and guidance that includes tax technical papers from seminars and conferences, as well as over 30 tax commentary publications. It is used by thousands of CTAs in Ireland on a daily basis to assist in their tax research.

Q3. Who are the members that benefit from this TaxFind research database?

Martin Lambe: TaxFind serves the Chartered Tax Adviser (CTA) community in Ireland and other tax professionals such as those in the global accounting firms.

Q4. Why did you discard your previous implementation with a relational database system?

Martin Lambe: The previous database was literally creaking at the seams. Users were increasingly frustrated with difficulties accessing the database on different browsers and the old platform did not support mobile devices or tablets. When the Institute began to look for a new platform, it became apparent that a relational database was not the best solution to effectively manage and deliver our XML content. XML content stored in a NoSQL document database is indexed specifically for the search engine and this means the performance of our search engine and the relevancy of results is dramatically improved.

Q5. Why did you select MarkLogic’s NoSQL database platform?

Sam Herbert: MarkLogic scales to support fast querying across large amounts of data, it handles XML content very well (and most of the tax data is either in XML or in HTML that can be treated as XHTML), and it has good searching. It is also a good environment to develop in – it has excellent documentation and good tooling. It helps that it uses XQuery as one of its query languages, rather than a proprietary database-specific language.

Q6. Is SQL still important for you?

Sam Herbert: I don’t think it’s true to say that any particular type of technology is “important” to ITI – it’s all about how it can benefit users. From a 67 Bricks perspective, we work with relational databases, NoSQL databases, and graph databases depending on what shape the data is and what the needs are around querying it.

Q7. Why not choose an open source solution?

Sam Herbert: We’re using Open Source components in other parts of the system, and we’re keen on using Open Source where possible. However, for the data store, there aren’t any Open Source alternatives that have the combination of good scalability, good support for XML content, a standard query language, and powerful searching that we were looking for.

Q8. Can you tell us a bit about the architecture of the new implementation of the TaxFind research database?

Sam Herbert: There are three major components:

– a frontend display and service layer written using the Play framework
– the MarkLogic data store
– a semantic enrichment component using Semaphore SmartLogic and the ITI taxonomy

The Play component is what users interact with – both for human users coming to the web site, and automated use of the web services. The bulk of the data retrieval and manipulation is done via a set of XQuery functions defined within the MarkLogic store. When new data is uploaded, it is processed within the Play code, enriched using Semaphore SmartLogic, and then stored in MarkLogic.
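As a rough illustration of the ingest-and-search path described above, here is a minimal Java sketch against the MarkLogic Java Client API. The host, port, credentials, document URI, sample XML, and query string are placeholders, the Semaphore enrichment step is omitted, and the exact client calls should be verified against the MarkLogic version in use; the real TaxFind retrieval logic lives in XQuery inside MarkLogic and in the Play layer.

```java
import com.marklogic.client.DatabaseClient;
import com.marklogic.client.DatabaseClientFactory;
import com.marklogic.client.DatabaseClientFactory.DigestAuthContext;
import com.marklogic.client.document.XMLDocumentManager;
import com.marklogic.client.io.SearchHandle;
import com.marklogic.client.io.StringHandle;
import com.marklogic.client.query.MatchDocumentSummary;
import com.marklogic.client.query.QueryManager;
import com.marklogic.client.query.StringQueryDefinition;

// Indicative only: store one XML fragment in MarkLogic and run a simple
// full-text search over the loaded content. All connection details and
// document URIs are placeholders, not the real TaxFind configuration.
public class TaxFindStyleIngest {
    public static void main(String[] args) {
        DatabaseClient client = DatabaseClientFactory.newClient(
                "localhost", 8000, new DigestAuthContext("user", "password"));

        // Write an XML document; MarkLogic indexes it on ingest.
        XMLDocumentManager docMgr = client.newXMLDocumentManager();
        String xml = "<section><title>Capital Gains Tax</title>"
                   + "<para>Relief may apply on disposal of a business.</para></section>";
        docMgr.write("/legislation/sample/s597.xml", new StringHandle(xml));

        // Full-text search across everything that has been loaded.
        QueryManager queryMgr = client.newQueryManager();
        StringQueryDefinition qdef = queryMgr.newStringDefinition();
        qdef.setCriteria("capital gains relief");
        SearchHandle results = queryMgr.search(qdef, new SearchHandle());
        for (MatchDocumentSummary match : results.getMatchResults()) {
            System.out.println(match.getUri());
        }

        client.release();
    }
}
```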

Q9. How do you manage to integrate the Irish Tax Institute’s tax data, bringing together in excess of 300,000 pages of tax content including archive material in Word, PDF, XML and HTML?

Sam Herbert: The most complex part of the data is the XML content. These are very large XML files representing legislation, books, and other tax materials, that are inter-related in complex ways, and with a lot of deeply nested hierarchy. An important part of managing the data was splitting these into appropriately sized fragments, and then identifying the linking between different files – for example a piece of legislation will refer to other legislation, and commentary will refer to that legislation, and a new piece of legislation may supersede an earlier piece.

The non-XML content is larger in volume, but each individual document is smaller and is structurally simpler. Managing this content was largely a matter of loading it in and letting it be indexed.

Q10. How do you capture and digitize information in various formats and make it searchable?

Sam Herbert: Making it searchable is straightforward – it’s making it searchable in ways that support the expectations of the users that’s much more difficult.

A good search experience requires both subject matter expertise and good automated tests.

The basic search is using MarkLogic’s full text search. The next step was to work with tax experts within and outside the ITI to identify appropriate facets within the content with which to group the results – based on a combination of what the user requirements were and what was supported by the data.

There were additional complexities around weighting the search results to make the “best” results come at the top in as many circumstances as possible – for example, weighting terms within headings, weighting more recent content, weighting content based on its category so legislation is more important than commentary, and weighting content higher based on its popularity. The semantic enrichment based on tax terms from the ITI taxonomy also enhances the searching.
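The sketch below is a purely conceptual illustration of how several such relevance signals might be folded into one score. In the actual system this weighting is configured in the search engine’s options rather than computed in application code, and every constant here is invented.

```java
// Conceptual only: combining the relevance signals described above
// (heading matches, recency, content category, popularity) into one score.
// The weights are arbitrary and exist only to show the shape of the idea.
public class RelevanceSketch {

    static double score(double textScore, boolean matchInHeading,
                        int yearsOld, String category, long popularity) {
        double s = textScore;
        if (matchInHeading) s *= 1.5;                 // boost matches in headings
        s *= Math.pow(0.95, yearsOld);                // prefer more recent content
        if ("legislation".equals(category)) s *= 1.3; // legislation over commentary
        s += Math.log1p(popularity) * 0.1;            // nudge popular documents up
        return s;
    }

    public static void main(String[] args) {
        System.out.println(score(10.0, true, 2, "legislation", 500));
        System.out.println(score(10.0, false, 8, "commentary", 20));
    }
}
```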

Q11. How do you ensure that this solution is scalable?

Sam Herbert: The solution is deployed to a load-balanced cluster using Amazon Web Services. The Play frontend is purely stateless REST. This means that we can scale to support more users easily by spinning up more servers – and using AWS makes this easy. Overall, using AWS has been a big win for us, in terms of being able to get servers running easily, being able to increase and decrease things like their memory size easily, and the various ancillary services it provides like DNS and load balancing. By making sure we can scale to support additional data, we can use MarkLogic effectively.

————-

Martin Lambe is Chief Executive of the Irish Tax Institute. His previous role within the Institute was that of Director of Finance.

Sam Herbert is Client Services Director at 67 Bricks, a company that works with information owners (particularly publishers) who want to enrich their content to make it more structured, granular, flexible and reusable.
67 Bricks utilises its deep understanding of the content enrichment challenge to help publishers develop systems and capabilities to increase the value of their content. With expertise in XML, business analysis, semantic tagging and software development, 67 Bricks works closely with its clients to develop and implement content enrichment capabilities and enriched content digital products.

————-
Resources

Irish Tax Institute

TaxFind

67 Bricks

MarkLogic

Related Posts

The rise of immutable data stores. By Alan Morrison, Senior Manager, PwC Center for Technology and Innovation (CTI). ODBMS.org

Unthink: Moving Beyond the Constraints of Relational Databases. By Tom McGrath, MarkLogic. ODBMS.org, March 14, 2016

MarkLogic Case Study: Royal Society of Chemistry. ODBMS.org

On making information accessible. Interview with David Leeming. ODBMS Industry Watch, July 30, 2014

Follow us on Twitter: @odbmsorg

##

On Hadoop and Big Data. Interview with John Leach
July 13, 2015

“One common struggle for data-driven enterprises is managing unnecessarily complicated data workflows with bloated ETL pipelines and a lack of native system integration.”– John Leach

I have interviewed John Leach, CTO & Cofounder of Splice Machine. The main topics of the interview are Hadoop, Big Data integration, and what Splice Machine has to offer in this space. Monte Zweben, CEO of Splice Machine, also contributed to the interview.

RVZ

Q1. What are the Top Ten Pitfalls to Avoid in a SQL-on-Hadoop Implementation?

John Leach, Monte Zweben:
1. Individual record lookups. Most SQL-on-Hadoop engines are designed for full table scans in analytics, but tend to be too slow for the individual record lookups and range scans used by operational applications.
2. Dirty Data. Dirty data is a problem for any system, but it is compounded in Big Data, often resulting in bad reports and delays to reload an entire data set.
3. Sharding. It can be difficult to know which key to distribute data on and what the right shard size should be. Getting this wrong results in slow queries, especially for large joins or aggregations.
4. Hotspotting. This happens when data becomes too concentrated in a few nodes, especially with time-series data. The impact is slow queries and poor parallelization (one common remedy, key salting, is sketched after this list).
5. SQL coverage. Limited SQL dialects will make it so you can’t run queries to meet business needs. You’ll want to make sure you do your homework. Compile the list of toughest queries and test.
6. Concurrency. Low concurrency can result in the inability to power real-time apps, handle many users, support many input sources, and deliver reports as updates happen.
7. Columnar. Not all columnar solutions are created equal. Besides columnar storage, there are many other optimizations, such as vectorization and run-length encoding, that can have a big impact on analytic performance. If your OLAP queries run slowly – common with large joins and aggregations – productivity suffers, and queries may take minutes or hours instead of seconds. The flip side is choosing columnar storage when what you really need is concurrency and real-time updates.
8. Node Sizing. Do your homework and profile your workload. Choosing the wrong node size (e.g., CPU cores, memory) can negatively impact price/performance and create performance bottlenecks.
9. Brittle ETL on Hadoop. With many SQL-on-Hadoop solutions being unable to provide update or delete capabilities without a full data reload, this can cause a very brittle ETL that will require restarting your ETL pipeline because of errors or data quality issues. The result is a missed ETL window and delayed reports to business users.
10. Cost-Based Optimizer. A cost-based optimizer improves performance by selecting the right join strategy, the right index, and the right ordering. Some SQL-on-Hadoop engines have no cost-based optimizer or relatively immature ones that can result in poor performance and poor productivity, as well as manual tuning by DBAs.
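One widely used remedy for pitfall 4 (hotspotting) is to salt monotonically increasing row keys so that writes spread across region servers instead of piling onto one. The sketch below uses the standard HBase client classes as a generic illustration; the bucket count and key layout are assumptions to be tuned per workload, and nothing here is specific to Splice Machine or any other engine named in the interview.

```java
import org.apache.hadoop.hbase.client.Put;
import org.apache.hadoop.hbase.util.Bytes;

// Illustrative remedy for hotspotting: prefix monotonically increasing row
// keys (e.g. timestamps) with a salt bucket so writes spread across region
// servers rather than concentrating on the "latest" region.
public class SaltedKeySketch {
    static final int BUCKETS = 16; // assumed bucket count; tune per cluster

    static byte[] saltedKey(String sensorId, long timestamp) {
        int bucket = Math.abs(sensorId.hashCode()) % BUCKETS;
        String key = String.format("%02d|%s|%d", bucket, sensorId, timestamp);
        return Bytes.toBytes(key);
    }

    public static void main(String[] args) {
        long now = System.currentTimeMillis();
        Put put = new Put(saltedKey("sensor-42", now));
        put.addColumn(Bytes.toBytes("d"), Bytes.toBytes("temp"), Bytes.toBytes("21.5"));
        System.out.println("row key: " + Bytes.toString(put.getRow()));
        // The Put would then be written with Table.put(put) on an open connection.
    }
}
```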

Q2. In your experience, what are the most common problems in Big Data integration?

John Leach, Monte Zweben: Providing users access to data in a fashion they can understand and at the moment they need it, while ensuring quality and security, can be incredibly challenging.

The volume and velocity of data that businesses are churning out, along with the variety of different sources, can pose many issues.

One common struggle for data-driven enterprises is managing unnecessarily complicated data workflows with bloated ETL pipelines and a lack of native system integration. Businesses may also find their skill sets, workload, and budgets over-stretched by the need to manage terabytes or petabytes of structured and unstructured data in a way that delivers genuine value to business users.

When data is siloed and there is no solution put into place, businesses can’t access the real-time insights they need to make the best decisions for their business. Performance goes down, headaches abound and cost goes way up, all in the effort to manage the data. That’s why a Big Data integration solution is a prerequisite for getting the best performance and the most real-time insights, at the lowest cost.

Q3. What are the capabilities of Hadoop beyond data storage?

John Leach, Monte Zweben: Hadoop has a very broad range of capabilities and tools:

Oozie for workflow
Pig for scripting
Mahout or SparkML for machine learning
Kafka and Storm for streaming
Flume and Sqoop for integration
Hive, Impala, Spark, and Drill for SQL analytic querying
HBase for NoSQL
Splice Machine for operational, transactional RDBMS

Q4. What programming skills are required to handle application development around Big Data platforms like Hadoop?

John Leach, Monte Zweben: To handle application development on Hadoop, developers can choose between raw Hadoop and SQL-on-Hadoop. When going the SQL route, very few new skills are required and developers can open connections to an RDBMS on Hadoop just like they used to do on Oracle, DB2, SQL Server, or Teradata. Raw Hadoop application developers should know their way around the core components of the Hadoop stack – such as HDFS, MapReduce, Kafka, Storm, Oozie, Hive, Pig, HBase, and YARN. They should also be proficient in Java.
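To make the “just like Oracle or DB2” point concrete, here is a minimal JDBC sketch. The connection URL follows the Derby-style network form that Splice Machine’s lineage suggests, but the exact URL, port, credentials, table, and column names are assumptions to be checked against the vendor documentation.

```java
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.PreparedStatement;
import java.sql.ResultSet;

// Minimal sketch: querying a SQL-on-Hadoop RDBMS over plain JDBC, exactly as
// one would query Oracle or DB2. The URL, credentials, and schema are
// placeholders; the driver jar must be on the classpath.
public class SqlOnHadoopJdbcSketch {
    public static void main(String[] args) throws Exception {
        // Assumed Derby-style network URL; confirm the exact form for your engine.
        String url = "jdbc:splice://localhost:1527/splicedb;user=app;password=app";
        try (Connection conn = DriverManager.getConnection(url);
             PreparedStatement ps = conn.prepareStatement(
                     "SELECT customer_id, total FROM orders WHERE total > ?")) {
            ps.setBigDecimal(1, new java.math.BigDecimal("100.00"));
            try (ResultSet rs = ps.executeQuery()) {
                while (rs.next()) {
                    System.out.println(rs.getLong(1) + " " + rs.getBigDecimal(2));
                }
            }
        }
    }
}
```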

Q5. What are the current challenges for real-time application deployment on Hadoop?

John Leach, Monte Zweben: When we talk about real-time at Splice Machine, we’re focused on applications that require not only real-time responses to queries, but also real-time database updates from a variety of data sources. The former is not all that uncommon on Hadoop; the latter is nearly impossible for most Hadoop-based systems.

Deploying real-time applications on Hadoop is really a function of moving Hadoop beyond its batch processing roots to be able to handle real-time database updates with high concurrency and transactional integrity. We harness HBase along with a lockless snapshot isolation design to provide full ACID transactions across rows and tables.

This technology enables Splice Machine to execute the high concurrency of transactions required by real-time applications.

Q6. What is special about Splice Machine auto-sharding replication and failover technology?

John Leach, Monte Zweben: As part of its auto-sharding, HBase horizontally partitions or splits each table into smaller chunks or shards that are distributed across multiple servers. Using the inherent failover and replication capabilities of HBase and Hadoop, Splice Machine can support applications that demand high availability.

HBase co-processors are used to embed Splice Machine in each distributed HBase region (i.e., data shard). This enables Splice Machine to achieve massive parallelization by pushing the computation down to each distributed data shard without any overhead of MapReduce.

Q7. How difficult is it for customers to migrate from legacy databases to Splice Machine?

John Leach, Monte Zweben: Splice Machine offers a variety of services to help businesses efficiently deploy the Splice Machine database and derive maximum value from their investment. These services include both implementation consulting and educational offerings delivered by our expert team.

Splice Machine has designed a Safe Journey program to significantly ease the effort and risk for companies migrating to a Splice Machine database. The Safe Journey program includes a proven methodology that helps choose the right workloads to migrate, implements risk-mitigation best practices, and includes commercial tools that automate most of the PL/SQL conversion process.

This is not to suggest that all legacy databases will convert to a Hadoop RDBMS.
The best candidates will typically have over 1TB of data, which often leads to cost and scaling issues in legacy databases.

Q8. You have recently announced partnership with Talend, mrc (michaels, ross & cole ltd.) and RedPoint Global. Why Talend, mrc, and RedPoint Global? What is the strategic meaning of these partnerships for Splice Machine?

John Leach, Monte Zweben: Our uptick in recent partnerships demonstrates the tremendous progress our team has made over the past year. We have been working relentlessly to develop the Splice Machine Hadoop RDBMS into a fully enterprise-ready database that can replace legacy database systems.

The demand for programming talent to handle application development is growing faster than the supply of skilled talent, especially around newer platforms like Hadoop. We partnered with mrc to give businesses a solution that can speed real-time application deployment on Hadoop with the staff and tools they currently have, while also offering future-proof applications over a database that scales to meet increasing data demands.

We partnered with Talend to bring our customers the benefit of two different approaches for managing data integration affordably and at scale. Talend’s rich capabilities, including a drag-and-drop user interface and an adaptable platform, allow for increased productivity and streamlined testing, and therefore faster deployment of web, mobile, OLTP or Internet of Things applications.

And finally, we integrated and certified our Hadoop RDBMS on RedPoint’s Convergent Marketing Platform™ to create a new breed of solution for marketers. With cost-efficient database scale-out and real-time cross-channel execution, the solution enables enterprises to future-proof their marketing technology investment through affordable access to all their data (social, mobile, click streams, website behaviors, etc.) across a proliferating and ever-changing list of channels. Furthermore, it complements any existing Hadoop deployment, including those on the Cloudera, MapR and Hortonworks distributions.

Q9. How is Splice Machine working with Hadoop distribution partners –such as MapR, Hortonworks and Cloudera?

John Leach, Monte Zweben: Since Splice Machine does not modify HBase, it can be used with any standard Hadoop distribution that includes HBase, including Cloudera, MapR and Hortonworks. Splice Machine enables enterprises using these three companies to tap into real-time updates with transactional integrity, an important feature for companies looking to become real-time, data-driven businesses.

In 2013, Splice Machine partnered with MapR to enable companies to use the MapR distribution for Hadoop to build their real time, SQL-on-Hadoop applications. In 2014, we joined the Cloudera Connect Partner Program, after certifying on CDH 5. We are working closely with Cloudera to maximize the potential of its full suite of Hadoop-powered software and our unique approach to real-time Hadoop.

That same year, we joined Hortonworks Technology Partner program. This enabled our users to harness innovations in management, provisioning and security for HDP deployments. For HDP users, Splice Machine enables them to build applications that use ANSI-standard SQL and support real-time updates with transactional integrity, allowing Hadoop to be used in both OLTP and OLAP applications.

Earlier this year, we were excited to achieve Hortonworks® Data Platform (HDP™) Certification. With the HDP certification, our customers can leverage the pre-built and validated integrations between leading enterprise technologies and the Hortonworks Data Platform, the industry’s only 100-percent open source Hadoop distribution, to simplify and accelerate their Splice Machine and Hadoop deployments.

Q10 What are the challenges of running online transaction processing on Hadoop?

John Leach, Monte Zweben: With its heritage as a batch processing system, Hadoop does not provide the transaction support required by online transaction processing. Transaction support can be tricky enough to implement for shared-disk RDBMSs such as Oracle, but it becomes far more difficult to implement in distributed environments such as Hadoop. A distributed transactional model requires coordination across the cluster without introducing too much overhead, while simultaneously providing high performance for a high concurrency of small reads and writes, high-speed ingest, and massive bulk loads. We prove this is possible by running the TPC-C benchmark at scale.

Splice Machine met those requirements by using distributed snapshot isolation, a Multi-Version Concurrency Control model that delivers lockless, high-concurrency transactional support. Splice Machine extended research from Google’s Percolator project, Yahoo Lab’s OMID project, and the University of Waterloo’s HBaseSI project to develop its own patent-pending, distributed transactions.

 

———————-
John Leach – CTO & Cofounder Splice Machine
With over 15 years of software experience under his belt, John’s expertise in analytics and BI drives his role as Chief Technology Officer. Prior to Splice Machine, John founded Incite Retail in June 2008 and led the company’s strategy and development efforts. At Incite Retail, he built custom Big Data systems (leveraging HBase and Hadoop) for Fortune 500 companies.
Prior to Incite Retail, he ran the business intelligence practice at Blue Martini Software and built strategic partnerships with integration partners. John was a key subject matter expert for Blue Martini Software in many strategic implementations across the world. His focus at Blue Martini was helping clients incorporate decision support knowledge into their current business processes utilizing advanced algorithms and machine learning.
John received dual bachelor’s degrees in biomedical and mechanical engineering from Washington University in Saint Louis. Leach currently is the organizer for the Saint Louis Hadoop Users Group and is active in the Washington University Elliot Society.

Monte Zweben – CEO & Cofounder Splice Machine
A technology industry veteran, Monte’s early career was spent with the NASA Ames Research Center as the Deputy Chief of the Artificial Intelligence Branch, where he won the prestigious Space Act Award for his work on the Space Shuttle program.
Monte then founded and was the Chairman and CEO of Red Pepper Software, a leading supply chain optimization company, which merged in 1996 with PeopleSoft, where he was VP and General Manager, Manufacturing Business Unit.

In 1998, Monte was the founder and CEO of Blue Martini Software – the leader in e-commerce and multi-channel systems for retailers. Blue Martini went public on NASDAQ in one of the most successful IPOs of 2000, and is now part of JDA.
Following Blue Martini, he was the chairman of SeeSaw Networks, a digital, place-based media company. Monte is also the co-author of Intelligent Scheduling and has published articles in the Harvard Business Review and various computer science journals and conference proceedings.

Zweben currently serves on the Board of Directors of Rocket Fuel Inc. as well as the Dean’s Advisory Board for Carnegie-Mellon’s School of Computer Science.

 

Resources

– Splice Machine resource page, ODBMS.org

Related Posts

– Common misconceptions about SQL on Hadoop. By Cynthia M. Saracco, ODBMS.org, July 2015

– SQL over Hadoop: Performance isn’t everything… By Simon Harris, ODBMS.org, March 2015

– Archiving Everything with Hadoop. By Mark Cusack, ODBMS.org. December 2014.

– On Hadoop RDBMS. Interview with Monte Zweben. ODBMS Industry Watch, November 2, 2014

– AsterixDB: Better than Hadoop? Interview with Mike Carey, ODBMS Industry Watch, October 22, 2014

 

Follow ODBMS.org on Twitter: @odbmsorg

##

 

Data for the Common Good. Interview with Andrea Powell
June 9, 2015

“CABI has a proud history (we were founded in 1910) of serving the needs of agricultural researchers around the world, and it is fascinating to see how technology can now help to achieve our development mission. We can have much greater impact at scale these days on the lives of poor farmers around the world (on whom we are all dependent for our food) by using modern technology and by putting knowledge into the hands of those who need it the most.”–Andrea Powell

I have interviewed Andrea Powell, Chief Information Officer at CABI.
The main topic of the interview is how to use data and knowledge for the Common Good, specifically by solving problems in agriculture and the environment.

RVZ

Q1. What is the main mission of CABI?

Andrea Powell: CABI’s mission is to improve people’s lives and livelihoods by solving problems in agriculture and the environment.
CABI is a not-for-profit, intergovernmental organisation with over 500 staff based in 17 offices around the world. We focus primarily on plant health issues, helping smallholder farmers to lose less of what they grow and therefore to increase their yields and their incomes.

Q2. How effective is scientific publishing in helping the developing world solving agricultural problems?

Andrea Powell: Our role is to bridge the gap between research and practice.
Traditional scientific journals serve a number of purposes in the scholarly communication landscape, but they are often inaccessible or inappropriate for solving the problems of farmers in the developing world. While there are many excellent initiatives which provide free or very low-cost access to the research literature in these countries, what is often more effective is working with local partners to develop and implement local solutions which draw on and build upon that body of research.
Publishers have pioneered innovative uses of technology, such as mobile phones, to ensure that the right information is delivered to the right person in the right format.
This can only be done if the underlying information is properly categorised, indexed and stored, something that publishers have done for many decades, if not centuries. Increasingly we are able to extract extra value from original research content by text and data mining and by adding extra semantic concepts so that we can solve specific problems.

Q3. What are the typical real-world problems that you are trying to solve? Could you give us some examples of your donor-funded development programs?

Andrea Powell: In our Plantwise programme, we are working hard to reduce the crop losses that happen due to the effects of plant pests and diseases. Farmers can typically lose up to 40% of their crop in this way, so achieving just a 1% reduction in such losses could feed 25 million more hungry mouths around the world. Another initiative, called mNutrition, aims to deliver practical advice to farming families in the developing world about how to grow more nutritionally valuable crops, and is aimed at reducing child malnutrition and stunting.

Q4. How do you measure your impact and success?

Andrea Powell: We have a strong focus on Monitoring and Evaluation, and for each of our projects we include a “Theory of Change” which allows us to measure and monitor the impact of the work we are doing. In some cases, our donors carry out their own assessments of our projects and require us to demonstrate value for money in measurable ways.

Q5. What are the main challenges you are currently facing for ensuring CABI’s products and services are fit for purpose in the digital age?

Andrea Powell: The challenges vary considerably depending on the type of customer or beneficiary.
In our developed world markets, we already generate some 90% of our income from digital products, so the challenge there is keeping our products and platforms up-to-date and in tune with the way modern researchers and practitioners interact with digital content. In the developing world, the focus is much more on the use of mobile phone technology, so transforming our content into a format that makes it easy and cheap to deliver via this medium is a key challenge. Often this can take the form of a simple text message which needs to be translated into multiple languages and made highly relevant for the recipient.

Q6. You have one of the world’s largest agricultural database that sits in a RDBMS, and you also have info silos around the company. How do you pull all of these information together?

Andrea Powell: At the moment, with some difficulty! We do use APIs to enable us to consume content from a variety of sources in a single product and to render that content to our customers using a highly flexible Web Content Management System. However, we are in the process of transforming our current technology stack and replacing some of our Relational Databases with MarkLogic, to give us more flexibility and scalability. We are very excited about the potential this new approach offers.

Q7. How do you represent and model all of this knowledge? Could you give us an idea of how the data management part for your company is designed and implemented?

Andrea Powell: We have a highly structured taxonomy that enables us to classify and categorise all of our information in a consistent and meaningful way, and we have recently implemented a semantic enrichment toolkit, TEMIS Luxid® to make this process even more efficient and automated. We are also planning to build a Knowledge Graph based on linked open data, which will allow us to define our domain even more richly and link our information assets (and those of other content producers) by defining the relationships between different concepts.

Q8. What kind of predictive analytics do you use or plan to use?

Andrea Powell: We are very excited by the prospect of being able to do predictive analysis on the spread of particular crop diseases or on the impact of invasive species. We have had some early investigations into how we can use semantics to achieve this; e.g. if pest A attacks crop B in country C, what is the likelihood of it attacking crop D in country E which has the same climate and soil types as country C?

Q9. How do you intend to implement such predictive analytics?

Andrea Powell: We plan to deploy a combination of expert subject knowledge, data mining techniques and clever programming!

Q10. What are future strategic developments?

Andrea Powell: Increasingly we are developing knowledge-based solutions that focus on solving specific problems and on fitting into user workflows, rather than creating large databases of content with no added analysis or insight. Mobile will become the primary delivery channel and we will also be seeking to use mobile technology to gather user data for further analysis and product development.

Qx Anything else you wish to add?

Andrea Powell: CABI has a proud history (we were founded in 1910) of serving the needs of agricultural researchers around the world, and it is fascinating to see how technology can now help to achieve our development mission. We can have much greater impact at scale these days on the lives of poor farmers around the world (on whom we are all dependent for our food) by using modern technology and by putting knowledge into the hands of those who need it the most.

————–
ANDREA POWELL, Chief Information Officer, CABI, United Kingdom.
I am a linguist by training (French and Russian) with an MA from Cambridge University but have worked in the information industry since graduating in 1988. After two and a half years with Reuters I joined CABI in the Marketing Department in 1991 and have worked here ever since. Since January 2015 I have held the position of Chief Information Officer, leading an integrated team of content specialists and technologists to ensure that all CABI’s digital and print publications are produced on time and to the quality standards expected by our customers worldwide. I am responsible for future strategic development, for overseeing the development of our technical infrastructure and data architecture, and for ensuring that appropriate information & communication technologies are implemented in support of CABI’s agricultural development programmes around the world.

Resources

– More information about how CABI is using MarkLogic can be found in this video, recorded at MarkLogic World San Francisco, April 2015.

Related Posts

Big Data for Good. ODBMS Industry Watch June 4, 2012. A distinguished panel of experts discuss how Big Data can be used to create Social Capital.

Follow ODBMS.org on Twitter: @odbmsorg

##

On Hadoop RDBMS. Interview with Monte Zweben.
November 2, 2014

“HBase and Hadoop are the only technologies proven to scale to dozens of petabytes on commodity servers, currently being used by companies such as Facebook, Twitter, Adobe and Salesforce.com.”–Monte Zweben.

Is it possible to turn Hadoop into a RDBMS? On this topic, I have interviewed Monte Zweben, Co-Founder and Chief Executive Officer of Splice Machine.

RVZ

Q1. What are the main challenges of applications and operational analytics that support real-time, interactive queries on data updated in real-time for Big Data?

Monte Zweben: Let’s break down “real-time, interactive queries on data updated in real-time for Big Data”. “Real-time, interactive queries” means that results need to be returned in milliseconds to a few seconds.
For “Data updated in real-time” to happen, changes in data should be reflected in milliseconds. “Big Data” is often defined as dramatically increased volume, velocity, and variety of data. Of these three attributes, data volume typically dominates, because unlike the other attributes, its growth is virtually unbounded.

Traditional RDBMSs like MySQL or Oracle can support real-time, interactive queries on data updated in real-time, but they struggle on handling Big Data. They can only scale up on larger servers that can cost hundreds of thousands, if not millions of dollars per server.

Big Data technologies such as Hadoop can easily handle Big Data data volumes with their ability to scale-out on commodity hardware. However, with their batch analytics heritage, they often struggle to provide real-time, interactive queries. They also lack ACID transactions to support data updated in real time.

So, real-time applications and operational analytics had to choose between real-time interactive queries on data updated in real-time, or Big Data volumes. With Splice Machine, these applications can have the best of both worlds: real-time interactive queries, the reliability of real-time updates on ACID transactions, and the ability to handle Big Data volumes with a 10x price/performance improvement over traditional RDBMSs.

Q2. You suggested that companies should replace their traditional RDBMS systems. Why and when? Do you really think this is always possible? What about legacy systems?

Monte Zweben: Companies should consider replacing their traditional RDBMSs when they experience significant cost or scaling issues. Our informal surveys of customers indicate that up to half of traditional RDBMSs experience cost or scaling issues. The biggest barrier to migrating from a traditional RDBMS to a new database like Splice Machine is converting custom stored procedure (e.g., PL/SQL code). Operational analytics often have limited custom stored procedure code, so the migration process is generally straightforward.

Operational applications typically have thousands of lines of custom stored procedure code, but in extreme cases it can run into hundreds of thousands to millions of lines of code. There are commercially supported tools that will convert from PL/SQL to the Java needed for Splice Machine. We have typically seen them convert with 70-95% accuracy, but it will obviously depend on the complexity of the original code. Financially, migration makes sense for many companies because of the ongoing 10x price/performance improvement, but there are cases when it does not make sense because converting custom code is too expensive.

Q3. Is scale-out the solution to Big Data at scale? Why?

Monte Zweben: Scale-out is definitely the critical technology to making Big Data work at scale. Scale-out leverages inexpensive, commodity hardware to parallelize queries to easily achieve a 10x price/performance improvement over existing database technologies.

Q4. You have announced your real-time relational database management system. What is special about Splice Machine’s Hadoop RDBMS?

Monte Zweben: We are the only Hadoop RDBMS. There are obviously many RDBMSs, but we are the only one with scale-out technology from Hadoop. Hadoop is the only scale-out technology proven to scale to dozens of petabytes on commodity hardware at companies like Facebook. There are other SQL-on-Hadoop technologies, but none of them can support real-time ACID transactions.

Q5. Hadoop-connected SQL databases do not eliminate “silos”. How do you handle this?

Monte Zweben: We are not a database that has a connector to Hadoop. We are tightly integrated into Hadoop, using HBase and HDFS as our storage layer.

Q6. How did you manage to move Hadoop beyond its batch analytics heritage to power operational applications and real-time analytics?

Monte Zweben: At its core, Hadoop is a distributed file system (HDFS) where data cannot be updated or deleted. If you want to update or delete anything, you have to reload all the data (i.e., batch load). As a file system, it has very limited ability to seek specific data; instead, you use Java MapReduce programs to scan all of the data to find the data you need. It can easily take hours or even days for queries to return data (i.e., batch analytics). There is no way you could support a real-time application on top of HDFS and MapReduce.

By using HBase (a real-time key value store on top of HDFS), Splice Machine provides a full RDBMS on top of Hadoop.
You can now get real-time, interactive queries on real-time updated data on Hadoop, necessary to support operational applications and analytics.
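For contrast with the batch path just described, here is a minimal sketch of the keyed, millisecond-level lookup that HBase adds on top of HDFS, using the standard HBase client API. The table, column family, and row key are placeholders; Splice Machine itself is accessed through SQL, so this only illustrates the behaviour of the underlying storage layer.

```java
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.client.Connection;
import org.apache.hadoop.hbase.client.ConnectionFactory;
import org.apache.hadoop.hbase.client.Get;
import org.apache.hadoop.hbase.client.Result;
import org.apache.hadoop.hbase.client.Table;
import org.apache.hadoop.hbase.util.Bytes;

// Sketch of the real-time, single-row access HBase layers on top of HDFS:
// a keyed Get returns quickly instead of scanning files with MapReduce.
public class HBaseLookupSketch {
    public static void main(String[] args) throws Exception {
        Configuration conf = HBaseConfiguration.create(); // reads hbase-site.xml
        try (Connection conn = ConnectionFactory.createConnection(conf);
             Table table = conn.getTable(TableName.valueOf("orders"))) {
            Get get = new Get(Bytes.toBytes("order-12345"));
            Result result = table.get(get);
            byte[] status = result.getValue(Bytes.toBytes("d"), Bytes.toBytes("status"));
            System.out.println(status == null ? "not found" : Bytes.toString(status));
        }
    }
}
```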

Q7. How do you use Apache Derby™ and Apache HBase™/Hadoop?

Monte Zweben: Splice Machine marries two proven technology stacks: Apache Derby for ANSI SQL and HBase/Hadoop for proven scale out technology. With over 15 years of development, Apache Derby is a Java-based SQL database. Splice Machine chose Derby because it is a full-featured ANSI SQL database, lightweight (<3 MB), and easy to embed into the HBase/Hadoop stack.

HBase and Hadoop are the only technologies proven to scale to dozens of petabytes on commodity servers, currently being used by companies such as Facebook, Twitter, Adobe and Salesforce.com. Splice Machine chose HBase and Hadoop because of their proven auto-sharding, replication, and failover technology.

Q8. Why did you replace the storage engine in Apache Derby with HBase?

Monte Zweben: Apache Derby has a native shared-disk (i.e., non-distributed) storage layer. We replaced that storage layer with HBase to provide an auto-sharded, distributed computing storage layer.

Q9. Why did you redesign the planner, optimizer, and executor of Apache Derby?

Monte Zweben: We redesigned the planner, optimizer, and executor of Derby because Splice Machine has a distributed computing infrastructure instead of its old shared-disk storage. Distributed computing requires a functional re-architecting because computation must be distributed to where the data is, instead of moving the data to the computation.

Q10. What are the main benefits for developers and database architects who build applications?

Monte Zweben: There are two main benefits to Splice Machine for developers and database architects. First, no longer is data scaling a barrier to using massive amounts of data in an application; you no longer need to prune data or rewrite applications to do unnatural acts like manual sharding. Second, you can enjoy the scaling with all the critical features of an RDBMS – strong consistency, joins, secondary indexes for fast lookups, and reliable updates with transactions. Without those features, developers have to implement those functions for each application, a costly, time-consuming, and error-prone process.

———————
Monte Zweben, Co-Founder and Chief Executive Officer, Splice Machine

A technology industry veteran, Monte’s early career was spent with the NASA Ames Research Center as the Deputy Branch Chief of the Artificial Intelligence Branch, where he won the prestigious Space Act Award for his work on the Space Shuttle program. Monte then founded and was the Chairman and CEO of Red Pepper Software, a leading supply chain optimization company, which merged in 1996 with PeopleSoft, where he was VP and General Manager, Manufacturing Business Unit.

In 1998, Monte was the founder and CEO of Blue Martini Software – the leader in e-commerce and multi-channel systems for retailers. Blue Martini went public on NASDAQ in one of the most successful IPOs of 2000, and is now part of Red Prairie. Following Blue Martini, he was the chairman of SeeSaw Networks, a digital, place-based media company, and is the chairman of Clio Music, an advanced music research and development company. Monte is also the co-author of Intelligent Scheduling and has published articles in the Harvard Business Review and various computer science journals and conference proceedings.

Zweben currently serves on the Board of Directors of Rocket Fuel Inc. as well as the Dean’s Advisory Board for Carnegie-Mellon’s School of Computer Science. Monte’s involvement with CMU, which has been a long-time leader in distributed computing and Big Data research, helped inspire the original concept behind Splice Machine.

Resources

ODBMS.org: Several Free Resources on Hadoop.

Related Posts

AsterixDB: Better than Hadoop? Interview with Mike Carey. ODBMS INDUSTRY WATCH, October 22, 2014

Hadoop at Yahoo. Interview with Mithun Radhakrishnan. ODBMS INDUSTRY WATCH, September 21, 2014

On the Hadoop market. Interview with John Schroeder. ODBMS INDUSTRY WATCH, June 30, 2014

Follow ODBMS.org on Twitter: @odbmsorg

##

On making information accessible. Interview with David Leeming.
July 30, 2014

“The problem we had was that, if we tried to store the XML in traditional relational databases, we were losing the structure of our articles or having to redesign them.”–David Leeming

On making information accessible, I have interviewed David Leeming, Solutions Manager at the Royal Society of Chemistry. The Royal Society of Chemistry is the world’s leading chemistry community, advancing excellence in the chemical sciences, with 49,000 members worldwide.

RVZ

Q1. What is the main business of The Royal Society of Chemistry?

David Leeming: The Royal Society of Chemistry is the world’s leading chemistry community, advancing excellence in the chemical sciences. With over 49,000 members and a knowledge business that spans the globe, we are the UK’s professional body for chemical scientists; a not-for-profit organisation with 170 years of history and an international vision of the future. We promote, support and celebrate chemistry. We work to shape the future of the chemical sciences – for the benefit of science and humanity.

Q2. What is your role at The Royal Society of Chemistry? And what are your current projects?

David Leeming: I manage the solutions team within the Royal Society of Chemistry’s recently formed Strategic Innovation Group, whose goal is to develop and deliver digital products and platforms that support and enhance our strategy and goals. As well as delivering platforms such as our Publishing Platform and our platform for teaching and learning resources, Learn Chemistry, my team also trials new technologies to scope what the organisation could be doing next in digital delivery to connect the chemistry community with high-quality research and chemical data.

Q3. Why did you digitize and convert over 1 million pages of written word and chemical formulae into XML?

David Leeming: Our aim is to connect the world with the chemical sciences, so we need to be able to make the chemistry research that we publish available and accessible to researchers all over the world. With the age of digital technology and demands from researchers to get their hands on data in new formats, rather than the traditional hard copy journals, we undertook a project to digitise our vast library of scientific data, news and literature and make it available online. By getting the XML for our articles and data, we are able to improve the discoverability and functionality of the platforms we use to deliver our content, by using the content structure. We are also able to hold only one copy of an article and transform the XML into other format layouts without having to hold multiple copies of the article.
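As a small illustration of holding one XML copy of an article and generating other layouts from it, here is a self-contained Java/JAXP sketch that applies an XSLT stylesheet to a toy article. The article markup and stylesheet are invented placeholders, not the Royal Society of Chemistry’s actual schemas or production pipeline.

```java
import java.io.StringReader;
import java.io.StringWriter;
import javax.xml.transform.Transformer;
import javax.xml.transform.TransformerFactory;
import javax.xml.transform.stream.StreamResult;
import javax.xml.transform.stream.StreamSource;

// Sketch of "one XML copy, many layouts": apply an XSLT stylesheet to a stored
// article to produce one delivery format. Both inputs are toy placeholders.
public class ArticleTransformSketch {
    public static void main(String[] args) throws Exception {
        String article = "<article><title>A New Catalyst</title></article>";
        String toHtml =
            "<xsl:stylesheet version='1.0' xmlns:xsl='http://www.w3.org/1999/XSL/Transform'>"
          + "  <xsl:template match='/article'>"
          + "    <html><body><h1><xsl:value-of select='title'/></h1></body></html>"
          + "  </xsl:template>"
          + "</xsl:stylesheet>";

        // Compile the stylesheet and run the transform against the stored XML.
        Transformer t = TransformerFactory.newInstance()
                .newTransformer(new StreamSource(new StringReader(toHtml)));
        StringWriter out = new StringWriter();
        t.transform(new StreamSource(new StringReader(article)), new StreamResult(out));
        System.out.println(out); // HTML layout generated from the single XML source
    }
}
```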

Q4. What kind of new products and services were you expecting to develop when you converted the data in XML?

David Leeming: We can enhance and semantically enrich our content and analyse it to discover niche domains or areas of chemistry research that might benefit researchers who may not normally think to look in our journals.
For example, we have launched a number of new journals in environmental science since 2010.
On Learn Chemistry we have been able to produce mini-sites for specific initiatives such as a collection of Chemistry of Sport education resources that coincided with the 2012 Olympic Games.

Q5. What technical challenges did you encounter when trying to store and manage these XML files?

David Leeming: Before we discovered MarkLogic, the problem we had was that, if we tried to store the XML in traditional relational databases, we were losing the structure of our articles or having to redesign them.
The XML was basically stored in flat files, and we needed to build indexes from these. Keeping this up to date and versioned was difficult. Since we’ve been using MarkLogic, we have been able to store our content in its original structure within the database, giving us full version control and adding search capabilities. The initial technical difficulties we faced when switching over to MarkLogic were finding experienced developers and getting traditional database administrators to think a bit differently about how developers use the database, rather than worrying about traditional data modelling.

Q6. Specifically, how do you manage the logical associations between different types of XML content? And how do you query(update) them?

David Leeming: We have two ways of separating content. One is by product – this was our original method. We stored the XML documents for each product in their own database. We have now changed this method to store by content type. So, for instance, a journal article is stored together with other journal articles, book chapters are stored together, and so on. We keep the original XML schema and different documents have different schemas, but on loading the data we add a ‘meta’ record that is common across all data objects. This holds identifiers and describes what each data object is. This aids in the searching and querying we perform on the data to ensure the correct content is accessed and updated.

Q7. What results did you obtain in using a NoSQL database for this task?

David Leeming: Using the NoSQL database meant that we didn’t need to spend six months or so building a data model that wouldn’t really fit the structure of the documents we currently have or may have in the future. Each time we get a different XML document, there’s no need to change the underlying data model, so it is very easy to implement.

Q8. What are your plans ahead?

David Leeming: Going forward we are starting to add greater semantic enrichment to data sets to provide better discoverability across chemistry data that will enable increased collaboration among researchers.

——————————
David Leeming is the Royal Society of Chemistry’s Solutions Manager. He is an experienced leader, project manager and business analyst in the digital publishing business, with expertise in building innovative platforms and solutions. He works with a number of technologies, with specific interest in online publishing, XML, sprint methodologies and MarkLogic. He manages teams of highly skilled engineers, analysts and an outsourced development unit based in India.

Related Posts

What are the challenges for modern Data Centers? Interview with David Gorbet. ODBMS Industry Watch, March 25, 2014.

On Linked Data. Interview with John Goodwin. ODBMS Industry Watch, September 1, 2013.

Resources

2013 Gartner Magic Quadrant for Operational Database Management Systems

Inside MarkLogic Server

XML Database Benchmark: “Transaction Processing over XML (TPoX)”

Benchmarking XML Databases: New TPoX Benchmark Results Available

Follow ODBMS.org and ODBMS Industry Watch on Twitter:
@odbmsorg

##

On Column Stores. Interview with Shilpa Lawande http://www.odbms.org/blog/2014/07/interview-shilpa-lawande-2/ http://www.odbms.org/blog/2014/07/interview-shilpa-lawande-2/#comments Mon, 14 Jul 2014 07:39:29 +0000 http://www.odbms.org/blog/?p=3357

“A true columnar store is not only about the way you store data, but the engine and the optimizations that are enabled by the column store”–Shilpa Lawande.

On the subject of column stores, and the important features in the new release of the HP Vertica Analytics Platform, I have interviewed Shilpa Lawande, VP Engineering & Customer Experience at HP Vertica.

RVZ

Q1. Back in 2011 I did an interview with you [1], at the time Vertica had just been acquired by HP. What is new in the current version of Vertica?

Shilpa Lawande: We’ve come a long way since 2011 and our innovation engine is going strong!
From “Bulldozer” to “Crane” and now “Dragline,” we’ve built on our columnar-compressed, MPP shared-nothing core, expanded security and manageability, dramatically expanded data ingestion capabilities, and, most exciting of all, added a host of advanced analytics functions and extensibility APIs to the HP Vertica Analytics Platform itself. One key innovation is our ability to ingest and auto-schematize semi-structured data using HP Vertica Flex Zone, which takes away much of the friction in the analytic life-cycle from exploration to production.
We’ve also grown a vibrant community of practitioners and an ecosystem of complementary tools, including Hadoop.
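To give readers a feel for what auto-schematizing semi-structured data with Flex Zone looks like in practice, here is a minimal sketch, assuming a hypothetical JSON click-stream file and the fjsonparser that ships with the feature (names are invented and exact syntax may vary by release):

    -- Create a flex table with no predefined schema (hypothetical names)
    CREATE FLEX TABLE click_events();

    -- Load raw JSON; keys are discovered at load time rather than modeled up front
    COPY click_events FROM '/data/clicks.json' PARSER fjsonparser();

    -- Discovered keys can then be queried as virtual columns
    SELECT "user.id", "page.url" FROM click_events LIMIT 10;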

“Dragline,” our next release of the HP Vertica Analytics Platform, addresses the needs of the most demanding, analytic-driven organizations by providing many new features, including:

    • Project Maverick’s Live Aggregate Projections, which speed up queries that rely on resource-intensive aggregate functions like SUM, MIN/MAX, and COUNT (a brief sketch follows this list).
    • Dynamic mixed workload management, which identifies and adapts to varying query complexities — simple and ad-hoc queries as well as long-running advanced queries — and dynamically assigns the appropriate amount of resources to ensure the needs of all data consumers are met.
    • HP Vertica Pulse, which helps organizations leverage an in-database sentiment analysis tool that scores short data posts, including social data, such as Twitter feeds or product reviews, to gauge the most popular topics of interest, analyze how sentiment changes over time and identify advocates and detractors.
    • HP Vertica Place, which stores and analyzes geospatial data in real time, including locations, networks and regions.
    • An expanded SQL-on-Hadoop offering that gives users freedom to pick their data formats and where to store them, including HDFS, while still benefiting from the power of the Vertica analytic engine. Of course, there’s a lot more to the “Dragline” release, but these are the highlights.
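As a brief sketch of the live aggregate projection idea mentioned above, the general shape is a projection defined with a GROUP BY, so the aggregates are maintained as data is loaded; the table and column names below are invented for illustration and details may differ between releases:

    -- Hypothetical fact table
    CREATE TABLE sales (customer_id INT, amount NUMERIC(12,2), sale_date DATE);

    -- Live aggregate projection: per-customer totals are kept up to date on load
    CREATE PROJECTION sales_by_customer AS
        SELECT customer_id, SUM(amount) AS total_amount, COUNT(*) AS order_count
        FROM sales
        GROUP BY customer_id;

    -- A query like this can then be answered from the pre-aggregated projection
    -- instead of re-scanning the fact table:
    SELECT customer_id, SUM(amount) FROM sales GROUP BY customer_id;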

Q2. Vertica is referred to as an analytics platform. How does it differentiate with respect to conventional relational database systems (RDBMSes)?

Shilpa Lawande: Good question. First, let me clear the misconception that column stores are not relational – Vertica is a relational database, an RDBMS – it speaks tables and columns and standard SQL and ODBC, and like your favorite RDBMS, talks to a variety of BI tools. Now, there are many variations in the database market from low-cost solutions that lack advanced analytics to high-end solutions that can’t handle big data.
HP Vertica is the only one purpose-built for big data analytics – most conventional RDBMSs were purpose-built for OLTP and then retrofitted for analytics. Vertica’s core architecture with columnar storage, a columnar engine, aggressive use of data compression, our scale-out architecture, and, most importantly, our unique hybrid load architecture enables what we call real-time analytics, which gives us the edge over the competition.
You can keep loading your data throughout the day — not in batch at night — and you can query the data as it comes in, without any specialized indexes, materialized views, or other pre-processing. And we have a huge and ever-growing library of features and functions to explore and perform analytics on big data–both structured and semi-structured. All of these core capabilities add up to a powerful analytics platform–far beyond a conventional relational database.

Q3. Vertica is column-based. Could you please explain what are the main technological differences with respect to a conventional relational database system?

Shilpa Lawande: It’s about performance. A conventional RDBMS is bottlenecked with disk I/O.
The reason for this is that with a traditional database, data is stored on disks in a row-wise manner, so even if the query needs only a few columns, the entire row must be retrieved from disk. In analytic workloads, often there are hundreds of columns in the data and only a few are used in the query, so row-oriented databases simply don’t scale as the data sets get large.
Vendors who offer this type of database often require that you create indexes and materialized views to retrieve the relevant data in a reasonable amount of time. With columnar storage, you store data for each column separately, so that you can grab just the columns you need to answer the query. This can speed up queries immensely: hour-long queries can finish in minutes or seconds. Furthermore, Vertica stores and processes the data sorted, which enables us to do all manner of interesting optimizations to queries that further boost performance.

Some of the traditional database vendors out there claim they now have columnar store, but a true columnar store is not only about the way you store data, but the engine and the optimizations that are enabled by the column store.
For instance, an optimization called late materialization allows Vertica to delay retrieval of columns as late as possible in query processing so that minimal I/O and data movement is done until absolutely necessary. Vertica is the only engine that is truly columnar; everything else out there is a retrofit of a general-purpose engine that can read some kind of a columnar format.

Q4. What is so special of Vertica data compression?

Shilpa Lawande: The capability of Vertica to store data in columns allows us to take advantage of the similar traits in data. This gives us not only a footprint reduction in the disk needed to store data, but also an I/O performance boost — compressed data takes a shorter time to load. But, even more importantly, we use various encoding techniques on the data itself that enable us to process the data without expanding it first.
We have over a dozen schemes for how we store the data to optimize its storage, retrieval, and processing.

Q5. Vertica is designed for massively parallel processing (MPP). What is it?

Shilpa Lawande: Vertica is a database designed to run on a cluster of industry-standard hardware.
There are no special-purpose hardware components. The database is based on a shared-nothing architecture, where many nodes each store part of the database and do part of the work in processing queries. We optimize the processing to minimize data traffic over the network. We have built-in high availability to handle node failures. We also have a sophisticated elasticity mechanism that allows us to efficiently add and remove nodes from the cluster. This enables us to scale out to very large data sizes and handle very large data problems. In other words, it is massively parallel processing!

Q6. In the past, columnar databases were said to be slow to load. Is it still true now?

Shilpa Lawande: This may have been true with older, unsophisticated columnar databases. We have customers loading over 35 TB of data per hour into Vertica, so I think we’ve put that one squarely to rest.

Q7. Who are the users ready to try column-based “data slicers”? And for what kind of use cases?

Shilpa Lawande: Vertica is a technology broadly applicable in many industries and in many business situations. Here are just a few of them.

Data Warehouse Modernization – the customer has an underperforming data warehouse solution in place and wants to replace or augment their current analytics with a solution that will scale and deliver faster analytics at an overall lower TCO, using substantially less hardware.

Hadoop Acceleration – the customer has bought into Hadoop for a data lake solution and would like a more expressive and faster SQL-on-Hadoop solution or an analytic platform that can offer real-time analytics for production use.

Predictive analytics – the customer has some kind of machine data, clickstream logs, call detail records, security event data, network performance data, etc. over long periods of time and they would like to get value out of this data via predictive analytics. Use-cases include website personalization, network performance optimization, security threat forensics, quality control, predictive maintenance, etc.

Q8. What are the typical indicators which are used to measure how well systems are running and analyzing data in the enterprise? In other words, how “good” is the value derived from analyzing (Big) Data?

Shilpa Lawande: There are many, many advantages and places to derive value from big data.
First, just having the ability to answer your daily analytics faster can be a huge boost for the organization. For example, we had one brick-and-mortar retailer who wanted to brief sales associates and managers daily on the hottest-selling products, inventory levels and other store trends. With their legacy analytics system, they could not deliver these analytics fast enough to have them on hand. With Vertica, they now provide very detailed (and I might add graphically pleasing) analytics across all of their stores, right in the hands of the store manager via a tablet device. The analytics have boosted sales performance and efficiency across the chain. The user experience they get wouldn’t be possible without the speed of Vertica.

But what is most exciting to me is when Vertica is used to save lives and the environment. We have a client in the medical field who has used Vertica analytics to better detect infections in newborn infants by leveraging the data they have from the NICU. It’s difficult to detect infections in newborns because they don’t often run a fever, nor can they explain how they feel. The estimate is that this big data analytics has saved the lives of hundreds of newborn babies in the first year of use. Another example is the HP Earth Insights project, which used Vertica to create an early warning system to identify species threatened by destruction of tropical forests around the world.
This project, done in cooperation with Conservation International, is making an amazing difference to scientists and helping inform and influence policy decisions around our environment.

There are a LOT of great use cases like these coming out of the Vertica community.

Q9. What are the main technical challenges when analyzing data at speed?

Shilpa Lawande: In an analytics system, you tend to have a lot going on at the same time. There are data loads, both in batch and trickle loads. There is daily and regular analytics for generating daily reports. There may be data discovery where users are trying to find value in data. Of course, there are dashboards that executives rely upon to stay up to date. Finally, you may have ad-hoc queries that come in and try to take away resources. So perhaps the biggest challenge is dealing with all of these workloads and coming up with the most efficient way to manage it all.
We’ve invested a lot of resources in this area and the fruit of that labor is very much evident in the “Dragline” release.

Q10. Do you have some concrete example of use cases where HP Vertica is used to analyze data at speed?

Shilpa Lawande: Yes, we have many, see here.

Q11. How does HP Vertica differ from other analytical platforms offered by competitors such as IBM and Teradata, and from in-memory databases such as SAP HANA?

Shilpa Lawande: Vertica offers everything that’s good about legacy data warehouse technologies like the ability to use your favorite visualization tools, standard SQL, and advanced analytic functionality.

In general, the legacy databases you mentioned are pretty good at handling analysis of business data, but they are still playing catch-up when it comes to big data – the volume, variety, and velocity. A row store simply cannot deliver the analytical performance and scale of an MPP columnar platform like Vertica.

In-memory databases are a good acceleration solution for some classes of business analytics, but, again, when it comes to very large data problems, the economics of putting all the data in memory simply do not work. That said, Vertica itself has an in-memory component which is at the core of our high-speed loading architecture, so I believe we have the best of both worlds – ability to use memory where it matters and still support petabyte scales!

——————–
Shilpa Lawande, VP Software Engineering and Customer Experience, HP/Vertica

Shilpa Lawande has been an integral part of the Vertica engineering team from its inception to its acquisition by HP in 2011. Shilpa brings over 15 years of experience in databases, data warehousing and grid computing to HP/Vertica.
Besides being responsible for Vertica’s Engineering team, Shilpa also manages the Customer Experience organization for Vertica including Customer Support, Training and Professional Services. Prior to Vertica, she was a key member of the Oracle Server Technologies group where she worked directly on several data warehousing and self-managing features in the Oracle Database.
Shilpa is a co-inventor on several patents on query optimization, materialized views and automatic index tuning for databases. She has also co-authored two books on data warehousing using the Oracle database as well as a book on Enterprise Grid Computing. She has been named to the 2012 Women to Watch list by Mass High Tech and awarded HP Software Business Unit Leader of the year in 2012.
Shilpa has a Master’s in Computer Science from the University of Wisconsin-Madison and a Bachelor’s in Computer Science and Engineering from the Indian Institute of Technology, Mumbai.

Related Posts

[1] On Big Data: Interview with Shilpa Lawande, VP of Engineering at Vertica, ODBMS Industry Watch, November 16, 2011

Resources

Using the HP Vertica Analytics Platform to Manage Massive Volumes of Smart Meter Data. HP Technical white paper 2014.

Knowing your game. How electronic game companies use Big Data for retention and monetization. HP Business white paper June 2014.

Big Data Applications and Analytics. Dr. Geoffrey Fox, Indiana University. Lecture Notes

Applied Predictive Analytics: Principles and Techniques for the Professional Data Analyst. Dean Abbott, Wiley May 2014

Follow ODBMS.org on Twitter: @odbmsorg

On PostgreSQL. Interview with Bruce Momjian. http://www.odbms.org/blog/2014/06/bruce-momjian/ http://www.odbms.org/blog/2014/06/bruce-momjian/#comments Tue, 17 Jun 2014 15:02:01 +0000 http://www.odbms.org/blog/?p=3216

“There are four things that motivate open source development teams:
1. The challenge/puzzle of programming, 2. Need for the software, 3. Personal advancement, 4. Belief in open source”
— Bruce Momjian.

On PostgreSQL and the challenges of motivating and managing open source teams, I have interviewed Bruce Momjian, Senior Database Architect at EnterpriseDB, and Co-founder of the PostgreSQL Global Development Group and Core Contributor.

RVZ

Q1. How did you manage to transform PostgreSQL from an abandoned academic project into a commercially viable, now enterprise relational database?

Bruce Momjian: Ever since I was a developer of database applications, I have been interested in how SQL databases work internally. In 1996, Postgres was the only open source database available that was like the commercial ones I used at work. And I could look at the Postgres code and see how SQL was processed.

At the time, I also kind of had a boring job, or at least a non-challenging one, writing database reports and applications. So getting to know Postgres was exciting and helped me grow as a developer. I started getting more involved with Postgres. I took over the Postgres website from the university, reading bug reports and fixes, and I started interacting with other developers through the website. Fortunately, I got to know other developers who also found Postgres attractive and exciting, and together we assembled a group of people with similar interests.
We had a small user community at the time, but found enough developers to keep the database feature set moving forward.

As we got more users, we got more developers. Then commercial support opportunities began to grow, helping foster a rich ecosystem of users, developers and support companies that created a self-reinforcing structure that in turn continued to drive growth.

You look at Postgres now and it looks like there was some grand plan. But in fact, it was just a matter of setting up some structure and making sure all the aspects continued to reinforce each other efficiently.

Q2. What are the current activities and projects of the PostgreSQL community Global Development Group?

Bruce Momjian: We are working on finalizing the version 9.4 beta, which will feature greatly improved JSON capabilities and lots of other stuff. We hope to release 9.4 in September/October of this year.

Q3. How do you go about motivating and managing open source teams?

Bruce Momjian: There are four things that motivate open source development teams:

* The challenge/puzzle of programming
* Need for the software
* Personal advancement
* Belief in open source

Our developers are motivated by a combination of these. Plus, our community members are very supportive of one another, meaning that working on Postgres is seen as something that gives contributors a sense of purpose and value.

I couldn’t tell you the mix of motivations that drives any one individual, and I doubt many people, if asked, could answer the question simply. But it’s clear that a mixture of these motivations really drives everything we do, even if we can’t articulate exactly which are most important at any one time.

Q4. What is the technical roadmap for PostgreSQL?

Bruce Momjian: We are continuing to work on handling NoSQL-like workloads better. Our plans for expanded JSON support in 9.4 are part of that.
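To make this NoSQL-style workload concrete, here is a minimal sketch of the kind of JSON usage 9.4 targets, using the jsonb type; the table and keys are invented for the example:

    -- Store schemaless documents alongside relational columns
    CREATE TABLE events (
        id      serial PRIMARY KEY,
        payload jsonb
    );

    INSERT INTO events (payload)
    VALUES ('{"user": "alice", "action": "login", "device": "mobile"}');

    -- The @> containment operator (indexable with GIN) filters on document contents
    SELECT payload ->> 'user' AS user_name
    FROM events
    WHERE payload @> '{"action": "login"}';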

We need greater parallelism, particularly to allow a single query to make better use of all the server’s resources.
We have made some small steps in this area in 9.3 and plan to make more in 9.4, but there is still much work to be done.

We are also focused on data federation, allowing Postgres to access data from other data sources.
We already support many data interfaces, but they need improvement, and we need to add the ability to push joins and aggregates to foreign data sources, where applicable.
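For readers who have not used Postgres’ foreign data wrappers, the data federation described above looks roughly like the following sketch with postgres_fdw; the server, credentials and table names are illustrative:

    CREATE EXTENSION postgres_fdw;

    CREATE SERVER sales_srv FOREIGN DATA WRAPPER postgres_fdw
        OPTIONS (host 'sales-db.example.com', dbname 'sales');

    CREATE USER MAPPING FOR CURRENT_USER SERVER sales_srv
        OPTIONS (user 'reporting', password 'secret');

    CREATE FOREIGN TABLE remote_orders (order_id integer, amount numeric)
        SERVER sales_srv OPTIONS (table_name 'orders');

    -- Simple predicates are pushed to the remote server; aggregates such as this
    -- were still evaluated locally at the time of the interview.
    SELECT count(*), sum(amount) FROM remote_orders WHERE amount > 100;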

I should add that Evan Quinn, an analyst at Enterprise Management Associates, wrote a terrific research report, PostgreSQL: The Quiet Giant of Enterprise Database, and included some information on plans for 9.4.

Q5. How do managers and executives view Postgres, and in particular how do they make Postgres deployment decisions?

Bruce Momjian: In the early years, adoption of Postgres was driven by users that had little or no money and by organizations with heavy data needs and small profit margins. These users were willing to overlook Postgres’ limitations.

Now that Postgres has filled out its feature set, almost every user segment using relational databases is considering Postgres. For management, cost savings are key. For engineers, it’s Postgres’ technology, ease of use and flexibility.

Q6. What is your role at EnterpriseDB?

Bruce Momjian: My primary responsibility at EnterpriseDB is to help the Postgres community.
EnterpriseDB supports my work as a core team member and I play an active role in the overall decision-making and the organization of community initiatives. I also travel frequently to conferences worldwide, delivering presentations on advances in Postgres and leading Postgres training sessions. At EnterpriseDB, I occasionally do trainings, help with tech support, attend conferences as a Postgres ambassador and visit customers. And of course, do some PR and interviews, like this one.

Q7. Has PostgreSQL still something in common with the original Ingres project at the University of California, Berkeley?

Bruce Momjian: Not really. There is no Ingres code in Postgres, though I think the psql terminal SQL tool is similar to the one in Ingres.

Q8. If you had to compare PostgreSQL with MySQL and MariaDB, what would be the differentiators?

Bruce Momjian: The original focus of MySQL was simple read-only queries. While it has improved since then, it has struggled to go beyond that. Postgres has always targeted the middle-level SQL workload, and is now targeting high-end usage, and the simple-usage cases of NoSQL.

MySQL certainly has greater adoption and application support. But in almost every other measure, Postgres is a better choice for most users. The good news is that people are finally starting to realize that.

Q9. How do you see the database market evolving? And how do you position PostgreSQL in such database market?

Bruce Momjian: We are in close communication with our user community, with our developers reading and responding to email requests daily. That keeps our focus on users’ needs. Postgres, being an object-relational, extensible database, is well suited to being expanded to meet changing user workloads. I don’t think any other database has that level of flexibility and strong developer/user community interaction.

Qx. Would you like to add something?

Bruce Momjian: We had a strong PG NYC conference recently. I posted a summary about it here.

I think that conference highlights some significant trends for Postgres in the months ahead.

————-
Bruce Momjian co-founded the PostgreSQL Global Development Group in 1996, the organization of volunteers that steers the development and release of the PostgreSQL open source database. Bruce played a key role in organizing like-minded database professionals to shepherd PostgreSQL from an abandoned academic project into a commercially viable, now enterprise-class relational database. He dedicates the bulk of his time to organizing, educating and evangelizing within the open source database community while acting as Senior Database Architect for EnterpriseDB. Bruce began his career as a high school math and computer science teacher and still serves as an adjunct professor at Drexel University. After leaving high school education, Bruce worked for more than a decade as a database consultant building specialized applications for law firms. He then went on to work for the PostgreSQL community with the support of several private companies before joining EnterpriseDB in 2006 to continue his work in the community. Bruce holds a master’s degree from Arcadia University and earned his bachelor’s degree at Columbia University.

Related Posts

On PostgreSQL. Interview with Tom Kincaid. ODBMS Industry Watch, May 30, 2013

MySQL-State of the Union. Interview with Tomas Ulin. ODBMS Industry Watch, February 11, 2013

Resources

The PostgreSQL Global Development Group

EnterpriseDB Postgres Plus Advanced Server

MariaDB vs MySQL, Daniel Bartholomew, Sr. Technical Writer, Monty Program

Evaluating the energy efficiency of OLTP operations: A case study on PostgreSQL.

Follow ODBMS.org on Twitter: @odbmsorg

##

Setting up a Big Data project. Interview with Cynthia M. Saracco. http://www.odbms.org/blog/2014/01/setting-up-a-big-data-project-interview-with-cynthia-m-saracco/ http://www.odbms.org/blog/2014/01/setting-up-a-big-data-project-interview-with-cynthia-m-saracco/#comments Mon, 27 Jan 2014 07:39:46 +0000 http://www.odbms.org/blog/?p=2929

“Begin with a clear definition of the project’s business objectives and timeline, and be sure that you have appropriate executive sponsorship. The key stakeholders need to agree on a minimal set of compelling results that will impact your business; furthermore, technical leaders need to buy into the overall feasibility of the project and bring design and implementation ideas to the table.”–Cynthia M. Saracco.

How easy is it to set up a Big Data project? On this topic I have interviewed Cynthia M. Saracco, senior solutions architect at IBM’s Silicon Valley Laboratory. Cynthia is an expert in Big Data, analytics, and emerging technologies. She has more than 25 years of software industry experience.

RVZ

Q1. What is the best way to get started with a Big Data project?

Cynthia M. Saracco: Begin with a clear definition of the project’s business objectives and timeline, and be sure that you have appropriate executive sponsorship.
The key stakeholders need to agree on a minimal set of compelling results that will impact your business; furthermore, technical leaders need to buy into the overall feasibility of the project and bring design and implementation ideas to the table. At that point, you can evaluate your technical options for the best fit. Those options might include Hadoop, a relational DBMS, a stream processing engine, analytic tools, visualization tools, and other types of software. Often, a combination of several types of software is needed for a single Big Data project. Keep in mind that every technology has its strengths and weaknesses, so be sure you understand enough about the technologies you’re inclined to use before moving forward.

If you decide that Hadoop should be part of your project, give serious consideration to using a distribution that packages commonly needed components into a single bundle so you can minimize the time required to install and configure your environment. It’s also helpful to keep in mind the existing skills of your staff and seek out offerings that enable them to be productive quickly.
Tools, applications, and support for common scripting and query languages all contribute to improved productivity. If your business application needs to integrate with existing analytical tools, DBMSs, or other software, look for offerings that have some built-in support for that as well.

Finally, because Big Data projects can get pretty complex, I often find it helpful to segment the work into broad categories and then drill down into each to create a solid plan. Examples of common technical tasks include collecting data (perhaps from various sources), preparing the data for analysis (which can range from simple format conversions to more sophisticated data cleansing and enrichment operations), analyzing the data, and rendering or sharing the results of that analysis with business users or downstream applications. Consider scalability and performance needs in addition to your functional requirements.

Q2. What are the most common problems and challenges encountered in Big Data projects?

Cynthia M. Saracco: Lack of appropriately scoped objectives and lack of required skills are two common problems. Regarding objectives, you need to find an appropriate use case that will impact your business and tailor your project’s technical work to meet the business goals of that project efficiently. Big Data is an exciting, rapidly evolving technology area, and it’s easy to get sidetracked experimenting with technical features that may not be essential to solving your business problem. While such experimentation can be fun and educational, it can also result in project delays as well as deliverables that are off target. In addition, without well-scoped business objectives, the technical staff may end up chasing a moving target.

Regarding skills, there’s high demand for data scientists, architects, and developers experienced with Big Data projects. So you may need to decide if you want to engage a service provider to supplement in-house skills or if you want to focus on growing (or acquiring) new in-house skills. Fortunately, there are a number of Big Data training options available today that didn’t exist several years ago. Online courses, conferences, workshops, MeetUps, and self-study tutorials can help motivated technical professionals expand their skill set. However, from a project management point of view, organizations need to be realistic about the time required for staff to learn new Big Data technologies. Giving someone a few days or weeks to master Hadoop and its complementary offerings isn’t very realistic. But really, I see the skills challenge as a point-in-time issue. Many people recognize the demand for Big Data skills and are actively expanding their skills, so supply will grow.

Q3. Do you have any metrics for measuring the “value” that can be derived by analyzing Big Data?

Cynthia M. Saracco: Most organizations want to focus on their return on investment (ROI). Even if your Big Data solution uses open source software, there are still expenses involved for designing, developing, deploying, and maintaining your solution. So what did your business gain from that investment?
The answer to that question is going to be specific to your application and your business. For example, if a telecommunications firm is able to reduce customer churn by 10% as a result of a Big Data project, what’s that worth? If an organization can improve the effectiveness of an email marketing campaign by 20%, what’s that worth? If an organization can respond to business requests twice as quickly, what’s that worth? Many clients have these kinds of metrics in mind as they seek to quantify the value they have derived — or hope to derive — from their investment in a Big Data project.

Q4. Is Hadoop replacing the role of OLAP (online analytical processing) in preparing data to answer specific questions?

Cynthia M. Saracco: More often, I’ve seen Hadoop used to augment or extend traditional forms of analytical processing, such as OLAP, rather than completely replace them. For example, Hadoop is often deployed to bring large volumes of new types of information into the analytical mix — information that might have traditionally been ignored or discarded. Log data, sensor data, and social data are just a few examples of that. And yes, preparing that data for analysis is certainly one of the tasks for which Hadoop is used.

Q5. IBM is offering BigInsights and Big SQL. What are they?

Cynthia M. Saracco: InfoSphere BigInsights is IBM’s Hadoop-based platform for analyzing and managing Big Data. It includes Hadoop, a number of complementary open source projects (such as HBase, Hive, ZooKeeper, Flume, Pig, and others) and a number of IBM-specific technologies designed to add value.

Big SQL is part of BigInsights. It’s IBM’s SQL interface to data stored in BigInsights. Users can create tables, query data, load data from various sources, and perform other functions. For a quick introduction to Big SQL, read this article.

Q6. How does Big SQL compare to RDBMS technology? When’s it most useful?

Cynthia M. Saracco: Big SQL provides standard SQL-based query access to data managed by BigInsights. Query support includes joins, unions, sub-queries, windowed aggregates, and other popular capabilities. Because Big SQL is designed to exploit the Hadoop ecosystem, it introduces Hadoop-specific language extensions for certain SQL statements.
For example, Big SQL supports Hive and HBase for storage management, so a Big SQL CREATE TABLE statement might include clauses related to data formats, field delimiters, SerDes (serializers/deserializers), column mappings, column families, etc. The article I mentioned earlier has some examples of these, and the product InfoCenter has further details.
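As a rough illustration of those Hive-style clauses, a hypothetical table definition of the kind Big SQL accepts might look something like this; the names, delimiters and formats are invented for the example, so consult the product documentation for the exact syntax:

    -- Hypothetical Hive-style table over tab-delimited files in HDFS
    CREATE TABLE web_logs (
        client_ip   VARCHAR(32),
        url         VARCHAR(1024),
        status_code INT,
        hit_ts      TIMESTAMP
    )
    ROW FORMAT DELIMITED
    FIELDS TERMINATED BY '\t'
    STORED AS TEXTFILE;

    -- Standard SQL, including joins and aggregates, then runs over those files
    SELECT status_code, COUNT(*) AS hits
    FROM web_logs
    GROUP BY status_code;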

In many ways, Big SQL can serve as an easy on-ramp to Hadoop for technical professionals who have a relational DBMS background. Big SQL is good for organizations that want to exploit in-house SQL skills to work with data managed by BigInsights. Because Big SQL supports JDBC and ODBC, many traditional SQL-based tools can work readily with Big SQL tables, which can also make Big Data easier to use by a broader user community.

However, Big SQL doesn’t turn Hadoop — or BigInsights — into a relational DBMS. Commercial relational DBMSs come with built-in, ACID-based transaction management services and model data largely in tabular formats. They support granular levels of security via SQL GRANT and REVOKE statements. In addition, some RDBMSs support 3GL applications developed in “legacy” programming languages such as COBOL. These are some examples of capabilities that aren’t part of Big SQL.

Q7. What are some of its current limitations?

Cynthia M. Saracco: The current level of Big SQL included in BigInsights V2.1.0.1 enables users to create tables but not views.
Date/time data is supported through a full TIMESTAMP data type, and some common SQL operations supported by relational DBMSs aren’t available or have specific restrictions.
Examples include INSERT, UPDATE, DELETE, GRANT, and REVOKE statements. For more details on what’s currently supported in Big SQL, skim through the InfoCenter.

Q8. How does BigInsights differ from, and add value to, open source Hadoop?

Cynthia M. Saracco: As I mentioned earlier, BigInsights includes a number of IBM-specific technologies designed to add value to the open source technologies included with the product. Very briefly, these include:
– A Web console with administrative facilities, a Web application catalog, customizable dashboards, and other features.
– A text analytic engine and library that extracts phone numbers, names, URLs, addresses, and other popular business artifacts from messages, documents, and other forms of textual data.
– Big SQL, which I mentioned earlier.
– BigSheets, a spreadsheet-style tool for business analysts.
– Web-accessible sample applications for importing and exporting data, collecting data from social media sites, executing ad hoc queries, and monitoring the cluster. In addition, application accelerators (tool kits with dozens of pre-built software articles) are available for those working with social data and machine data.
– Eclipse tooling to speed development and testing of BigInsights applications, new text extractors, BigSheets functions, SQL-based applications, Java applications, and more.
– An integrated installation tool that installs and configures all selected components across the cluster and performs a system-wide health check.
– Connectivity to popular enterprise software offerings, including IBM and non-IBM RDBMSs.
– Platform enhancements focusing on performance, security, and availability. These include options to use an alternative, POSIX-compliant distributed file system (GPFS-FPO) and an alternative MapReduce layer (Adaptive MapReduce) that features Platform Symphony’s advanced job scheduler, workload manager, and other capabilities.

You might wonder what practical benefits these kinds of capabilities bring. While that varies according to each organization’s usage patterns, one industry analyst study concluded that BigInsights lowers total cost of ownership (TCO) by an average of 28% over a three-year period compared with an open source-only implementation.

Finally, a number of IBM and partner offerings support BigInsights, which is something that’s important to organizations that want to integrate a Hadoop-based environment into their broader IT infrastructure. Some examples of IBM products that support BigInsights include DataStage, Cognos Business Intelligence, Data Explorer, and InfoSphere Streams.

Q9. Could you give some examples of successful Big Data projects?

Cynthia M. Saracco: I’ll summarize a few that have been publicly discussed so you can follow links I provide for more details. An energy firm launched a Big Data project to analyze large volumes of data that could help it improve the placement of new wind turbines and significantly reduce response time to business user requests.
A financial services firm is using Big Data to process large volumes of text data in minutes and offer its clients more comprehensive information based on both in-house and Internet-based data.
An online marketing firm is using Big Data to improve the performance of its clients’ email campaigns.
And other firms are using Big Data to detect fraud, assess risk, cross-sell products and services, prevent or minimize network outages, and so on. You can find a collection of videos about Big Data projects undertaken by various organizations; many of these videos feature users speaking directly about their Big Data experiences and the results of their projects.
And a recent report on Analytics: The real-world use of big data contains further examples, based on the results of a survey of more than 1,100 businesses that the Said Business School at the University of Oxford conducted with IBM’s Institute for Business Value.

Qx Anything else to add?

Cynthia M. Saracco: Hadoop isn’t the only technology relevant to managing and analyzing Big Data, and IBM’s Big Data software portfolio certainly includes more than BigInsights (its Hadoop-based offering). But if you’re a technologist who wants to learn more about Hadoop, your best bet is to work with the software. You’ll find a number of free online courses in the public domain, such as those at Big Data University. And IBM offers a free copy of its Quick Start Edition of BigInsights as a VMWare image or an installable image to help you get started with minimal effort.

—–
Cynthia M. Saracco is a senior solutions architect at IBM’s Silicon Valley Laboratory, specializing in Big Data, analytics, and emerging technologies. She has more than 25 years of software industry experience, has written three books and more than 70 technical papers, and holds six patents.
—————
Related Posts

Big Data: Three questions to Pivotal. ODBMS Industry Watch, January 20, 2014.

Big Data: Three questions to InterSystems. ODBMS Industry Watch, January 13, 2014.

Operational Database Management Systems. Interview with Nick Heudecker. ODBMS Industry Watch, December 16, 2013.

On Big Data and Hadoop. Interview with Paul C. Zikopoulos. ODBMS Industry Watch, June 10, 2013.

Resources

What’s the big deal about Big SQL? by Cynthia M. Saracco , Senior Software Engineer, IBM, and Uttam Jain, Software Architect, IBM.

ODBMS.org: Free resources on Big Data and Analytical Data Platforms:
| Blog Posts | Free Software | Articles | Lecture Notes | PhD and Master Thesis |

Follow ODBMS.org on Twitter: @odbmsorg

##
    Big Data Analytics at Thomson Reuters. Interview with Jochen L. Leidner http://www.odbms.org/blog/2013/11/big-data-analytics-at-thomson-reuters-interview-with-jochen-l-leidner/ http://www.odbms.org/blog/2013/11/big-data-analytics-at-thomson-reuters-interview-with-jochen-l-leidner/#comments Fri, 15 Nov 2013 09:21:06 +0000 http://www.odbms.org/blog/?p=2779

    “My experience overall with almost all open-source tools has been very positive: open source tools are very high quality, well documented, and if you get stuck there is a helpful and responsive community on mailing lists or Stack Exchange and similar sites.” —Dr. Jochen L. Leidner.

    I wanted to know how Thomson Reuters uses Big Data. I have interviewed Dr. Jochen L. Leidner, Lead Scientist, of the London R&D at Thomson Reuters.

    RVZ

    Q1. What is your current activity at Thomson Reuters?
    Jochen L. Leidner: For the most part, I carry out applied research in information access, and that’s what I have been doing for quite a while. After five years with the company – I joined from the University of Edinburgh, where I had been a postdoctoral Royal Society of Edinburgh Enterprise Fellow half a decade ago – I am currently a Lead Scientist with Thomson Reuters, where I am building up a newly-established London site as part of our Corporate Research & Development group (by the way: we are hiring!). Before that, I held research and innovation-related roles in the USA and in Switzerland.
    Let me say a few words about Thomson Reuters before I go more into my own activities, just for background. Thomson Reuters has around 50,000 employees in over 100 countries and sells information to professionals in many verticals, including finance & risk, legal, intellectual property & scientific, tax & accounting.
    Our headquarters are located at 3 Times Square in the city of New York, NY, USA.
    Most people know our REUTERS brand from reading their newspapers (thanks to our highly regarded 3,000+ journalists at news desks in about 100 countries, often putting their lives at risk to give us reliable reports of the world’s events) or receiving share price information on the radio or TV, but as a company, we are also involved in as diverse areas as weather prediction (as the weather influences commodity prices) and determining citation impact of academic journals (which helps publishers sell their academic journals to librarians), or predicting Nobel prize winners.
    My research colleagues and I study information access and especially means to improve it, using natural language processing, information extraction, machine learning, search engine ranking, recommendation systems and similar areas of investigation.
    We carry out a lot of contract research for internal business units (especially if external vendors do not offer what we need, or if we believe we can build something internally that is lower cost and/or better suited to our needs), feasibility studies to de-risk potential future products that are considered, and also more strategic, blue-sky research that anticipates future needs. As you would expect, we protect our findings and publish them in the usual scientific venues.

    Q2. Do you use Data Analytics at Thomson Reuters and for what?
    Jochen L. Leidner: Note that terms like “analytics” are rather too broad to be useful in many instances; but the basic answer is “yes”, we develop, apply internally, and sell as products to our customers what can reasonably be called solutions that incorporate “data analytics” functions.
    One example is the recommendation engine CaRE (Al-Kofahi et al., 2007), developed by our group, Corporate Research & Development, which is led by our VP of Research & Development, Khalid al-Kofahi. It is a bit similar in spirit to Amazon’s well-known book recommendations. We used it to recommend legal documents to attorneys, as a service to supplement the user’s active search on our legal search engine with “see also…”-type information. This is an example of a capability developed in-house that also made it into a product, and it is very popular.
    Thomson Reuters sells information services, often under a subscription model, and for that it is important to have metrics available that indicate usage, in order to inform our strategy. So another example of data analytics is that we study how document usage can inform personalization and ranking, and from where documents are accessed, and we use this to plan network bandwidth and to determine caching server locations.
    A completely different example is citation information: Since 1969, when our esteemed colleague Eugene Garfield (he is now officially retired, but is still active) came up with the idea of citation impact, our Scientific business division is selling the journal citation impact factor – an analytic that can be used as a proxy for the importance of a journal (and, by implication, as an argument to a librarian to purchase a subscription of that journal for his or her university library).
    Or, to give another example from the financial markets area, we are selling predictive models (Starmine) that estimate how likely it is that a given company will go bankrupt within the next six months.

    Q3. Do you have Big Data at Thomson Reuters? Could you please give us some examples of Big Data Use Cases at your company?
    Jochen L. Leidner: For most definitions of “big”, yes we do. Consider that we operate a news organization, which generates tens of thousands of news reports every day (if we count all languages together). Then we have photo journalists who create large numbers of high-quality, professional photographs to document current events visually, and videos comprising audio-visual storytelling and interviews. We further collect all major laws, statutes, regulations and legal cases in major jurisdictions around the world, and enrich the data with our own meta-data using both manual expertise and automatic classification and tagging tools to enhance findability. We hold collections of scientific articles and patents in full text and abstracts.
    We gather, consolidate and distribute price information for financial instruments from hundreds of exchanges around the world. We sell real-time live feeds as well as access to decades of these time series for the purpose of back-testing trading strategies.

    Q4. What “value” can be derived by analyzing Big Data at Thomson Reuters?
    Jochen L. Leidner: This is the killer question: we take the “value” very much without the double quotes – big data analytics lead to cost savings as well as generate new revenues in a very real, monetary sense of the word “value”. Because our solutions provide what we call “knowledge to act” to our customers, i.e., information that lets them make better decisions, we provide them with value as well: we literally help our customers save the cost of making a wrong decision.

    Q5. What are the main challenges for big data analytics at Thomson Reuters ?
    Jochen L. Leidner: I’d say absolute volume, growth, data management/integration, rights management, and privacy are some of the main challenges.
    One obvious challenge is the size of the data. It’s not enough to have enough persistent storage space to keep it, we also need backup space, space to process it, caches and so on – it all adds up. Another is the growth and speed of that growth of the data volume. You can plan for any size, but it’s not easy to adjust your plans if unexpected growth rates come along.
    Another challenge that we must look into is the integration between our internal data holdings, external public data (like the World Wide Web, or commercial third-party sources – like Twitter – which play an important role in the modern news ecosystem), and customer data (customers would like to see their own internal, proprietary data be inter-operable with our data). We need to respect the rights associated with each data set, as we deal with our own data, third party data and our customers’ data. We must be very careful regarding privacy when brainstorming about the next “big data analytics” idea – we take privacy very seriously and our CPO joined us from a government body that is in charge of regulating privacy.

    Q6. How do you handle the Big Data Analytics “process” challenges with deriving insight?
    Jochen L. Leidner: Usually analytics projects happen as an afterthought to leverage existing data created in a legacy process, which means not a lot of change of process is needed at the beginning. This situation changes once there is resulting analytics output, and then the analytics-generating process needs to be integrated with the previous processes.
    Even with new projects, product managers still don’t think of analytics as the first thing to build into a product for a first bare-bones version, and we need to change that; instrumentation is key for data gathering so that analytics functionality can build on it later on.
    In general, analytics projects follow a process of (1) capturing data, (2) aligning data from different sources (e.g., resolving when two objects are the same), (3) pre-processing or transforming the data into a form suitable for analysis, (4) building some model and (5) understanding the output (e.g. visualizing and sharing the results). This five-step process is followed by an integration phase into the production process to make the analytics repeatable.

    Q7. What kind of data management technologies do you use? What is your experience in using them?
    Jochen L. Leidner: For storage and querying, we use relational database management systems to store valuable data assets and also use NoSQL databases such as CouchDB and MongoDB in projects where applicable. We use homegrown indexing and retrieval engines as well as open source libraries and components like Apache Lucene, Solr and ElasticSearch.
    We use parallel, distributed computing platforms such as Hadoop/Pig and Spark to process data, and virtualization to manage isolation of environments.
    We sometimes push out computations to Amazon’s EC2 and storage to S3 (but this can only be done if the data is not sensitive). And of course in any large organization there is a lot of homegrown software around. My experience overall with almost all open-source tools has been very positive: open source tools are very high quality, well documented, and if you get stuck there is a helpful and responsive community on mailing lists or Stack Exchange and similar sites.

    Q8. Do you handle un-structured data? If yes, how?
    Jochen L. Leidner: We curate lots of our own unstructured data from scratch, including the thousands of news stories that our over three thousand REUTERS journalists write every day, and we also enrich the unstructured content produced by others: for instance, in the U.S. we manually enrich legal cases with human-written summaries (written by highly-qualified attorneys) and classify them based on a proprietary taxonomy, which informs our market-leading WestlawNext product (see this CIKM 2011 talk), our search engine for the legal profession, who need to find exactly the right cases. Over the years, we have developed proprietary content repositories to manage content storage, meta-data storage, indexing and retrieval. One of our challenges is to unite our data holdings, which often come from various acquisitions that use their own technology.

    Q9. Do you use Hadoop? If yes, what is your experience with Hadoop so far?
    Jochen L. Leidner: We have two clusters featuring Hadoop, Spark and GraphLab, and we are using them intensively. Hadoop is mature as an implementation of the MapReduce computing paradigm, but has its shortcomings because it is not a true operating system (but probably it should be) – for instance regarding the stability of HDFS, its distributed file system. People have started to realize there are shortcomings, and have started to build other systems around Hadoop to fill gaps, but these are still early stage, so I expect them to become more mature first and then there might be a wave of consolidation and integration. We have definitely come a long way since the early days.
    Typically, on our clusters we run batch information extraction tasks, data transformation tasks and large-scale machine learning processes to train classifiers and taggers for these. We are also inducing language models and training recommender systems on them. Since many training algorithms are iterative, Spark can win over Hadoop for these, as it keeps models in RAM.

    Q10. Hadoop is a batch processing system. How do you handle Big Data Analytics in real time (if any)?
    Jochen L. Leidner: The dichotomy is perhaps between batch processing and dialog processing, whereas real-time (“hard”, deterministic time guarantee for a system response) goes hand-in-hand with its opposite non-real time, but I think what you are after here is that dialog systems have to be responsive. There is no one-size-fits-all method for meeting (near) real-time requirements or for making dialog systems more responsive; a lot of analytics functions require the analysis of more than just a few recent data-points, so if that’s what is needed it may take its time. But it is important that critical functions, such as financial data feeds, are delivered as fast as possible – micro-seconds matter here. The more commoditized analytics functions become, the faster they need to be available to retain at least speed as a differentiator.

    Q11 Cloud computing and open source: Do you they play a role at Thomson Reuters? If yes, how?
    Jochen L. Leidner: Regarding cloud computing, we use cloud services internally and as part of some of our product offerings. However, there are also reservations – a lot of our applications contain information that is too sensitive to entrust a third party, especially as many cloud vendors cannot give you a commitment with respect to hosting (or not hosting) in particular jurisdictions. Therefore, we operate our own set of data centers, and some part of these operates as what has become known as “private clouds”, retaining the benefit of the management outsourcing abstraction, but within our large organization rather than pushing it out to a third party. Of course the notion of private clouds is leading the cloud idea ad absurdum quite a bit, because it sacrifices the economy of scale, but having more control is an advantage.
    Open source plays a huge role at Thomson Reuters – we rely on many open source components, libraries and systems, especially under the MIT, BSD, LGPL and Apache licenses. For example, some of our tagging pipelines rely on Apache UIMA, which was originally developed at IBM and has since seen contributions from researchers all around the world (from Darmstadt to Boulder). To date, we have not been very good about opening up our own services in the form of source code, but we are trying to change that now, and we have just launched a new corporation-wide process for open-sourcing software. We also have an internal sharing repository, “Corporate Source”, but in my personal view the audience within any single company is too small: open source (like other recent waves such as cloud computing or crowdsourcing) needs Internet scale to work, including Web search engines through which the various projects can be discovered.

    Q12 What are the main research challenges ahead? And what are the main business challenges ahead?
    Jochen L. Leidner: Some of the main business challenges are the cost pressure that some of our customers face and the increasing availability of low-cost or free-of-charge information sources, i.e. the commoditization of information. I would caution here that although the amount of information available for free is large, this in itself does not help you if you have a particular problem and cannot find the information that solves it, either because the solution is not there despite the size, or because it is there but findability is low. Further challenges include information integration, making systems ever more adaptive (but only to the extent that it is useful), and supporting better personalization. Having said this, sometimes systems need to be run in a non-personalized mode; in the field of e-discovery, for example, you need a certain consistency, namely that the same legal search system retrieves the same things today and tomorrow, and for different parties.

    Q13 Anything else you wish to add?
    Jochen L. Leidner: I would encourage decision makers of global companies not to be misled by fast-changing “hype” language: words like “cloud”, “analytics” and “big data” are too general to inform a professional discussion. Putting my linguist’s hat on, I can only caution against the lack of precision inherent in using marketing language in technological discussions: for instance, cluster computing is not the same as grid computing. And what looks like “big data” today we will almost certainly carry around with us on a mobile computing device tomorrow. Also, buzzwords like “Big Data” do not by themselves solve any problem – they are not magic bullets. To solve any problem, look at the input data, specify the desired output data, and think hard about whether and how you can compute the desired result – nothing but “good old” computer science.

    Dr. Jochen L. Leidner is a Lead Scientist with Thomson Reuters, where he is building up the London site of the corporation’s Research & Development group.
    He holds Master’s degrees in computational linguistics, English language and literature, and computer science from Friedrich-Alexander University Erlangen-Nuremberg, and in computer speech, text and internet technologies from the University of Cambridge, as well as a Ph.D. in information extraction from the University of Edinburgh (“Toponym Resolution in Text”). He is the recipient of the first ACM SIGIR Doctoral Consortium Award, a Royal Society of Edinburgh Enterprise Fellowship in Electronic Markets, and two DAAD scholarships.
    Prior to his research career, he worked as a software developer, including for SAP AG (basic technology and knowledge management) as well as for startups.
    He led the development teams of multiple question answering systems, including QED at Edinburgh and Alyssa at Saarland University, the latter of which ranked third in the last DARPA/NIST TREC open-domain question answering factoid track evaluation.
    His main research interests include information extraction, question answering and search, geo-spatial grounding, and applied machine learning, with a focus on the methodology behind research & development in the area of information access.
    In 2013, Dr. Leidner also taught an invited lecture course, “Language Technology and Big Data”, at the University of Zurich, Switzerland.

    —————————-
    Related Posts

    Data Analytics at NBCUniversal. Interview with Matthew Eric Bassett. September 23, 2013

    On Linked Data. Interview with John Goodwin. September 1, 2013

    Big Data Analytics at Netflix. Interview with Christos Kalantzis and Jason Brown. February 18, 2013

    Resources

    ODBMS.org free resources on Big Data and Analytical Data Platforms:
    Blog Posts | Free Software | Articles | Lecture Notes | PhD and Master Theses

    Follow us on Twitter: @odbmsorg
