
"Trends and Information on Big Data, New Data Management Technologies, and Innovation."


Apr 14 14

On the SciDB array database. Interview with Mike Stonebraker and Paul Brown.

by Roberto V. Zicari

“SciDB is both a data store and a massively parallel compute engine for numerical processing. The inclusion of this computational platform is what makes us the first “computational database”, not just a SQL-style decision support DBMS. Hence, we need a new moniker to describe this class of interactions. We settled on computational databases, but if your readers have a better suggestion, we are all ears!”
–Mike Stonebraker, Paul Brown.

On the SciDB array database, I have interviewed Mike Stonebraker, MIT Professor and Paradigm4 co-founder and CTO, and Paul Brown, Paradigm4 Chief Architect.

RVZ

Q1: What is SciDB and why did you create it?

Mike Stonebraker, Paul Brown: SciDB is an open source array database with scalable, built-in complex analytics, programmable from R and Python. The requirements for SciDB emerged from discussions between academic database researchers—Mike Stonebraker and Dave DeWitt—and scientists at the first Extremely Large Databases conference (XLDB) at SLAC in 2007 about coping with the peta-scale data from the forthcoming LSST telescope.

Recognizing that commercial and industrial users were about to face the same challenges as scientists, Mike Stonebraker founded Paradigm4 in 2010 to make the ideas explored in early prototypes available as a commercial-quality software product. Paradigm4 develops and supports both a free, open-source Community Edition (scidb.org/forum) and an Enterprise Edition with additional features (paradigm4.com).

Q2. With the rise of Big Data analytics, is the convergence of analytic needs between science and industry really happening?

Mike Stonebraker, Paul Brown:  There is a “sea change” occurring as companies move from Business Intelligence (think SQL analytics) to Complex Analytics (think predictive modelling, clustering, correlation, principal components analysis, graph analysis, etc.). Obviously science folks have been doing complex analytics on big data all along.

Another force driving this sea change is all the machine-generated data produced by cell phones, genomic sequencers, and by devices on the Industrial Internet and the Internet of Things.  Here too science folks have been working with big data from sensors, instruments, telescopes and satellites all along.  So it is quite natural that a scalable computational database like SciDB that serves the science world is a good fit for the emerging needs of commercial and industrial users.

There will be a convergence of the two markets as many more companies aspire to develop innovative products and services using complex analytics on big and diverse data. In the forefront are companies doing electronic trading on Wall Street; insurance companies developing new pricing models using telematics data; pharma and biotech companies analyzing genomics and clinical data; and manufacturing companies building predictive models to anticipate repairs on expensive machinery.  We expect everybody will move to this new paradigm over time.  After all, a predictive model integrating diverse data is much more useful than a chart of numbers about past behavior.

Q3. What are the typical challenges posed by scientific analytics?

Mike Stonebraker, Paul Brown: We asked a lot of working scientists the same question, and published a paper in IEEE's Computing in Science & Engineering summarizing their answers (*see citation below). In a nutshell, there are four primary issues.

1. Scale. Science has always been intensely “data driven”.  With the ever-increasing massive data-generating capabilities of scientific instruments, sensors, and computer simulations, the average scientist is overwhelmed with data and needs data management and analysis tools that can scale to meet his or her needs, now and in the future.

2. New Analytic Methods. Historically, analysis tools have focused on business users, and have provided easy-to-use interfaces for submitting SQL aggregates to data warehouses. Such business intelligence (BI) tools are not useful to scientists, who universally want much more complex analyses, whether it be outlier detection, curve fitting, analysis of variance, predictive models or network analysis. Such "complex analytics" is defined on arrays in linear algebra, and requires a new generation of client-side and server-side tools in DBMSs.

3. Provenance. One of the central requirements that scientists have is reproducibility. They need to be able to send their data to colleagues to rerun their experiments and produce the same answers. As such, it is crucial to keep prior versions of data in the face of updates, error correction, and the like. The right way to provide such provenance is through a no-overwrite DBMS, which allows time travel back to when the experiment in question was performed.

4. Interactivity. Unlike business users who are often comfortable with batch reporting of information, scientific users are invariably exploring their data, asking "what if" questions and testing hypotheses. What they need is interactivity on very large data sets.

Q4. What are, in your opinion, the commonalities between scientific and industrial analytics?

Mike Stonebraker, Paul Brown: We would state the question in reverse: "What are the differences between the two markets?" In our opinion, the two markets will converge quickly as commercial and industrial companies move to the analytic paradigms pervasive in the science marketplace.

Q5. Why, in the past, did the database system software community fail to build the kinds of systems that scientists needed for managing massive data sets?

Mike Stonebraker, Paul Brown: Mostly it’s because scientific problems represent a $0 billion market! However, the convergence of industrial requirements and science requirements means that science can “piggy back” on the commercial market and get their needs met.

Q6. SciDB is a scalable array database with native complex analytics. Why did you choose a data model based on multidimensional arrays?

Mike Stonebraker, Paul Brown: Our main motivation is that at scale, the complex analyses done by “post sea change” users are invariably about applying parallelized linear algebraic algorithms to arrays. Whether you are doing regression, singular value decomposition, finding eigenvectors, or doing operations on graphs, you are performing a sequence of matrix operations.  Obviously, this is intuitive and natural in an array data model, whereas you have to recast tables into arrays if you begin with an RDBMS or keep data in files.  Also, a native array implementation can be made much faster than a table-based system by directly implementing multi-dimensional clustering and doing selective replication of neighboring data items.

Our secondary motivation is that, just like mathematical matrices, geospatial data, time-series data, image data, and graph data are most naturally organized as arrays. By preserving the inherent ordering in the data, SciDB supports extremely fast selection (of vectors, planes, and 'hypercubes'), multi-dimensional windowed aggregates, and re-gridding to change spatial or temporal resolution.
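To make the ordering argument concrete, here is a minimal NumPy sketch (not SciDB's API, just an illustration of why a native array layout makes slicing and re-gridding cheap; the array shape and window size are arbitrary):

```python
import numpy as np

# Toy 2-D array of cells: one dimension is a spatial index, the other is time.
temps = np.random.default_rng(0).normal(15.0, 5.0, size=(1000, 1000))

# Selecting a plane or "hypercube" is just a slice: no join or sort is needed,
# because the storage order already encodes the dimensions.
region = temps[100:200, 500:600]                    # a 100 x 100 sub-array

# Re-gridding to a coarser resolution: mean over non-overlapping 10 x 10 windows.
coarse = temps.reshape(100, 10, 100, 10).mean(axis=(1, 3))

print(region.shape, coarse.shape)                   # (100, 100) (100, 100)
```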

Q7. In a nutshell, how do you manage scalability with a high degree of tolerance to failures?

Mike Stonebraker, Paul Brown: In a nutshell? Partitioning, and redundancy (k-replication).

First, SciDB splits each array's attributes apart, just like any columnar system. Next, we partition each array into rectilinear blocks we call "chunks". We then employ a variety of mapping functions that map an array's chunks to SciDB instances. For each copy of an array we use a different mapping function, so that copies of each chunk land on different nodes of the cluster. If a node goes down, we figure out where there is a redundant copy of the data and move the computation there.
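A toy sketch of that placement scheme, assuming a hypothetical chunk size, instance count and replication factor (this is not SciDB's actual mapping code, just the shape of the idea):

```python
import hashlib

N_INSTANCES = 8      # hypothetical cluster size
CHUNK_EDGE = 1000    # hypothetical chunk edge length, in cells
K = 2                # replication factor

def chunk_of(row, col):
    """Rectilinear chunking: which chunk does a cell fall into?"""
    return (row // CHUNK_EDGE, col // CHUNK_EDGE)

def instance_for(chunk, replica):
    """A different mapping per replica, so copies of the same chunk
    land on different instances."""
    base = int(hashlib.sha1(repr(chunk).encode()).hexdigest(), 16)
    return (base + replica) % N_INSTANCES

chunk = chunk_of(12_345, 67_890)
placements = [instance_for(chunk, r) for r in range(K)]
print(chunk, placements)   # if the first instance fails, read from the second
```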

Q8. How do you handle data compression in SciDB?

Mike Stonebraker, Paul Brown: Use of compression in modern data stores is a very important topic. Minimizing storage while retaining information and supporting extremely rapid data access informs every level of SciDB's design. For example, SciDB splits every array into single-attribute components. We compress a chunk's worth of cell values for a specific attribute. At the lowest level, we compress attribute data using techniques like run-length encoding. In addition, our implementation has an abstraction for compression to support other compression algorithms.
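As a rough illustration of run-length encoding a chunk's worth of values for one attribute (a generic RLE sketch, not SciDB's actual codec):

```python
def rle_encode(values):
    """Run-length encode one chunk's worth of a single attribute's cell values."""
    runs = []
    for v in values:
        if runs and runs[-1][0] == v:
            runs[-1][1] += 1
        else:
            runs.append([v, 1])
    return runs

def rle_decode(runs):
    return [v for v, count in runs for _ in range(count)]

chunk_values = [0, 0, 0, 7, 7, 0, 0, 0, 0]
encoded = rle_encode(chunk_values)
assert rle_decode(encoded) == chunk_values
print(encoded)   # [[0, 3], [7, 2], [0, 4]]
```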

Q9. Why support two query languages?

Mike Stonebraker, Paul Brown:  Actually the primary interfaces we are promoting are R and Python as they are the languages of choice of data scientists, quants, bioinformaticians, and scientists.   SciDB-R and SciDB-Py allow users to interactively query SciDB using R and Python. Data is persisted in SciDB. Math operators are overloaded so that complex analytical computations execute scalably in the database.

Early on we surveyed potential and existing SciDB users, and found there were two very different types. By and large, commercial users using RDBMSs said "make it look like SQL". For those users we created AQL—array SQL. On the other hand, data scientists and programmers preferred R, Python, and functional languages. For the second class of users we created SciDB-R, SciDB-Py, and AFL—an array functional language.

All queries get compiled into a query plan, which is a sequence of algebraic operations.  Essentially all relational versions of SQL do exactly the same thing. In SciDB, AFL, the array functional language, is the underlying language of algebraic operators. Hence, it is easy to surface and support AFL in addition to AQL, SciDB-R, and SciDB-Py, allowing us to satisfy the preferred mode of working for many classes of users.
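Conceptually, a compiled plan is just a chain of array-in, array-out operators, whichever language the query was written in. A hedged sketch of that idea (plain Python/NumPy, not AFL itself; the operators are made up for illustration):

```python
import numpy as np

# Every "operator" maps an array to an array, so a compiled plan is just an
# ordered list of operators -- conceptually what AFL spells out explicitly.
def op_filter(threshold):
    return lambda a: np.where(a > threshold, a, 0.0)

def op_scale(factor):
    return lambda a: a * factor

def run_plan(array, plan):
    for op in plan:
        array = op(array)
    return array

data = np.arange(12, dtype=float).reshape(3, 4)
plan = [op_filter(5.0), op_scale(0.1)]   # same plan whether it came from AQL,
print(run_plan(data, plan))              # AFL, SciDB-R or SciDB-Py
```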

Q10. You describe SciDB as a computational database – not a data warehouse, not a business-intelligence database, and not a transactional database. Could you please elaborate more on this point?

Mike Stonebraker, Paul Brown: In our opinion, there are two mature markets for DBMSs: transactional DBMSs that are optimized for large numbers of users performing short write-oriented ACID transactions, and data warehouses, which strive for high performance on SQL aggregates and other read-oriented longer queries.  The users of SciDB fit into neither category.  They are universally doing more complex mathematical calculations than SQL aggregates on their data, and their DBMS interactions are typically longer read-oriented queries. SciDB is both a data store and a massively parallel compute engine for numerical processing. The inclusion of this computational platform is what makes us the first “computational database”, not just a SQL-style decision support DBMS. Hence, we need a new moniker to describe this class of interactions. We settled on computational databases, but if your readers have a better suggestion, we are all ears!

Q11. How does SciDB differ from analytical databases such as HP Vertica, and from in-memory analytics databases such as SAP HANA?

Mike Stonebraker, Paul Brown: Both are data warehouse products, optimized for warehouse workloads.  SciDB serves a different class of users from these other systems. Our customers’ data are naturally represented as arrays that don’t fit neatly or efficiently into relational tables.  Our users want more sophisticated analytics—more numerical, statistical, and graph analysis—and not so much SQL OLAP.

Q12. What about Teradata?

Mike Stonebraker, Paul Brown: Another data warehouse vendor. Plus, SciDB runs on commodity hardware clusters or in a cloud, not on proprietary appliances or expensive servers.

Q13. Anything else you wish to add?

Mike Stonebraker, Paul Brown:  SciDB is currently being used by commercial users for computational finance, bioinformatics and clinical informatics, satellite image analysis, and industrial analytics.  The publicly accessible NIH NCBI One Thousand Genomes browser has been running on SciDB since the Fall of 2012.

Anyone can try out SciDB using an AMI or a VM available at scidb.org/forum.

————————–

Mike Stonebraker, CTO, Paradigm4
Renowned database researcher, innovator, and entrepreneur: Berkeley, MIT, Postgres, Ingres, Illustra, Cohera, Streambase, Vertica, VoltDB, and now Paradigm4.

Paul Brown, Chief Architect, Paradigm4
Premier database ‘plumber’ and researcher moving from the “I’s” (Ingres, Illustra, Informix, IBM) to a “P” (Paradigm4).

————————-
Resources

*Citation for IEEE paper
Stonebraker, M.; Brown, P.; Zhang, D.; Becla, J., "SciDB: A Database Management System for Applications with Complex Analytics," Computing in Science & Engineering, vol. 15, no. 3, pp. 54-62, May-June 2013.
doi: 10.1109/MCSE.2013.19, URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6461866&isnumber=6549993

- ODBMS.org: free resources related to Paradigm4

Related Posts

- The Gaia mission, one year later. Interview with William O’Mullane. ODBMS Industry Watch, January 16, 2013

- Objects in Space vs. Friends in Facebook. ODBMS Industry Watch, April 13, 2011.

Follow ODBMS.org on Twitter: @odbmsorg

##

Apr 7 14

Big Data: three questions to DataStax

by Roberto V. Zicari

“High volume and data driven businesses have led to new types of data emerging from the cloud, mobile devices, social media and sensor devices. For applications processing such data, traditional relational databases such as Oracle simply run out of steam.”–Robin Schumacher

The sixth interview in the “Big Data: three questions to” series of interviews is with Robin Schumacher, VP of Products at DataStax.

RVZ

Q1. What is your current product offering?

Robin Schumacher: DataStax offers the first enterprise-class NoSQL platform for data-driven, real-time online applications. Our flagship product is DataStax Enterprise 4.0, built on Apache Cassandra. It is a complete big data platform with the full power of Cassandra, offering a range of solutions including built-in analytics, integrated search, an in-memory option, and the most comprehensive security feature set of any NoSQL database.
An integrated analytics component allows users to store and manage line-of-business application data and analyze that same data within the platform. The analytics capability allows for comprehensive workload management and lets the user run real-time transactional and enterprise search workloads in a seamlessly integrated database.
Built-in search offers robust full-text search, faceted search, rich document handling and geospatial search.
Benefits include full workload management, continuous availability, real-time functionality and data protection.
Lastly, security runs through the entire platform to prevent unauthorized access and guard sensitive data. Visual backup and restore processes make retrieving lost data extremely easy.
DataStax OpsCenter, a simplified management solution, is included with DataStax Enterprise. This service makes it easy to manage Cassandra and DataStax Enterprise clusters by giving administrators, architects and developers a view of the system from a centralized dashboard. OpsCenter installs seamlessly and gives system operators the flexibility to monitor and manage the most complex workloads from any web browser.

Q2. Who are your current customers and how do they typically use your products?

Robin Schumacher: DataStax is the first viable alternative to Oracle and powers the online applications for 400+ customers and more than 20 of the Fortune 100. Our customer industries range from e-commerce to education to digital entertainment and the top use cases are the following:
1. Fraud detection
2. The Internet of Things
3. Messaging
4. Personalization
5. Collections/Playlists

Customers include Netflix, eBay, Adobe, Amara Health Analytics and many others.

The most common baseline use for our product is to serve as an operational database management system for online applications that must scale to incredible levels and must remain online at all times.

Q3. What are the main new technical features you are currently working on and why?

Robin Schumacher: We recently added an in-memory option that enables companies to process data up to 100 times faster. This option excels in use cases that require fast write and read operations, and is particularly suited when data is overwritten frequently, but not actually deleted. DataStax Enterprise 4.0 is the first NoSQL database to combine this in-memory option with Cassandra's always-on architecture, linear scalability and datacenter support, delivering lightning-fast performance that allows businesses to scale applications with zero downtime – particularly useful in financial services use cases or any application where performance is key.

High volume and data driven businesses have led to new types of data emerging from the cloud, mobile devices, social media and sensor devices. For applications processing such data, traditional relational databases such as Oracle simply run out of steam. DataStax Enterprise 4.0 offers a powerful, modern alternative to help build online applications that scale as the business grows. This in-memory capability equals faster performance, easy development, flexible performance management and seamless search:
Objects created in-memory optimize performance and deliver increased speed, which enables businesses to deliver data to customers faster than ever before.
In-memory objects act as Cassandra tables, so they are transparent to applications and developers have no learning curve to manage. Administrators can decide where to assign data, making performance optimization easier than ever.

Enhanced internal cluster communications deliver faster search operations, helping developers build applications more efficiently.

Related Posts

- Big Data: Three questions to Aerospike. ODBMS Industry Watch, March 2, 2014

- Big Data: Three questions to McObject. ODBMS Industry Watch, February 14, 2014

- Big Data: Three questions to VoltDB. ODBMS Industry Watch, February 6, 2014.

- Big Data: Three questions to Pivotal. ODBMS Industry Watch, January 20, 2014.

- Big Data: Three questions to InterSystems. ODBMS Industry Watch, January 13, 2014.

- Operational Database Management Systems. Interview with Nick Heudecker, ODBMS Industry Watch, December 16, 2013.

Resources

- ODBMS.org: free download of technical resources on DataStax

- ODBMS.org: free download of technical resources on Apache Cassandra

- 2013 Gartner Magic Quadrant for Operational Database Management Systems by Donald Feinberg, Merv Adrian, Nick Heudecker, October 21, 2013

Follow ODBMS.org on Twitter: @odbmsorg

Mar 25 14

What are the challenges for modern Data Centers? Interview with David Gorbet.

by Roberto V. Zicari

“The real problem here is the word “silo.” To answer today’s data challenges requires a holistic approach. Your storage, network and compute need to work together.”–David Gorbet.

What are the challenges for modern data centers? On this topic I have interviewed David Gorbet, Vice President of Engineering at MarkLogic.

RVZ

Q1. Data centers are evolving to meet the demands and complexities imposed by increasing business requirements. What are the main challenges?

David Gorbet: The biggest business challenge is the rapid pace of change in both available data and business requirements. It’s no longer acceptable to spend years designing a data application, or the infrastructure to run it. You have to be able to iterate on functionality quickly. This means that your applications need to be developed in a much more agile manner, but you also need to be able to reallocate your infrastructure dynamically to the most pressing needs. In the era of Big Data this problem is exacerbated. The increasing volume and complexity of data under management is stressing both existing technologies and IT budgets. It’s not just a matter of scale, although traditional “scale-up” technologies do become very expensive as data volumes grow. It’s also a matter of complexity of data. Today a lot of data has a mix of structured and unstructured components, and the traditional solution to this problem is to split the structured components into an RDBMS, and use a search technology for the unstructured components. This creates additional complexity in the infrastructure, as different technology stacks are required for what really should be components of the same data.

Traditional technologies for data management are not agile. You have to spend an inordinate amount of time designing schemas and planning indexing strategies, both of which require pretty much full knowledge of the data and the query patterns needed to provide the application value. This has to be done before you can even load data. On the infrastructure side, even if you’ve embraced cloud technologies like virtualization, it’s unlikely you’re able to make good use of them at the data layer. Most database technologies are not architected to allow elastic expansion or contraction of capacity or compute power, which makes it hard to achieve many of the benefits (and cost savings) of cloud technologies.

To solve these problems you need to start thinking differently about your data center strategy. You need to be thinking about a data-centered data center, versus today’s more application-centered model.

Q2. You talked about a “data-centered” data center. What is it, and how does it differ from a classical data warehouse?

David Gorbet: To understand what I mean by “data-centered” data center, you have to think about the alternative, which is more application-centered. Today, if you have data that’s useful, you build an application or put it in a data warehouse to unlock its value. These are database applications, so you need to build out a database to power them. This database needs a schema, and that schema is optimized for the application. To build this schema, you need to understand both the data you’ll be using, and the queries that the application requires.
So you have to know in advance everything the application is going to do before you can build anything. What’s more, you then have to ETL this data from wherever it lives into the application-specific database.

Now, if you want another application, you have to do the same thing. Pretty soon, you have hundreds of data stores with data duplicated all over the place. Actually, it’s not really duplicated; it’s data derived from other data, because as you ETL the data you change its form, losing some of the context and combining what’s left with bits of data from other sources. That’s even worse than straight-up duplication because provenance is seldom retained through this process, so it’s really hard to tell where data came from and trace it back to its source. Now imagine that you have to correct some data.
Can you be sure that the correction flowed through to every downstream system? Or what if you have to delete data due to a privacy issue, or change security permissions on data? Even with “small data” this is complicated, but it’s much harder and costlier with high volumes of data.

A “data-centered” data center is one that is focused on the data, its use, and its governance through its lifecycle as the primary consideration. It’s architected to allow a single management and governance model, and to bring the applications to the data, rather than copying data to the applications. With the right technologies, you can build a data-centered data center that minimizes all the data duplication, gives you consistent data governance, enables flexibility both in application development over the data and in scaling up and down capacity to match demand, allowing you to manage your data securely and cost-effectively throughout its lifecycle.

Q3. Data center resources are typically stored in three silos: compute, storage and network: is this a problem?

David Gorbet: It depends on your technology choices. Some data management technologies require direct-attached storage (DAS), so obviously you can’t manage storage separately with that kind of technology. Others can make use of either DAS or shared storage like SAN or NAS.
With the right technology, it’s not necessarily a problem to have storage managed independently from compute.
The real problem here is the word “silo.” To answer today’s data challenges requires a holistic approach. Your storage, network and compute need to work together.

Your question could also apply to application architectures. Traditionally, applications are built in a three-tiered architecture, with a DBMS for data management, an application server for business logic, and a front-end client where the UI lives. There are very good reasons for this architecture, and I believe it’s likely to be the predominant model for years to come. But even though business logic is supposed to reside in the app server, every enterprise DBMS supports stored procedures, and these are commonly used to leverage compute power near the data for cases where it would be too slow and inefficient to move data to the middle tier. Increasingly, enterprise DBMSes also have sophisticated built-in functions (and in many cases user-defined functions) to make it easy to do things that are most efficiently done right where the data lives. Analytic aggregate calculations are a good example of this. Compute doesn’t just reside in the middle tier.

This is nothing new, so why am I bringing it up? Because as data volumes grow larger, the problem of moving data out of the DBMS to do something with it is going to get a lot worse. Consider for example the problem faced by the National Cancer Institute. The current model for institutions wanting to do research based on genomic data is to download a data set and analyze it. But by the end of 2014, the Cancer Genome Atlas is expected to have grown from less than 500 TB to 2.5 PB. Just downloading 2.5 PB, even over a 10-gigabit network, would take almost a month.

The solution? Bring more compute to the data. The implication? Twofold: First, methods for narrowing down data sets prior to acting on them are critical. This is why search technology is fast becoming a key feature of a next-generation DBMS. Search is the query language for unstructured data, and if you have complex data with a mix of structured and unstructured components, you need to be able to mix search and query seamlessly. Second, DBMS technologies need to become much more powerful so that they can execute sophisticated programs and computations efficiently where the data lives, scoped in real-time to a search that can narrow the input set down significantly. That’s the only way this stuff is going to get fast enough to happen in real-time. Another way of putting this is that the “M” in DBMS is going to increase in scope. It’s not enough just to store and retrieve data. Modern DBMS technology needs to be able to do complex, useful computations on it as well.

Q4. How do you build such a “data-centered” data center?

David Gorbet: First you need to change your mindset. Think about the data as the center of everything. Think about managing your data in one place, and bringing the application to the data by exposing data services off your DBMS. The implications for how you architect your systems are significant. Think service-oriented architectures and continuous deployment models.

Next, you need the right technology stack. One that can provide application functionality for transactions, search and discovery, analytics, and batch computation with a single governance and scale model. You need a storage system that gives great SLAs on high-value data and great TCO on lower-value data, without ETL. You need the ability to expand and contract compute power to serve the application needs in real time without downtime, and to run this infrastructure on premises or in the cloud.

You need the ability to manage data throughout its lifecycle, to take it offline for cost savings while leaving it available for batch analytics, and to bring it back online for real-time search, discovery or analytics within minutes if necessary. To power applications, you need the ability to create powerful, performant and secure data services and expose them right from where the data lives, providing the data in the format needed by your application on the fly.
We call this “schema on read.”
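A minimal sketch of what "schema on read" means in practice (illustrative Python, not MarkLogic's API; the documents and field names are invented): documents are stored as they arrive, and each data service projects out the shape it needs at read time.

```python
import json

# Documents are stored exactly as they arrive -- no up-front schema design.
store = [
    json.dumps({"id": 1, "name": "ACME", "ticker": "ACM", "filings": ["10-K"]}),
    json.dumps({"id": 2, "name": "Globex", "address": {"city": "Springfield"}}),
]

def read_as(doc_json, fields):
    """Project only the fields this particular data service needs,
    tolerating documents that lack some of them."""
    doc = json.loads(doc_json)
    return {field: doc.get(field) for field in fields}

# Two different "applications" read the same stored documents in different shapes.
print([read_as(d, ["id", "name"]) for d in store])
print([read_as(d, ["id", "ticker"]) for d in store])
```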

Of course all this has to be enterprise class, with high availability, disaster recovery, security, and all the enterprise functionality your data deserves, and it has to fit in your shrinking IT budget. Sounds impossible, but the technology exists today to make this happen.

Q5. For what kind of mission critical apps is such a “data-centered” data center useful?

David Gorbet: If you have a specific application that uses specific data, and you won’t need to incorporate new data sources into that application or use that data for another application, then you don’t need a data-centered data center. Unfortunately, I’m having a hard time thinking of such an application. Even dull line-of-business apps don’t stand alone anymore. The data they create and maintain is sent to a data warehouse for analysis.
The new mindset is that all data is potentially valuable, and that isn’t just restricted to data created in-house.
More and more data comes from outside the organization, whether in the form of reference data, social media, linked data, sensor data, log data… the list is endless.

A data-centered data center strategy isn’t about a specific application or application type. It’s about the way you have to think about your data in this new era.

Q6. How Hadoop fits into this “data-centered” data center?

David Gorbet: Hadoop is a key enabling technology for the data-centered data center. HDFS is a great file system for storing loads of data cheaply.
I think of it as the new shared storage infrastructure for “big data.” Now HDFS isn’t fast, so if you need speed, you may need NAS, SAN, or even DAS or SSD. But if you have a lot of data, it’s going to be much cheaper to store it in HDFS than in traditional data center storage technologies. Hadoop MapReduce is a great technology for batch analytics. If you want to comb through a lot of data and do some pretty sophisticated stuff to it, this is a good way to do it. The downside to MapReduce is that it’s for batch jobs. It’s not real-time.

So Hadoop is an enabling technology for a data-centered data center, but it needs to be complemented with high-performance storage technologies for data that needs this kind of SLA, and more powerful analytic technologies for real-time search, discovery and analysis. Hadoop is not a DBMS, so you also need a DBMS with Hadoop to manage transactions, security, real-time query, etc.

Q7. What are the main challenges when designing an ETL strategy?

David Gorbet: ETL is hard to get right, but the biggest challenge is maintaining it. Every app has a v2, and usually this means new queries that require new data that needs a new schema and revised ETL. ETL also just fundamentally adds complexity to a solution.
It adds latency since many ETL jobs are designed to run in batches. It’s hard to track provenance of data through ETL, and it’s hard to apply data security and lifecycle management rules through ETL. This isn’t the fault of ETL or ETL tools.
It’s just that the model is fundamentally complex.

Q8. With Big Data analytics you don’t know in advance what data you’re going to need (or get in the future). What is the solution to this problem?

David Gorbet: This is a big problem for relational technologies, where you need to design a schema that can fit all your data up front.
The best approach here is to use a technology that does not require a predefined schema, and that allows you to store different entities with different schemas (or no schema) together in the same database and analyze them together.
A document database, which is a type of NoSQL database, is great for this, but be careful which one you choose because some NoSQL databases don’t do transactions and some don’t have the indexing capability you need to search and query the data effectively.
Another trend is to use Semantic Web technology. This involves modeling data as triples, which represent assertions with a subject, a predicate, and an object.
Like “This derivative (subject) is based on (predicate) this underlying instrument (object).”
It turns out you can model pretty much any data that way, and you can invent new relationships (predicates) on the fly as you need them.
No schema required. It’s also easy to relate data entities together, since triples are ideal for modeling relationships. The challenge with this approach is that there’s still quite a bit of thought required to figure out the best way to represent your data as triples. To really make it work, you need to define rules about what predicates you’re going to allow and what they mean so that data is modeled consistently.
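A toy sketch of the triple idea, using the derivative example above (plain Python tuples, not an RDF store or SPARQL):

```python
# Assertions as (subject, predicate, object) triples; new predicates can be
# introduced at any time without a schema change.
triples = [
    ("derivative:42", "is_based_on", "instrument:IBM"),
    ("derivative:42", "has_notional", "1000000"),
    ("instrument:IBM", "listed_on", "exchange:NYSE"),
]

def match(store, s=None, p=None, o=None):
    """Return the triples matching an optional subject/predicate/object pattern."""
    return [t for t in store
            if (s is None or t[0] == s)
            and (p is None or t[1] == p)
            and (o is None or t[2] == o)]

# "What is derivative:42 based on?"
print(match(triples, s="derivative:42", p="is_based_on"))
```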

Q9. What is the cost to analyze a terabyte of data?

David Gorbet: That depends on what technologies you’re using, and what SLAs are required on that data.
If you’re ingesting new data as you analyze, and you need to feed some of the results of the analysis back to the data in real time, for example if you’re analyzing risk on derivatives trades before confirming them, and executing business rules based on that, then you need fast disk, a fair amount of compute power, replicas of your data for HA failover, and additional replicas for DR. Including compute, this could cost you about $25,000/TB.
If your data is read-only and your analysis does not require high-availability, for example a compliance application to search those aforementioned derivatives transactions, you can probably use cheaper, more tightly packed storage and less powerful compute, and get by with about $4,000/TB. If you’re doing mostly batch analytics and can use HDFS as your storage, you can do this for as low as $1,500/TB.

This wide disparity in prices is exactly why you need a technology stack that can provide real-time capability for data that needs it, but can also provide great TCO for the data that doesn’t. There aren’t many technologies that can work across all these data tiers, which is why so many organizations have to ETL their data out of their transactional system to an analytic or archive system to get the cost savings they need. The best solution is to have a technology that can work across all these storage tiers and can manage migration of data through its lifecycle across these tiers seamlessly.
Again, this is achievable today with the right technology choices.
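As a back-of-the-envelope illustration of why tiering matters, using the per-terabyte figures above (the 100 TB split across tiers is a made-up example):

```python
# Illustrative only: the $/TB figures come from the answer above; the 100 TB
# split across tiers is a made-up example.
tiers = [
    ("real-time, HA + DR replicas",  10, 25_000),   # (name, TB, $ per TB)
    ("read-only compliance search",  30,  4_000),
    ("batch analytics on HDFS",      60,  1_500),
]

blended = sum(tb * per_tb for _, tb, per_tb in tiers)
all_hot = sum(tb for _, tb, _ in tiers) * 25_000

print(f"tiered: ${blended:,}  vs  everything on the hot tier: ${all_hot:,}")
# tiered: $460,000  vs  everything on the hot tier: $2,500,000
```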

———————————-
David Gorbet, Vice President, Engineering, MarkLogic.
David has two decades of experience bringing to market some of the highest-volume applications and enterprise software in the world. He has shipped dozens of releases of business and consumer applications, server products and services ranging from open source to large-scale online services for businesses, and has twice helped start and grow billion-dollar software products.

Prior to MarkLogic, David helped pioneer Microsoft’s business online services strategy by founding and leading the SharePoint Online team. In addition to SharePoint Online, David has held a number of positions at Microsoft and elsewhere with a number of products, including Microsoft Office, Extricity B2Bi server software, and numerous incubation products.

David holds a Bachelor of Applied Science degree in Systems Design Engineering with an additional major in Psychology from the University of Waterloo, and an MBA from the University of Washington Foster School of Business.

Related Posts

- On Linked Data. Interview with John Goodwin. ODBMS Industry Watch, September 1, 2013

- On NoSQL. Interview with Rick Cattell. ODBMS Industry Watch, August 19, 2013

Resources

- Got Loss? Get zOVN!
Authors: Daniel Crisan, Robert Birke, Gilles Cressier, Cyriel Minkenberg and Mitch Gusat. IBM Research – Zurich Research Laboratory.
Abstract: Datacenter networking is currently dominated by two major trends. One aims toward lossless, flat layer-2 fabrics based on Converged Enhanced Ethernet or InfiniBand, with benefits in efficiency and performance.

- F1: A Distributed SQL Database That Scales
Authors: Jeff Shute, Radek Vingralek, Eric Rollins, Stephan Ellner, Traian Stancescu, Bart Samwel, Mircea Oancea, John Cieslewicz, Himani Apte, Ben Handy, Kyle Littlefield, Ian Rae*. Google, Inc., *University of Wisconsin-Madison
Abstract: F1 is a distributed relational database system built at Google to support the AdWords business.

Events

David Gorbet will be speaking at MarkLogic World in San Francisco from April 7-10, 2014.

ODBMS.org on Twitter: @odbmsorg

Mar 18 14

On SQL and NoSQL. Interview with Dave Rosenthal

by Roberto V. Zicari

“Despite the obvious shared word ‘transaction’ and the canonical example of a database transaction which modifies multiple bank accounts, I don’t think that database transactions are particularly relevant to financial applications.”–Dave Rosenthal.

On SQL and NoSQL, I have interviewed Dave Rosenthal, CEO of FoundationDB.

RVZ

Q1. What criteria would you suggest to users who need to trade off durability against lower latency, higher throughput and write availability?

Dave Rosenthal: There is a tradeoff between commit latency and durability–especially in distributed databases. At one extreme a database client can just report success immediately (without even talking to the database server) and buffer the writes in the background. Obviously, that hides latency well, but you could lose a suffix of transactions. At the other extreme, you can replicate writes across multiple machines, fsync them on each of the machines, and only then report success to the client.
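A minimal sketch of those two extremes (hypothetical clients, not FoundationDB's API):

```python
import os, queue, threading

class FireAndForgetClient:
    """One extreme: report success immediately and flush in the background.
    Lowest commit latency, but a crash can lose a suffix of transactions."""
    def __init__(self, path):
        self.pending = queue.Queue()
        self.log = open(path, "ab")
        threading.Thread(target=self._writer, daemon=True).start()

    def commit(self, record: bytes):
        self.pending.put(record)          # returns before anything is durable

    def _writer(self):
        while True:
            self.log.write(self.pending.get())
            self.log.flush()

class SynchronousReplicatedClient:
    """Other extreme: only report success once every replica has fsync'd."""
    def __init__(self, paths):
        self.replicas = [open(p, "ab") for p in paths]

    def commit(self, record: bytes):
        for f in self.replicas:           # in a real system these are remote machines
            f.write(record)
            f.flush()
            os.fsync(f.fileno())          # durable on this replica before returning
```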

FoundationDB is optimized to provide good performance in its default setting, which is the safest end of that tradeoff.

Usually, if you want some reasonable durability guarantee, you are talking about a commit latency of a small constant factor times the network latency. So, the real latency issues come with databases spanning multiple data centers. In that case FoundationDB users are able to choose whether they want durability guarantees in all data centers before commit (increasing commit latencies), which is our default setting, or whether they would like to relax durability guarantees by returning a commit when the data is fsync’d to disk in just one datacenter.

All that said, in general, we think that the application is usually a more appropriate place to try to hide latency than the database.

Q2. Justin Sheehy of Basho in an interview said [1] “I would most certainly include updates to my bank account as applications for which eventual consistency is a good design choice. In fact, bankers have understood and used eventual consistency for far longer than there have been computers in the modern sense”. What is your opinion on this?

Dave Rosenthal: Yes, we totally agree with Justin. Despite the obvious shared word ‘transaction’ and the canonical example of a database transaction which modifies multiple bank accounts, I don’t think that database transactions are particularly relevant to financial applications. In fact, true ACID transactions are way more broadly important than that. They give you the ability to build abstractions and systems that you can provide guarantees about.
As Michael Cahill says in his thesis which became the SIGMOD paper of the year: “Serializable isolation enables the development of a complex system by composing modules, after verifying that each module maintains consistency in isolation.” It’s this incredibly important ability to compose that makes a system with transactions special.

Q3. FoundationDB claims to provide full ACID transactions. How do you do that?

Dave Rosenthal: In the same basic way as many other transactional databases do. We use a few strategies that tend to work well in distributed systems, such as optimistic concurrency and MVCC. We also, of course, have had to solve some of the fundamental challenges associated with distributed systems and all of the crazy things that can happen in them. Honestly, it’s not very hard to build a distributed transactional database. The hard part is making it work gracefully through failure scenarios and making it run fast.
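As a rough illustration of the optimistic-concurrency part (a toy, single-process sketch; real engines do this across machines with MVCC snapshots):

```python
class OptimisticStore:
    """Toy single-process sketch: remember the versions a transaction read and
    only commit if none of them changed in the meantime; otherwise retry."""
    def __init__(self):
        self.data = {}                      # key -> (version, value)

    def begin(self):
        return {"reads": {}, "writes": {}}

    def get(self, txn, key):
        version, value = self.data.get(key, (0, None))
        txn["reads"][key] = version
        return value

    def set(self, txn, key, value):
        txn["writes"][key] = value

    def commit(self, txn):
        # Validate: every key read must still be at the version we saw.
        for key, seen in txn["reads"].items():
            if self.data.get(key, (0, None))[0] != seen:
                return False                # conflict: the caller retries
        for key, value in txn["writes"].items():
            version = self.data.get(key, (0, None))[0]
            self.data[key] = (version + 1, value)
        return True

db = OptimisticStore()
txn = db.begin()
db.set(txn, "balance", (db.get(txn, "balance") or 0) + 100)
print(db.commit(txn))   # True, unless a concurrent commit touched "balance"
```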

Q4. Is this similar to Oracle NoSQL?

Dave Rosenthal: Not really. Both Oracle NoSQL and FoundationDB provide an automatically-partitioned key-value store with fault tolerance. Both also have a concept of ordering keys (for efficient range operations) though Oracle NoSQL only provides ordering “within a Major Key set”. So, those are the similarities, but there are a bunch of other NoSQL systems with all those properties. The huge difference is that FoundationDB provides for ACID transactions over arbitrary keys and ranges, while Oracle NoSQL does not.

Q5. How would you compare your product offering with respect to NoSQL data stores, such as CouchDB, MongoDB, Cassandra and Riak, and NewSQL such as NuoDB and VoltDB?

Dave Rosenthal: The most obvious response for the NoSQL data stores would be “we have ACID transactions, they don’t”, but the more important difference is in philosophy and strategy.

Each of those products exposes a single data model and interface. Maybe two. We are pursuing a fundamentally different strategy.
We are building a storage substrate that can be adapted, via layers, to provide a variety of data models, APIs, and true flexibility.
We can do that because of our transactional capabilities. CouchDB, MongoDB, Cassandra and Riak all have different APIs, and we talk to companies that run all of those products side-by-side. The NewSQL database players are also offering a single data model, albeit a very popular one: SQL. FoundationDB is offering an ever-increasing number of data models through its “layers”, currently including several popular NoSQL data models, with SQL being the next big one to hit. Our philosophy is that you shouldn’t have to increase the complexity of your architecture by adopting a new NoSQL database each time your engineers need access to a new data model.
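A rough sketch of the layering idea: several data models expressed as thin mappings onto one shared key space (the key scheme here is invented for illustration, not FoundationDB's actual layers):

```python
import json

kv = {}   # stands in for one shared, ordered, transactional key-value space

class DocumentLayer:
    """Maps documents onto keys shaped like doc/<collection>/<id>."""
    def insert(self, collection, doc_id, doc):
        kv[f"doc/{collection}/{doc_id}"] = json.dumps(doc)
    def find(self, collection, doc_id):
        return json.loads(kv[f"doc/{collection}/{doc_id}"])

class GraphLayer:
    """Maps edges onto keys shaped like graph/<src>/<dst>, in the same store."""
    def add_edge(self, src, dst):
        kv[f"graph/{src}/{dst}"] = "1"
    def neighbours(self, src):
        prefix = f"graph/{src}/"
        return [k[len(prefix):] for k in sorted(kv) if k.startswith(prefix)]

docs, graph = DocumentLayer(), GraphLayer()
docs.insert("users", "42", {"name": "Ada"})
graph.add_edge("42", "7")
print(docs.find("users", "42"), graph.neighbours("42"))
```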

Q6. Cloud computing and open source: How does it relate to FoundationDB?

Dave Rosenthal: Cloud computing: FoundationDB has been designed from the beginning to run well in cloud environments that make use of large numbers of commodity machines connected through a network. Probably the most important aspect of a distributed database designed for cloud deployment is exceptional fault tolerance under very harsh and strange failure conditions – the kind of exceptionally unlikely things that can only happen when you have many machines working together with components failing unpredictably. We have put a huge amount of effort into testing FoundationDB in these grueling scenarios, and feel very confident in our ability to perform well in these types of environments. In particular, we have users running FoundationDB successfully on many different cloud providers, and we’ve seen the system keep its guarantees under real-world hardware and network failure conditions experienced by our users.

Open source: Although FoundationDB’s core data storage engine is closed source, our layer ecosystem is open source. The core engine has a very simple feature set and is very difficult to modify properly while maintaining correctness, whereas layers are feature rich and, because they are stateless, much easier to create and modify, which makes them well suited to third-party contributions.

Q7. Please give some examples of use cases where FoundationDB is currently in use. Is FoundationDB used for analyzing Big Data as well?

Dave Rosenthal: Some examples: user data, metadata, user social graphs, geo data, data accessed via ORMs using the SQL layer, metrics collection, etc.

We’ve mostly focused on operational systems, but a few of our customers have built what I would call “big data” applications, which I think of as analytics-focused. The most common use case has been for collecting and analyzing time-series data. FoundationDB is strongest in big data applications that call for lots of random reads and writes, not just big table scans—which many systems can do well.

Q8. Rick Cattell said in a recent interview [2] “there aren’t enough open source contributors to keep projects competitive in features and performance, and the companies supporting the open source offerings will have trouble making enough money to keep the products competitive themselves”. What is your opinion on this?

Dave Rosenthal: People have great ideas for databases all the time. New data models, new query languages, etc.
If nothing else, this NoSQL experiment that we’ve all been a part of the past few years has shown us all the appetite for data models suited to specific problems. They would love to be able to build these tools, open source them, etc.
The problem is that the checklist of practical considerations for a database is huge: Fault tolerance, scalability, a backup solution, management and monitoring, ACID transactions, etc. Add those together and even the simplest concept sounds like a huge project.

Our vision at FoundationDB is that we have done the hard work to build a storage substrate that simultaneously solves all those tricky practical problems. Our engine can be used to quickly build a database layer for any particular application that inherits all of those solutions and their benefits, like scalability, fault tolerance and ACID compliance.

Q9. Nick Heudecker of Gartner predicts that [3] “going forward, we see the bifurcation between relational and NoSQL DBMS markets diminishing over time”. What is your take on this?

Dave Rosenthal: I do think that the lines between SQL and NoSQL will start to blur, and I believe that we are leading that charge. Last year we acquired another database startup, Akiban, that builds an amazing SQL database engine.
In 2014 we’ll be bringing that engine to market as a layer running on top of FoundationDB. That will be a true ANSI SQL database operating as a module directly on top of a transactional “NoSQL” engine, inheriting the operational benefits of our core storage engine – scalability, fault tolerance, ease of operation.

When you run multiple SQL layer modules, you can point many of them at the same key-space in FoundationDB and it’s as if they are all part of the same database, with ACID transactions enforced across the separate SQL layer processes.
It’s very cool. Of course, you can even run the SQL layer on a FoundationDB cluster that’s also supporting other data models, like graph or document. That’s about as blurry as it gets.

———–
Dave Rosenthal is CEO of FoundationDB. Dave started his career in games, building a 3D real-time strategy game with a team of high-school friends that won the 1st annual Independent Games Festival. Previously, Dave was CTO at Visual Sciences, a pioneering web-analytics company that is now part of Adobe. Dave has a degree in theoretical computer science from MIT.

Related Posts
- Operational Database Management Systems. Interview with Nick Heudecker, ODBMS Industry Watch December 16, 2013

Follow ODBMS.org on Twitter: @odbmsorg

 

Mar 2 14

Big Data: Three questions to Aerospike.

by Roberto V. Zicari

“Many tools now exist to run database software without installing software. From vagrant boxes, to one click cloud install, to a cloud service that doesn’t require any installation, developer ease of use has always been a path to storage platform success.”–Brian Bulkowski.

The fifth interview in the “Big Data: three questions to” series of interviews is with Brian Bulkowski, Aerospike co-founder and CTO.

RVZ

Q1. What is your current product offering?

Brian Bulkowski: Aerospike is the first in-memory NoSQL database optimized for flash or solid state drives (SSDs).
In-memory for speed and NoSQL for scale. Our approach to memory is unique – we have built our own file system to access flash, we store indexes in DRAM and you can configure data sets to be in a combination of DRAM or flash. This gives you close to DRAM speeds, the persistence of rotational drives and the price performance of flash.
As next gen apps scale up beyond enterprise scale to “global scale”, managing billions of rows, terabytes of data and processing from 20k to 2 million read/write transactions per second, scaling costs are an important consideration. Servers, DRAM, power and operations – the costs add up, so even developers with small initial deployments must architect their systems with the bottom line in mind and take advantage of flash.
Aerospike is an operational database, a fast key-value store with ACID properties – immediate consistency for single row reads and writes, plus secondary indexes and user defined functions. Values can be simple strings, ints, blobs as well as lists and maps.
Queries are distributed and processed in parallel across the cluster and results on each node can be filtered, transformed, aggregated via user defined functions. This enables developers to enhance key value workloads with a few queries and some in-database processing.
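A very simplified sketch of the "indexes in DRAM, data on flash" split (illustrative only; not Aerospike's storage format or client API):

```python
import os, tempfile

class FlashBackedKV:
    """Keep a small index of key -> (offset, length) in memory (the DRAM part);
    keep the values themselves in an append-only log file (the flash part)."""
    def __init__(self, path):
        self.index = {}                   # lives in DRAM
        self.log = open(path, "ab+")      # lives on the SSD

    def put(self, key, value: bytes):
        self.log.seek(0, os.SEEK_END)
        offset = self.log.tell()
        self.log.write(value)
        self.log.flush()
        self.index[key] = (offset, len(value))

    def get(self, key):
        offset, length = self.index[key]  # one DRAM lookup...
        self.log.seek(offset)
        return self.log.read(length)      # ...then one read from flash

store = FlashBackedKV(os.path.join(tempfile.gettempdir(), "toy_profile_store.log"))
store.put("cookie:abc123", b'{"segment": "sports"}')
print(store.get("cookie:abc123"))
```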

Q2. Who are your current customers and how do they typically use your products?

Brian Bulkowski: We see two use cases – one as an edge database or real-time context store (user profile store, cookie store) and another as a very cost-effective and reliable cache in front of a relational database like MySQL or DB2.

Our customers are some of the biggest names in real-time bidding, cross-channel (display, mobile, video, social, gaming) advertising and digital marketing, including AppNexus, BlueKai, TheTradeDesk and [X+1]. These companies use Aerospike to store real-time user profile information like cookies, device IDs, IP addresses and clickstreams, combined with behavioral segment data calculated using analytics platforms and models run in Hadoop or data warehouses. They choose Aerospike for predictable high performance, where reads and writes consistently (meaning 99% of the time) complete within 2-3 milliseconds.

The second set of customers use us in front of an existing database for more cost-effective and reliable caching. In addition to predictable high performance they don’t want to shard Redis, and they need persistence, high availability and reliability. Some need rack-awareness and cross data center support and they all want to take advantage of Aerospike deployments that are both simpler to manage and more cost-effective than alternative NoSQL databases, in-memory databases and caching technologies.

Q3. What are the main new technical features you are currently working on and why?

Brian Bulkowski: We are focused on ease of use, making development easier – quickly writing powerful, scalable applications – with developer tools and connectors. In our Aerospike 3 offering, we launched indexes and distributed queries, user defined functions for in-database processing, expressive API support, and aggregation queries. Performance continues to improve, with support for today’s highly parallel CPUs, higher density flash arrays, and improved allocators for RAM based in-memory use cases.

Developers love Aerospike because it’s easy to run a service operationally. That scale comes after the developer builds the original application, so developers want samples and connectors that are tested and work easily. Whether that’s a parallel, scalable ETL loader for CSV and JSON, a Hadoop connector to pour insights directly into Aerospike in order to drive hot interface changes, an improved Mac OS X client, or HTTP/REST interfaces, developers need the ability to write their core application code to use Aerospike easily.

Many tools now exist to run database software without installing software. From vagrant boxes, to one click cloud install, to a cloud service that doesn’t require any installation, developer ease of use has always been a path to storage platform success.

Related Posts

- Big Data: Three questions to McObject, ODBMS Industry Watch, February 14, 2014

- Big Data: Three questions to VoltDB. ODBMS Industry Watch, February 6, 2014.

- Big Data: Three questions to Pivotal. ODBMS Industry Watch, January 20, 2014.

- Big Data: Three questions to InterSystems. ODBMS Industry Watch, January 13, 2014.

- Operational Database Management Systems. Interview with Nick Heudecker, ODBMS Industry Watch, December 16, 2013.

Resources

- Gartner – Magic Quadrant for Operational Database Management Systems (Access the report via registration). Authors: Donald Feinberg, Merv Adrian, Nick Heudecker, Date Published: 21 October 2013.

- ODBMS.org: free resources on NoSQL Data Stores
Blog Posts | Free Software | Articles, Papers, Presentations | Documentations, Tutorials, Lecture Notes | PhD and Master Thesis.

Follow ODBMS.org on Twitter: @odbmsorg

##

Feb 20 14

Big Data and NoSQL: Interview with Joe Celko

by Roberto V. Zicari

“The real problem is not collecting the data, even at insanely high speeds; the real problem is acting on it in time. This is where we have things like automatic stock trading systems. The database is integrated rather than separated from the application.” –Joe Celko.

I have interviewed Joe Celko, a well-known database expert, on the challenges of Big Data and when it makes sense to use non-relational databases.

RVZ

Q1. Three areas make today’s new data different from the data of the past: Velocity, Volume and Variety. Why?

Joe Celko: I did a keynote at a PostgreSQL conference in Prague with the title “Our Enemy, the Punch Card” on the theme that we had been mimicking the old data models with the new technology. This is no surprise; the first motion pictures were done with a single camera that never moved, to mimic a seat at a theater.
Eventually, “moving picture shows” evolved into modern cinema. This is the same pattern in data. It is physically impossible to make punch card and magnetic tape data move as fast as fiber optics, or hold as many bits. More importantly, the cost per bit dropped by orders of magnitude. Now it was practical to computerize everything! And since we can do it, and do it cheaply, we will do it.
But what we found out is that this new, computerizable (is that a word?) data is not always traditionally structured data.

Q2. What about data Veracity? Is this a problem as well?

Joe Celko: Oh yes! Data quality is an issue at a higher level than the database. David Loshin, Tom Redman and Jack Olson are some of the people in that area.

Q3. When information is changing faster than you can collect and query it, it simply cannot be treated the same as static data. What are the solutions available to solve this problem?

Joe Celko: I have to do a disclaimer here: I have done videos for Streambase and Kx Systems.
There is an old joke about two morons trying to fix a car. Q: “Is my signal light working?” A: “Yes. No. Yes. No. Yes. No. ..” but it summarizes the basic problem with streaming data. This is what the literature calls streaming data or “complex events.”
The model is that tables are replaced by streams of data, but the query language in Streambase is an extended SQL dialect.
The Victory of SELECT-FROM-WHERE!
The Kx products are more like C or other low-level languages.
The real problem is not collecting the data, even at insanely high speeds; the real problem is acting on it in time. This is where we have things like automatic stock trading systems. The database is integrated rather than separated from the application.
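A hedged sketch of the "tables become streams" model: a continuous query expressed as a rolling aggregate over events as they arrive (plain Python, not Streambase's SQL dialect or Kx's q):

```python
from collections import deque
from statistics import mean

def rolling_average(prices, window=3):
    """Consume a stream of prices and emit a rolling mean over the last
    `window` ticks -- acting on each event as it arrives, not after the fact."""
    recent = deque(maxlen=window)
    for price in prices:
        recent.append(price)
        yield mean(recent)

tick_stream = [101.0, 101.5, 100.9, 102.2]          # stands in for a live feed
for avg in rolling_average(tick_stream):
    print(round(avg, 2))                             # 101.0, 101.25, 101.13, 101.53
```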

Q4. Old storage and access models do not work for big data. Why?

Joe Celko: First of all, the old stuff does not hold enough data. How would you put even a day’s worth of Wal-Mart sales on punch cards? Sequential access will not work; we need parallelism. We do not have time to index the data; the traditional tree indexing requires extra time, usually O(log2(n)). Our best bets are perfect hashing functions and special hardware.

Q5. What different ways are available to store and access petabytes and exabytes of data?

Joe Celko: Today, we are still stuck with moving disk. Optical storage is still too expensive and slow to write.
Solid state disk is still too expensive, but dropping fast. My dream is really cheap solid state drives that have lots of processors in the drive which monitor a small subset of the data. We send out a command “Hey, minions, find red widgets and send me your results!” and it happens all at once. The ultimate Map-Reduce model in the hardware!
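That picture is essentially a parallel filter followed by a combine step; a toy sketch with one worker per data partition:

```python
from multiprocessing import Pool

partitions = [   # imagine each list living on its own "smart" drive
    [{"sku": 1, "colour": "red"}, {"sku": 2, "colour": "blue"}],
    [{"sku": 3, "colour": "red"}, {"sku": 4, "colour": "green"}],
    [{"sku": 5, "colour": "red"}],
]

def find_red_widgets(partition):
    """Each 'minion' scans only its own subset of the data."""
    return [row["sku"] for row in partition if row["colour"] == "red"]

if __name__ == "__main__":
    with Pool(processes=len(partitions)) as pool:
        per_drive = pool.map(find_red_widgets, partitions)        # map...
    print(sorted(sku for part in per_drive for sku in part))      # ...reduce: [1, 3, 5]
```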

    Q6. Not all data can fit into a relational model, including genetic data, semantic data, and data generated by social networks. How do you handle data variety?

Joe Celko: We have graph databases for social networks. I was a math major, so I love them. Graph theory has a lot of good problems and algorithms we can steal, just like SQL uses set theory and logic. But genetic data and semantics do not have a mature theory behind them. The real way to handle the diversity is new tools, starting at the conceptual level. How many times have you seen someone write 1960s COBOL file systems in SQL?

Q7. What are the alternative storage, query, and management frameworks needed by certain kinds of Big Data?

    Joe Celko: As best you can, do not scare your existing staff with a totally new environment.

Q8. Columnar data stores, graph databases, streaming databases, analytic databases: how do you classify and evaluate all of these NewSQL/NoSQL solutions?

    Joe Celko: First decide what the problem is, then pick the tool. One of my war stories was consulting at a large California company that wanted to put their labor relations law library on their new DB2 database. It was all text, and used by lawyers. Lawyers do not know SQL. Lawyers do not want to learn SQL. But they do know Lexis and WestLaw text query tools. They know labor law and the special lingo. Programmers do not know labor law. Programmers do not want to learn labor law. But the programmers can set up a textbase for the lawyers.

    Q9. If you were a user, how would you select the “right” data management tools and technology for the job?

    Joe Celko: There is no generic answer. Oh, there will be a better answer by the time you get into production. Welcome to IT!

    —————–
Joe Celko served 10 years on the ANSI/ISO SQL Standards Committee and contributed to the SQL-89 and SQL-92 standards. Mr. Celko is the author of a series of books on SQL and RDBMS for Morgan Kaufmann. He is an independent consultant based in Austin, Texas. He has written over 1300 columns in the computer trade and academic press, mostly dealing with data and databases.

    Related Posts

    - “Setting up a Big Data project. Interview with Cynthia M. Saracco”. ODBMS Industry Watch, January 27, 2014

    Related Resources

“Joe Celko’s Complete Guide to NoSQL: What Every SQL Professional Needs to Know about Non-Relational Databases” – Paperback: 244 pages, Morgan Kaufmann; 1st edition (October 31, 2013), ISBN-10: 0124071929

    “Big Data: Challenges and Opportunities” (.PDF), Roberto V. Zicari, Goethe University Frankfurt, ODBMS.org, October 5, 2012

  • Follow ODBMS.org on Twitter: @odbmsorg
Feb 14 14

    Big Data: Three questions to McObject.

    by Roberto V. Zicari

    “In a nutshell, pipelining is a programming technique that combines functions from the database system’s library of vector-based functions into an assembly line of processing for market data, with the output of one function becoming input for the next.”–Steven T. Graves.

The fourth interview in the “Big Data: three questions to” series is with Steven T. Graves, President and CEO of McObject.

    RVZ

    Q1. What is your current product offering?

    Steven T. Graves: McObject has two product lines. One is the eXtremeDB product family. eXtremeDB is a real-time embedded database system built on a core in-memory database system (IMDS) architecture, with the eXtremeDB IMDS edition representing the “standard” product. Other eXtremeDB editions offer special features and capabilities such as an optional SQL API, high availability, clustering, 64-bit support, optional and selective persistent storage, transaction logging and more.

    In addition, our eXtremeDB Financial Edition database system targets real-time capital markets systems such as algorithmic trading and risk management (and has its own Web site). eXtremeDB Financial Edition comprises a super-set of the individual eXtremeDB editions (bundling together all specialized libraries such as clustering, 64-bit support, etc.) and offers features including columnar data handling and vector-based statistical processing for managing market data (or any other type of time series data).

    Features shared across the eXtremeDB product family include: ACID-compliant transactions; multiple application programming interfaces (a native and type-safe C/C++ API; SQL/ODBC/JDBC; native Java, C# and Python interfaces); multi-user concurrency with an optional multi-version concurrency control (MVCC) transaction manager; event notifications; cache prioritization; and support for multiple database indexes (b-tree, r-tree, kd-tree, hash, Patricia trie, etc.). eXtremeDB’s footprint is small, with an approximately 150K code size. eXtremeDB is available for a wide range of server, real-time operating system (RTOS) and desktop operating systems, and McObject provides eXtremeDB source code for porting.

    McObject’s second product offering is the Perst open source, object-oriented embedded database system, available in all-Java and all-C# (.NET) versions. Perst is small (code size typically less than 500K) and very fast, with features including ACID-compliant transactions; specialized collection classes (such as a classic b-tree implementation; r-tree indexes for spatial data; database containers optimized for memory-only access, etc.); garbage collection; full-text search; schema evolution; a “wrapper” that provides a SQL-like interface (SubSQL); XML import/export; database replication, and more.

    Perst also operates in specialized environments. Perst for .NET includes support for .NET Compact Framework, Windows Phone 8 (WP8) and Silverlight (check out our browser-based Silverlight CRM demo, which showcases Perst’s support for storage on users’ local file systems). The Java edition supports the Android smartphone platform, and includes the Perst Lite embedded database for Java ME.

    Q2. Who are your current customers and how do they typically use your products?

    Steven T. Graves: eXtremeDB initially targeted real-time embedded systems, often residing in non-PC devices such as set-top boxes, telecom switches or industrial controllers.
There are literally millions of eXtremeDB-based devices deployed by our customers; a few examples are set-top boxes from DIRECTV (eXtremeDB is the basis of an electronic programming guide); F5 Networks’ BIG-IP network infrastructure (eXtremeDB is built into the devices’ proprietary embedded operating system); and BAE Systems (avionics in the Panavia Tornado GR4 combat jet). A recent new customer in telecom/networking is Compass-EOS, which has released the first photonics-based core IP router, using eXtremeDB High Availability to manage the device’s control plane database.

Addition of “enterprise-friendly” features (support for SQL, Java, 64-bit, MVCC, etc.) drove eXtremeDB’s adoption for non-embedded systems that demand fast performance. Examples include software-as-a-service provider hetras GmbH (eXtremeDB handles the most performance-intensive queries in its cloud-based hotel management system); Transaction Network Services (eXtremeDB is used in a highly scalable system for real-time phone number lookups/routing); and MeetMe.com (formerly MyYearbook.com – eXtremeDB manages data in social networking applications).

    In the financial industry, eXtremeDB is used by a variety of trading organizations and technology providers. Examples include the broker-dealer TradeStation (McObject’s database technology is part of its next-generation order execution system); Financial Technologies of India, Ltd. (FTIL), which has deployed eXtremeDB in the order-matching application used across its network of financial exchanges in Asia and the Middle East; and NSE.IT (eXtremeDB supports risk management in algorithmic trading).

    Users of Perst are many and varied, too. You can find Perst in many commercial software applications such as enterprise application management solutions from the Wily Division of CA. Perst has also been adopted for community-based open source projects, including the Frost client for the Freenet global peer-to-peer network. Some of the most interesting Perst-based applications are mobile. For example, 7City Learning, which provides training for financial professionals, gives students an Android tablet with study materials that are accessed using Perst. Several other McObject customers use Perst in mobile medical apps.

    Q3. What are the main new technical features you are currently working on and why?

    Steven T. Graves: One feature we’re very excited about is the ability to pipeline vector-based statistical functions in eXtremeDB Financial Edition – we’ve even released a short video and a 10-page white paper describing this capability. In a nutshell, pipelining is a programming technique that combines functions from the database system’s library of vector-based functions into an assembly line of processing for market data, with the output of one function becoming input for the next.

    This may not sound unusual, since almost any algorithm or program can be viewed as a chain of operations acting on data.
But this pipelining has a unique purpose and a powerful result: it keeps market data inside the CPU cache while the data is being worked on.
    Without pipelining, the results of each function would typically be materialized outside cache, in temporary tables residing in main memory. Handing interim results back and forth “across the transom” between CPU cache and main memory imposes significant latency, which is eliminated by pipelining. We’ve been improving this capability by adding new statistical functions to the library. (For an explanation of pipelining that’s more in-depth than the video but shorter than the white paper, check out this article on the financial technology site Low-Latency.com.)
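
Purely as an illustration of the chained style, the idea looks something like the following; the quotes table and the seq_diff / seq_moving_avg function names are hypothetical placeholders, not the actual eXtremeDB Financial Edition library calls.

    -- Each vector function consumes the output of the function nested inside
    -- it, so the tick vector can stay in CPU cache instead of being
    -- materialized in a temporary table between steps.
    SELECT seq_moving_avg(seq_diff(close_price), 20) AS smoothed_returns
    FROM   quotes
    WHERE  symbol = 'IBM';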

    We are also adding to the capabilities of eXtremeDB Cluster edition to make clustering faster and more flexible, and further simplify cluster administration. Improvements include a local tables option, in which database tables can be made exempt from replication, but shareable through a scatter/gather mechanism. Dynamic clustering, added in our recent v. 5.0 upgrade, enables nodes to join and leave clusters without interrupting processing. This further simplifies administration for a clustering database technology that counts minimal run-time maintenance as a key benefit. On selected platforms, clustering now supports the Infiniband switched fabric interconnect and Message Passing Interface (MPI) standard. In our tests, these high performance networking options accelerated performance more than 7.5x compared to “plain vanilla” gigabit networking (TCP/IP and Ethernet).

    Related Posts

- Big Data: Three questions to VoltDB.
ODBMS Industry Watch, February 6, 2014

    - Big Data: Three questions to Pivotal.
    ODBMS Industry Watch, January 20, 2014.

- Big Data: Three questions to InterSystems.
    ODBMS Industry Watch, January 13, 2014.

    - Cloud based hotel management– Interview with Keith Gruen.
    ODBMS Industry Watch, July 25, 2013

    - In-memory database systems. Interview with Steve Graves, McObject.
    ODBMS Industry Watch, March 16, 2012

    Resources

ODBMS.org: Free resources on Big Data, Analytics, Cloud Data Stores, Graph Databases, NewSQL, NoSQL, Object Databases.

  • Follow ODBMS.org on Twitter: @odbmsorg
  • ##

    Feb 6 14

    Big Data: Three questions to VoltDB.

    by Roberto V. Zicari

    “Some of our current priorities include: augmenting capabilities in the area of real-time analytics – especially around online operations, SQL functionality, integrations with messaging applications, statistics and monitoring procedures, and enhanced developer features.”– Ryan Betts.

The third interview in the “Big Data: three questions to” series is with Ryan Betts, CTO of VoltDB.

    RVZ

    Q1. What are your current product offerings?

    Ryan Betts: VoltDB is a high-velocity database platform that enables developers to build next generation real-time operational applications. VoltDB converges all of the following:

    • A dynamically scalable in-memory relational database delivering high-velocity, ACID-compliant OLTP
    • High-velocity data ingestion, with millions of writes per second
    • Real-time analytics, to enable instant operational visibility at the individual event level
    • Real-time decisioning, to enable applications to act on data when it is most valuable—the moment it arrives

    Version 4.0 delivers enhanced in-memory analytics capabilities and expanded integrations. VoltDB 4.0 is the only high performance operational database that combines in-memory analytics with real-time transactional decision-making in a single system.
    It gives organizations an unprecedented ability to extract actionable intelligence about customer and market behavior, website interactions, service performance and much more by performing real-time analytics on data moving at breakneck speed.

    Specifically, VoltDB 4.0 features a tenfold throughput improvement of analytic queries and is capable of writes and reads on millions of data events per second. It provides large-scale concurrent, multiuser access to data, the ability to factor current incoming data into analytics, and enhanced SQL support. VoltDB 4.0 also delivers expanded integrations with an organization’s existing data infrastructure such as message queue systems, improved JDBC driver and monitoring utilities such as New Relic.

    Q2. Who are your current customers and how do they typically use your products?

    Ryan Betts: Customers use VoltDB for a wide variety of data-management functions, including data caching, stream processing and “on the fly” ETL.
    Current VoltDB customers represent industries ranging from telecommunications to e-commerce, power & energy, financial services, online gaming, retail and more.

    Following are common use cases:

    • Optimized, real-time information delivery
    • Personalized audience targeting
    • Real-time analytics dashboards
    • Caching server replacements
    • Session / user management
    • Network analysis & monitoring
    • Ingestion and on-the-fly-ETL

    Below are the customers that have been publicly announced thus far:

    Eagle Investments
    Conexient
    OpenNet
    Sakura
    Shopzilla
    Social Game Universe
    Yellowhammer

    Q3. What are the main new technical features you are currently working on and why?

    Ryan Betts: Our customers are reaping the benefits of VoltDB in the areas of transactional decision-making and generating real-time analytics on that data—right at the moment it’s coming in.

    Therefore, some of our current priorities include: augmenting capabilities in the area of real-time analytics – especially around online operations, SQL functionality, integrations with messaging applications, statistics and monitoring procedures, and enhanced developer features.

Although VoltDB has proven to be the industry’s “easiest to use” database, we are also continuing to invest quite heavily in making the process of building and deploying real-time operational applications with VoltDB even easier. Among other things, we are extending the power and simplicity that we offer developers for building high-throughput applications to applications with more modest throughput requirements.

    —————
    Related Posts

    - Setting up a Big Data project. Interview with Cynthia M. Saracco.
    ODBMS Industry Watch, January 27, 2014

    - Big Data: Three questions to Pivotal.
    ODBMS Industry Watch, January 20, 2014.

- Big Data: Three questions to InterSystems.
    ODBMS Industry Watch, January 13, 2014.

    - Operational Database Management Systems. Interview with Nick Heudecker.
    ODBMS Industry Watch, December 16, 2013.

    Resources

ODBMS.org: Free resources on Big Data, Analytics, Cloud Data Stores, Graph Databases, NewSQL, NoSQL, Object Databases.

  • Follow ODBMS.org on Twitter: @odbmsorg
  • ##

    Jan 27 14

    Setting up a Big Data project. Interview with Cynthia M. Saracco.

    by Roberto V. Zicari

    “Begin with a clear definition of the project’s business objectives and timeline, and be sure that you have appropriate executive sponsorship. The key stakeholders need to agree on a minimal set of compelling results that will impact your business; furthermore, technical leaders need to buy into the overall feasibility of the project and bring design and implementation ideas to the table.”–Cynthia M. Saracco.

How easy is it to set up a Big Data project? On this topic I have interviewed Cynthia M. Saracco, senior solutions architect at IBM’s Silicon Valley Laboratory. Cynthia is an expert in Big Data, analytics, and emerging technologies. She has more than 25 years of software industry experience.

    RVZ

    Q1. How best is to get started with a Big Data project?

    Cynthia M. Saracco: Begin with a clear definition of the project’s business objectives and timeline, and be sure that you have appropriate executive sponsorship.
    The key stakeholders need to agree on a minimal set of compelling results that will impact your business; furthermore, technical leaders need to buy into the overall feasibility of the project and bring design and implementation ideas to the table. At that point, you can evaluate your technical options for the best fit. Those options might include Hadoop, a relational DBMS, a stream processing engine, analytic tools, visualization tools, and other types of software. Often, a combination of several types of software is needed for a single Big Data project. Keep in mind that every technology has its strengths and weaknesses, so be sure you understand enough about the technologies you’re inclined to use before moving forward.

    If you decide that Hadoop should be part of your project, give serious consideration to using a distribution that packages commonly needed components into a single bundle so you can minimize the time required to install and configure your environment. It’s also helpful to keep in mind the existing skills of your staff and seek out offerings that enable them to be productive quickly.
    Tools, applications, and support for common scripting and query languages all contribute to improved productivity. If your business application needs to integrate with existing analytical tools, DBMSs, or other software, look for offerings that have some built-in support for that as well.

    Finally, because Big Data projects can get pretty complex, I often find it helpful to segment the work into broad categories and then drill down into each to create a solid plan. Examples of common technical tasks include collecting data (perhaps from various sources), preparing the data for analysis (which can range from simple format conversions to more sophisticated data cleansing and enrichment operations), analyzing the data, and rendering or sharing the results of that analysis with business users or downstream applications. Consider scalability and performance needs in addition to your functional requirements.

    Q2. What are the most common problems and challenges encountered in Big Data projects?

Cynthia M. Saracco: Lack of appropriately scoped objectives and lack of required skills are two common problems. Regarding objectives, you need to find an appropriate use case that will impact your business and tailor your project’s technical work to meet the business goals of that project efficiently. Big Data is an exciting, rapidly evolving technology area, and it’s easy to get sidetracked experimenting with technical features that may not be essential to solving your business problem. While such experimentation can be fun and educational, it can also result in project delays as well as deliverables that are off target. In addition, without well-scoped business objectives, the technical staff may end up chasing a moving target.

    Regarding skills, there’s high demand for data scientists, architects, and developers experienced with Big Data projects. So you may need to decide if you want to engage a service provider to supplement in-house skills or if you want to focus on growing (or acquiring) new in-house skills. Fortunately, there are a number of Big Data training options available today that didn’t exist several years ago. Online courses, conferences, workshops, MeetUps, and self-study tutorials can help motivated technical professionals expand their skill set. However, from a project management point of view, organizations need to be realistic about the time required for staff to learn new Big Data technologies. Giving someone a few days or weeks to master Hadoop and its complementary offerings isn’t very realistic. But really, I see the skills challenge as a point-in-time issue. Many people recognize the demand for Big Data skills and are actively expanding their skills, so supply will grow.

    Q3. Do you have any metrics to define how good is the “value” that can be derived by analyzing Big Data?

    Cynthia M. Saracco: Most organizations want to focus on their return on investment (ROI). Even if your Big Data solution uses open source software, there are still expenses involved for designing, developing, deploying, and maintaining your solution. So what did your business gain from that investment?
    The answer to that question is going to be specific to your application and your business. For example, if a telecommunications firm is able to reduce customer churn by 10% as a result of a Big Data project, what’s that worth? If an organization can improve the effectiveness of an email marketing campaign by 20%, what’s that worth? If an organization can respond to business requests twice as quickly, what’s that worth? Many clients have these kinds of metrics in mind as they seek to quantify the value they have derived — or hope to derive — from their investment in a Big Data project.

    Q4. Is Hadoop replacing the role of OLAP (online analytical processing) in preparing data to answer specific questions?

    Cynthia M. Saracco: More often, I’ve seen Hadoop used to augment or extend traditional forms of analytical processing, such as OLAP, rather than completely replace them. For example, Hadoop is often deployed to bring large volumes of new types of information into the analytical mix — information that might have traditionally been ignored or discarded. Log data, sensor data, and social data are just a few examples of that. And yes, preparing that data for analysis is certainly one of the tasks for which Hadoop is used.

Q5. IBM is offering BigInsights and Big SQL. What are they?

    Cynthia M. Saracco: InfoSphere BigInsights is IBM’s Hadoop-based platform for analyzing and managing Big Data. It includes Hadoop, a number of complementary open source projects (such as HBase, Hive, ZooKeeper, Flume, Pig, and others) and a number of IBM-specific technologies designed to add value.

    Big SQL is part of BigInsights. It’s IBM’s SQL interface to data stored in BigInsights. Users can create tables, query data, load data from various sources, and perform other functions. For a quick introduction to Big SQL, read this article.

Q6. How does it compare to RDBMS technology? When’s it most useful?

    Cynthia M. Saracco: Big SQL provides standard SQL-based query access to data managed by BigInsights. Query support includes joins, unions, sub-queries, windowed aggregates, and other popular capabilities. Because Big SQL is designed to exploit the Hadoop ecosystem, it introduces Hadoop-specific language extensions for certain SQL statements.
    For example, Big SQL supports Hive and HBase for storage management, so a Big SQL CREATE TABLE statement might include clauses related to data formats, field delimiters, SerDes (serializers/deserializers), column mappings, column families, etc. The article I mentioned earlier has some examples of these, and the product InfoCenter has further details.
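
For flavor, here is a hedged Hive-style example of the kind of DDL and query Big SQL accepts; the sales table and its columns are invented, and the exact clauses vary by release.

    -- A delimited text table whose data lives in the Hadoop file system.
    CREATE TABLE sales (
        order_id   INT,
        product_id INT,
        amount     DOUBLE
    )
    ROW FORMAT DELIMITED FIELDS TERMINATED BY ','
    STORED AS TEXTFILE;

    -- An ordinary query, including a windowed aggregate of the sort
    -- mentioned above.
    SELECT product_id,
           SUM(amount) OVER (PARTITION BY product_id) AS product_total
    FROM   sales;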

    In many ways, Big SQL can serve as an easy on-ramp to Hadoop for technical professionals who have a relational DBMS background. Big SQL is good for organizations that want to exploit in-house SQL skills to work with data managed by BigInsights. Because Big SQL supports JDBC and ODBC, many traditional SQL-based tools can work readily with Big SQL tables, which can also make Big Data easier to use by a broader user community.

However, Big SQL doesn’t turn Hadoop — or BigInsights — into a relational DBMS. Commercial relational DBMSs come with built-in, ACID-based transaction management services and model data largely in tabular formats. They support granular levels of security via SQL GRANT and REVOKE statements. In addition, some RDBMSs support 3GL applications developed in “legacy” programming languages such as COBOL. These are some examples of capabilities that aren’t part of Big SQL.

Q7. What are some of its current limitations?

    Cynthia M. Saracco: The current level of Big SQL included in BigInsights V2.1.0.1 enables users to create tables but not views.
    Date/time data is supported through a full TIMESTAMP data type, and some common SQL operations supported by relational DBMSs aren’t available or have specific restrictions.
    Examples include INSERT, UPDATE, DELETE, GRANT, and REVOKE statements. For more details on what’s currently supported in Big SQL, skim through the InfoCenter.

Q8. How does BigInsights differ from, and add value to, open source Hadoop?

    Cynthia M. Saracco: As I mentioned earlier, BigInsights includes a number of IBM-specific technologies designed to add value to the open source technologies included with the product. Very briefly, these include:
    - A Web console with administrative facilities, a Web application catalog, customizable dashboards, and other features.
    - A text analytic engine and library that extracts phone numbers, names, URLs, addresses, and other popular business artifacts from messages, documents, and other forms of textual data.
    - Big SQL, which I mentioned earlier.
    - BigSheets, a spreadsheet-style tool for business analysts.
- Web-accessible sample applications for importing and exporting data, collecting data from social media sites, executing ad hoc queries, and monitoring the cluster. In addition, application accelerators (tool kits with dozens of pre-built software artifacts) are available for those working with social data and machine data.
    - Eclipse tooling to speed development and testing of BigInsights applications, new text extractors, BigSheets functions, SQL-based applications, Java applications, and more.
    - An integrated installation tool that installs and configures all selected components across the cluster and performs a system-wide health check.
    - Connectivity to popular enterprise software offerings, including IBM and non-IBM RDBMSs.
- Platform enhancements focusing on performance, security, and availability. These include options to use an alternative, POSIX-compliant distributed file system (GPFS-FPO) and an alternative MapReduce layer (Adaptive MapReduce) that features Platform Symphony’s advanced job scheduler, workload manager, and other capabilities.

    You might wonder what practical benefits these kinds of capabilities bring. While that varies according to each organization’s usage patterns, one industry analyst study concluded that BigInsights lowers total cost of ownership (TCO) by an average of 28% over a three-year period compared with an open source-only implementation.

    Finally, a number of IBM and partner offerings support BigInsights, which is something that’s important to organizations that want to integrate a Hadoop-based environment into their broader IT infrastructure. Some examples of IBM products that support BigInsights include DataStage, Cognos Business Intelligence, Data Explorer, and InfoSphere Streams.

Q9. Could you give some examples of successful Big Data projects?

    Cynthia M. Saracco: I’ll summarize a few that have been publicly discussed so you can follow links I provide for more details. An energy firm launched a Big Data project to analyze large volumes of data that could help it improve the placement of new wind turbines and significantly reduce response time to business user requests.
    A financial services firm is using Big Data to process large volumes of text data in minutes and offer its clients more comprehensive information based on both in-house and Internet-based data.
An online marketing firm is using Big Data to improve the performance of its clients’ email campaigns.
    And other firms are using Big Data to detect fraud, assess risk, cross-sell products and services, prevent or minimize network outages, and so on. You can find a collection of videos about Big Data projects undertaken by various organizations; many of these videos feature users speaking directly about their Big Data experiences and the results of their projects.
And a recent report, Analytics: The real-world use of big data, contains further examples, based on the results of a survey of more than 1,100 businesses that the Saïd Business School at the University of Oxford conducted with IBM’s Institute for Business Value.

    Qx Anything else to add?

    Cynthia M. Saracco: Hadoop isn’t the only technology relevant to managing and analyzing Big Data, and IBM’s Big Data software portfolio certainly includes more than BigInsights (its Hadoop-based offering). But if you’re a technologist who wants to learn more about Hadoop, your best bet is to work with the software. You’ll find a number of free online courses in the public domain, such as those at Big Data University. And IBM offers a free copy of its Quick Start Edition of BigInsights as a VMWare image or an installable image to help you get started with minimal effort.

    —–
    Cynthia M. Saracco is a senior solutions architect at IBM’s Silicon Valley Laboratory, specializing in Big Data, analytics, and emerging technologies. She has more than 25 years of software industry experience, has written three books and more than 70 technical papers, and holds six patents.
    —————
    Related Posts

    - Big Data: Three questions to Pivotal. ODBMS Industry Watch, January 20, 2014.

- Big Data: Three questions to InterSystems. ODBMS Industry Watch, January 13, 2014.

    - Operational Database Management Systems. Interview with Nick Heudecker. ODBMS Industry Watch, December 16, 2013.

    - On Big Data and Hadoop. Interview with Paul C. Zikopoulos. ODBMS Industry Watch, June 10, 2013.

    Resources

- What’s the big deal about Big SQL? by Cynthia M. Saracco, Senior Software Engineer, IBM, and Uttam Jain, Software Architect, IBM.

    - ODBMS.org: Free resources on Big Data and Analytical Data Platforms:
| Blog Posts | Free Software | Articles | Lecture Notes | PhD and Master Thesis |

  • Follow ODBMS.org on Twitter: @odbmsorg
  • ##
    ——-

    Jan 20 14

    Big Data: Three questions to Pivotal.

    by Roberto V. Zicari

    “We are investing heavily in bringing SQL as the standard interface for accessing in real time (GemFire XD) and interactive (HAWQ) response times enabling enterprises to leverage their existing workforce for Hadoop processing.”–Susheel Kaushik.

I start this new year with a new series of short interviews with leading vendors of Big Data technologies. I call the series “Big Data: three questions to”. The second such interview is with Susheel Kaushik, Senior Director, Product Management at Pivotal.

    RVZ

Q1. What is your current product offering?

Susheel Kaushik: Pivotal’s suite of products converges Apps, Data and Analytics for enterprise customers.

      Apps:

Industry-leading application frameworks and runtimes focused on enterprise needs. Pivotal App frameworks provide a rich set of product components that enable rapid application development, including support for messaging, database services, and robust analytic and visualization instrumentation.
    Pivotal tc Server: Lean, Powerful Apache Tomcat compatible application server that maximizes performance, scales easily, and minimizes cost and overhead.
    Pivotal Web Server: High Performance, Scalable and Secure HTTP server.
Pivotal RabbitMQ: Fast and dependable message server that supports a wide range of use cases including reliable integration, content-based routing and global data delivery, and high-volume monitoring and data ingestion.
    Spring: Takes the complexity out of Enterprise Java.
    vFabric: Provides a proven runtime platform for your Spring applications.

      Data:

Disruptive Big Data products: MPP and column-store database, in-memory data processing, and Hadoop.
Pivotal Greenplum Database: A massively parallel, large-scale analytic data warehouse platform to manage, store and analyze petabytes of data.
Pivotal GemFire: A real-time distributed data store with linear scalability and continuous uptime capabilities.
Pivotal HD with HAWQ and GemFire XD: Commercially supported Apache Hadoop. HAWQ brings enterprise-class SQL capabilities and GemFire XD brings real-time data access to Hadoop.
    Pivotal CF: Next generation enterprise PaaS – Pivotal CF makes applications the new unit of deployment and control (not VMs or middleware), radically improving developer productivity and operator agility.

      Analytics:

Accelerate and help enterprises extract insights from their data assets. Pivotal analytic products offer advanced query and visualization capabilities to business analysts.

    Q2. Who are your current customers and how do they typically use your products?

    Susheel Kaushik: We have customers in all business verticals – Finance, Telco, Manufacturing, Energy, Medical, Retail to name a few.
    Some of the typical uses of the products are:
    Big Data Store: Today, we find enterprises are NOT saving all of the data – cost efficiency is one of the reasons. Hadoop brings the price of the storage tier to a point where storing large amounts of data is not cost prohibitive. Enterprises now have mandates to not throw away any data in the hope that they can later unlock the potential insights from the data.
Extend life of existing EDW systems: Today most EDW systems are challenged on the storage and processing side to provide a cost-effective solution internally. Most of the data stored in the EDW is not analyzed, and the Pivotal Big Data products provide a platform for customers to offload some of the data storage and analytics processing. This offloaded processing, typically ETL-like workloads, is ideal for the Big Data platforms. As a result, processing times are reduced, and the ETL-relieved EDW now has excess capacity to satisfy its needs for some more years – thereby extending its life.
Data Driven Applications: Some of the more advanced enterprises already have petabytes of data in varying formats and are looking to derive insights from that data in real time or interactive time. These customers are building scalable applications that leverage the insights to assist the business in decisioning (automated or manual).
In addition, customers value the deployment choices provided by the Pivotal products: some prefer bare-metal infrastructure, whereas others prefer cloud deployment (on premises or in public clouds).

    Q3. What are the main new technical features you are currently working on and why?

    Susheel Kaushik: Here are some of the key technical features we are working on.
    1. Better integration with HDFS
a. HDFS is becoming a cost-effective storage interface for enterprise customers. Pivotal is investing in making the integration with HDFS even better. Enterprise customers demand security and performance from HDFS, and we are actively investing in these capabilities.
    In addition, storing the data in a single platform reduces the data duplication costs along with the data management costs to manage the multiple copies.
    2. Integration with other Open Source projects
a. We are investing in Spring and Cloud Foundry to integrate better with Hadoop. Spring and Cloud Foundry already have a healthy ecosystem. Making Hadoop easier to use for these users increases the talent pool available to build next-generation data applications on Hadoop data.
    3. SQL as a standard interface
a. SQL is the most expressive language for data analysis, and enterprise customers have already made massive investments in training their workforce on SQL. We are investing heavily in bringing SQL as the standard interface for accessing Hadoop data at real-time (GemFire XD) and interactive (HAWQ) response times, enabling enterprises to leverage their existing workforce for Hadoop processing (see the sketch after this list).
    4. Improved Manageability and Operability
a. Managing and operating Hadoop clusters is not easy, and some of our enterprise customers do not have the in-house capabilities to build and manage these large-scale clusters. We are innovating to provide a simplified interface to manage and operate these clusters.
    5. Improved Quality of Service
a. Resource contention is a challenge in any multi-tenant environment. We are actively working to make resource sharing in a multi-tenant environment easier. We already have products in the portfolio (MoreVRP) that allow customers to exercise fine-grained control at the CPU and I/O level. We are making active investments to bring this capability across multiple processing paradigms.
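
As a sketch of what “SQL as the standard interface” to Hadoop data can look like (item 3 above), here is a hedged HAWQ-style example; the external-table definition, the pxf:// location URI and the profile option are illustrative assumptions, not exact product syntax.

    -- Expose HDFS-resident click data as an external table, then query it
    -- with ordinary SQL at interactive response times.
    CREATE EXTERNAL TABLE clickstream_ext (
        user_id BIGINT,
        url     TEXT,
        ts      TIMESTAMP
    )
    LOCATION ('pxf://namenode:51200/data/clickstream?PROFILE=HdfsTextSimple')
    FORMAT 'TEXT' (DELIMITER ',');

    SELECT url, COUNT(*) AS hits
    FROM   clickstream_ext
    WHERE  ts >= DATE '2014-01-01'
    GROUP  BY url
    ORDER  BY hits DESC
    LIMIT  10;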
    —————
    Related Posts

- Big Data: Three questions to InterSystems. ODBMS Industry Watch, January 13, 2014.

    - Operational Database Management Systems. Interview with Nick Heudecker. ODBMS Industry Watch, December 16, 2013.

    Resources

ODBMS.org: Free resources on Big Data Analytics, NewSQL, NoSQL, Object Database Vendors: Blog Posts | Commercial | Open Source

  • Follow ODBMS.org on Twitter: @odbmsorg
  • ##