ODBMS Industry Watch: Trends and Information on Big Data, New Data Management Technologies, Data Science and Innovation

Graphs vs. SQL. Interview with Michael Blaha (April 11, 2013)

“For traditional business applications, the schema is known in advance, so there is no need to use a graph database, which has weaker enforcement of integrity. If instead your data conforms at best to a generic model, then a schema-oriented approach does not provide much; a graph-oriented approach is more natural and easier to develop against.”— Michael Blaha

Graphs, SQL, and databases: on these topics I have interviewed our expert Michael Blaha.


Q1. A lot of today’s data can be modeled as a heterogeneous set of “vertices” connected by a heterogeneous set of “edges”: people, events, and items related by knowing, attending, purchasing, and so on. This world view is not new, as the object-oriented community has a similar perspective on data. What is, in your opinion, the main difference with respect to a graph-centric data world?

Michael Blaha: This world view is also not new because this is the approach Charlie Bachman took with network databases many years ago. I can think of at least two major distinguishing aspects of graph-centric databases relative to relational databases.
(1) Graph-centric databases are occurrence-oriented while relational databases are schema-oriented. If you know the schema in advance and must ensure that data conforms to it, then a schema-oriented approach is best. Examples include traditional business applications, such as flight reservations, payroll, and order processing.
(2) Graph-centric databases emphasize navigation. You start with a root object and pull together a meaningful group of related objects. Relational databases permit navigation via joins, but such navigation is more cumbersome and less natural. Many relational database developers are not adept at performing such navigation.
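To make the navigation contrast concrete, here is a minimal sketch in plain Java; the class names (Person, Order, Product) and the SQL in the comment are invented for illustration and are not taken from the interview.

import java.util.ArrayList;
import java.util.List;

// Hypothetical classes for illustration only.
class Product { String name; Product(String name) { this.name = name; } }
class Order { List<Product> items = new ArrayList<>(); }
class Person {
    String name;
    List<Order> orders = new ArrayList<>();
    Person(String name) { this.name = name; }
}

public class NavigationSketch {
    public static void main(String[] args) {
        Person root = new Person("Alice");
        Order o = new Order();
        o.items.add(new Product("Book"));
        root.orders.add(o);

        // Graph-style navigation: start at a root object and follow references.
        for (Order order : root.orders) {
            for (Product p : order.items) {
                System.out.println(root.name + " ordered " + p.name);
            }
        }
        // The relational equivalent would navigate via joins, roughly:
        //   SELECT pr.name FROM person p
        //   JOIN orders o ON o.person_id = p.id
        //   JOIN order_item oi ON oi.order_id = o.id
        //   JOIN product pr ON pr.id = oi.product_id;
    }
}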

Q2. The development of scalable graph applications, such as those at Facebook and Twitter, requires a different kind of database than SQL. Most of these large Web companies have built their own internal graph databases. But what about other enterprise applications?

Michael Blaha: The key is the distinction between being occurrence-oriented and schema-oriented. For traditional business applications, the schema is known in advance, so there is no need to use a graph database, which has weaker enforcement of integrity. If instead your data conforms at best to a generic model, then a schema-oriented approach does not provide much; a graph-oriented approach is more natural and easier to develop against.

Q3: Marko Rodriguez and Peter Neubauer in an interview say that “the benefit of the graph comes from being able to rapidly traverse structures to an arbitrary depth (e.g., tree structures, cyclic structures) and with an arbitrary path description (e.g. friends that work together, roads below a certain congestion threshold). We call this data processing pattern, the graph traversal pattern. This mind set is much different from the set theoretic notions of the relational database world. In the world of graphs, everything is seen as a walk’s traversal”. What is your take on this?

Michael Blaha: That’s a great point and one that I should have mentioned in my answer to Q1. Relational databases have poor handling of recursion. I will note that the vendor products have extensions for this, but they aren’t natural and are an awkward graft onto SQL. Graph databases, in contrast, are great at handling recursion. This is a big advantage of graph databases for applications where recursion arises.
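As a rough, hedged illustration of that point (the Part hierarchy and table names below are invented, not from the interview), compare a recursive traversal over in-memory object references with the recursive common table expression that standard SQL needs for the same reachability question:

import java.util.ArrayList;
import java.util.List;

// Hypothetical part/subpart hierarchy for illustration.
class Part {
    String name;
    List<Part> subparts = new ArrayList<>();
    Part(String name) { this.name = name; }
}

public class RecursionSketch {
    // Graph-style recursion: follow references to arbitrary depth.
    static void printAll(Part part, String indent) {
        System.out.println(indent + part.name);
        for (Part sub : part.subparts) {
            printAll(sub, indent + "  ");
        }
    }

    public static void main(String[] args) {
        Part engine = new Part("engine");
        Part piston = new Part("piston");
        piston.subparts.add(new Part("piston ring"));
        engine.subparts.add(piston);
        printAll(engine, "");

        // The SQL equivalent needs a recursive common table expression,
        // which many developers find an awkward graft onto the language:
        //   WITH RECURSIVE bom(id) AS (
        //     SELECT id FROM part WHERE name = 'engine'
        //     UNION ALL
        //     SELECT p.id FROM part p JOIN bom b ON p.parent_id = b.id)
        //   SELECT * FROM bom;
    }
}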

Q4. Is there any synergy between graphs and conventional relational databases?

Michael Blaha: Graphs are also important for relational databases, and more so than some people may realize…
— Graphs are clearly relevant for data modeling. An Entity-Relationship data model portrays the database structure as a graph.
— Graphs are also important for expressing database constraints. The OMG’s Object Constraint Language (OCL) expresses database constraints using graph traversal. The OCL is a textual language, so it can be tedious to use, but it is powerful. The Common Warehouse Metamodel (CWM) specifies many fine constraints with the OCL and is a superb example of proper OCL usage.
— Even though the standard does not emphasize it, the OCL is also an excellent language for database traversal as a starting point for database queries. Bill Premerlani and I explained this in a past book (Object-Oriented Modeling and Design for Database Applications).
— Graphs are also helpful for characterizing the complexity of a relational database design. Robert Hilliard presents an excellent technique for doing this in his book (Information-Driven Business).

Q5. You say that graphs are important for data modeling, but in the end you do not store graphs in a relational database, you store tables, and you need joins to link them together… Graph databases, in contrast, cache what is on disk into memory, and vendors claim that this makes for a highly reusable in-memory cache. What is your take on this?

Michael Blaha: Relational databases play many optimization games under the covers, so in general possible performance differences are often not obvious. I would say that the difference in expressiveness is what determines suitable applications for graph and relational databases; performance is a secondary issue, except for very specialized applications.

Q6: What are advantages of SQL relative to graph databases?

Michael Blaha: Here are some advantages of SQL:

— SQL has a widely-accepted standard.
— SQL is a set-oriented language. This is good for mass-processing of set-oriented data.
— SQL databases have powerful query optimizers for handling set-oriented queries, such as for data warehouses.
— The transaction processing behavior of relational databases (the ACID properties) is robust, powerful, and sound.
— SQL has extensive support for controlling data access.

Q7: What are disadvantages of SQL relative to graph databases?

Michael Blaha: Here are some disadvantages of SQL:

— SQL is awkward for processing the explosion of data that can result from starting with an object and traversing a graph.
— SQL, at best, awkwardly handles recursion.
— SQL has lots of overhead for multi-user locking that can make it difficult to access individual objects and their data.
— Advanced and specialty applications often require less rigorous transaction processing with reduced overhead and higher throughput.

Q8: For which applications is SQL best? For which applications are graph databases best?

Michael Blaha:
— SQL is schema-based. Define the structure in advance and then store the data. This is a good approach for conventional data processing such as many business and financial systems.
— Graph databases are occurrence-based. Store data and relationships as they are encountered, and do not presume that there is an encompassing structure. This is a good approach for some scientific and engineering applications, as well as for data acquired from Web browsers and search engines.
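A minimal sketch of that distinction, in plain Java with invented names (nothing here is taken from a specific product): the schema-based style fixes the structure up front, while the occurrence-based style records whatever attributes and relationships are encountered.

import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

public class SchemaVsOccurrence {
    // Schema-based: the structure is declared in advance, like a table definition.
    static class Invoice {
        String number;
        double amount;
        Invoice(String number, double amount) { this.number = number; this.amount = amount; }
    }

    // Occurrence-based: a vertex is just a bag of properties plus edges,
    // recorded as the data is encountered, with no up-front structure.
    static class Vertex {
        Map<String, Object> properties = new HashMap<>();
        List<Vertex> edges = new ArrayList<>();
    }

    public static void main(String[] args) {
        Invoice inv = new Invoice("2013-001", 99.50);    // conforms to the declared schema

        Vertex person = new Vertex();
        person.properties.put("name", "Alice");
        Vertex page = new Vertex();
        page.properties.put("url", "http://example.org");
        person.edges.add(page);                          // relationship stored as encountered
        System.out.println(inv.number + " / " + person.properties.get("name"));
    }
}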

Q9. What about RDF quad/triple stores?

Michael Blaha: I have not paid much attention to this. RDF is an entity-attribute-value approach. From what I can tell, it seems occurrence-based rather than schema-based, so my earlier comments apply.
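For readers unfamiliar with the entity-attribute-value shape Blaha refers to, a triple store simply records statements of the form (subject, predicate, object). Here is a tiny hedged sketch in Java with invented data and no particular RDF library assumed.

import java.util.ArrayList;
import java.util.List;

public class TripleSketch {
    // One RDF-style statement: entity (subject), attribute (predicate), value (object).
    record Triple(String subject, String predicate, String object) {}

    public static void main(String[] args) {
        List<Triple> store = new ArrayList<>();
        store.add(new Triple("ex:Alice", "ex:knows", "ex:Bob"));
        store.add(new Triple("ex:Alice", "ex:worksFor", "ex:Acme"));

        // No schema is declared in advance; any predicate can appear at any time,
        // which is why the occurrence-based remarks above apply.
        store.stream()
             .filter(t -> t.subject().equals("ex:Alice"))
             .forEach(t -> System.out.println(t.predicate() + " -> " + t.object()));
    }
}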

Michael Blaha is a partner at Modelsoft Consulting Corporation.
Blaha received his doctorate from Washington University in St. Louis, Missouri, in Chemical Engineering, with a dissertation about databases. Both his academic background and working experience involve engineering and computer science. He is an alumnus of the GE R&D Center in Schenectady, New York, where he worked for eight years. Since 1993, Blaha has been a consultant and trainer in the areas of modeling, software architecture, database design, and reverse engineering. Blaha has authored six U.S. patents, four books, and many papers. He is an editor for IEEE Computer as well as a member of the IEEE-CS publications board. He has also been active in the IEEE Working Conferences on Reverse Engineering.

Related Posts

On Big Graph Data. August 6, 2012

Applying Graph Analysis and Manipulation to Data Stores. June 22, 2011


ODBMS.org Resources on Graphs and Data Stores
Blog Posts | Free Software | Articles, Papers, Presentations | Tutorials, Lecture Notes

Follow ODBMS.org on Twitter: @odbmsorg

On Impedance Mismatch. Interview with Reinhold Thurner (August 27, 2012)

“Many enterprises sidestep applications with “Shadow IT” to solve planning, reporting and analysis problems” — Reinhold Thurner.

I am coming back to the topic of “Impedance Mismatch”.
I have interviewed one of our experts, Dr. Reinhold Thurner, founder of Metasafe GmbH in Switzerland.


Q1. In a recent interview, José A. Blakeley and Rowan Miller of Microsoft said that “the impedance mismatch problem has been significantly reduced, but not entirely eliminated.” Do you agree?

Thurner: Yes, I agree, with some reservations and only for the special case of the impedance mismatch between a conceptual model, a relational database, and an OO program. However, even an advanced ORM is not really a solution for the more general case of complex data, which affects any programmer (not only OO programmers) and especially end users.

Q2. Could you please explain better what you mean here?

Thurner: My reservations concern the tools and the development process: several standalone tools (model designer, mapper, code generator, schema loader) are connected by intermediate files. It is difficult, if not impossible, to develop a transparent model transformation which relieves the developer from the necessity to “think” on both levels – the original model and the transformed model – at the same time. The conceptual models can be “painted” easily, but they cannot be “executed” and tested with test data.
They are practically disjoint from the instance data. It takes a lot of discipline to prevent changes in the data structures from being applied directly to the final database, with the consequence that the conceptual model is lost.
To paraphrase a document about ADO.NET: “Most significant applications involve a conceptual design phase early in the application development lifecycle. Unfortunately, however, the conceptual data model is captured inside a database design tool that has little or no connection with the code and the relational schema used to implement the application. The database design diagrams created in the early phases of the application life cycle usually stay “pinned to a wall”, growing increasingly disjoint from the reality of the application implementation with time.”

Q3. You are criticizing the process and the tools – what is the alternative?

Thurner: I compare this tool architecture with the idea of an “integrated view of conceptual modeling, databases and CASE” (actually the title of one of your books). The basic ideas already existed in the early 1990s but were not realized, because the means to implement a “CASE database” were missing: modeling concepts (OMG), languages (Java, C#), frameworks (Eclipse), big cheap memory, powerful CPUs, big screens, etc. Today we are in a much better position, and it is now feasible to create a data platform (i.e., a database for CASE) for tool integration. As José A. Blakeley argues, ‘(…) modern applications and data services need to target a higher-level conceptual model based on entities and relationships (…)’. A modern data platform is a prerequisite to support such a concept.

Q4. Could you give us some examples of typical (impedance mismatch) problems still existing in the enterprise? How are they practically handled in the enterprise?

Thurner: As a consequence of the problems with the impedance mismatch, some applications don’t use database technology at all, or they develop a thick layer of proprietary services which is in fact a sort of private DBMS.
Many enterprises sidestep applications with “Shadow IT” to solve planning, reporting and analysis problems – i.e., spreadsheets instead of databases, mail for multi-user access and data exchange, security by obscurity, and a lot of manual copy and paste.
Another important area is development tools: development tools deal with a large number of highly interconnected artifacts which must be managed in configurations and versions. These artifacts are still stored in files and libraries, and some in relational databases with a thick layer on top. A proper repository would provide better services for a tool developer and would help to create products which are more flexible and easier to use.
Data management and information dictionary: IT governance (COBIT) stipulates that a company should maintain an “information dictionary” which contains all “information assets”, their logical structure, the physical implementations, and the responsibilities (RACI charts, data stewards). The Common Warehouse Metamodel (OMG) describes the model of the most common types of data stores – which is a good start – but companies with several DBMSs, hundreds of databases, servers, and programs accessing thousands of tables and IMS segments need a proper database to store the instances to make the information model “real”. Users of such a dictionary (designers, programmers, testers, integration services, operations, problem management, etc.) need an easy-to-use query language to access these data in an explorative manner.

Q5. If ORM technology cannot solve this kind of problem, what are the alternatives?

Thurner: The essence of ORM technology is to create a bridge between the existing ecosystem of databases based on the relational model and the conceptual model. The relational model is not the one and only possible approach to persist data. Data storage technology has moved up the ladder from sequential files to index-sequential, to multi-index, to CODASYL, to hierarchical (IMS), and to today’s market leader, the RDBMS. This is certainly not the end, and the future seems to become very colorful. As Michael Stonebraker explains, “In summary, there may be a substantial number of domain-specific database engines with differing capabilities off into the future.” See his paper “One Size Fits All – An Idea Whose Time Has Come and Gone”.
ADO.NET has been described as “a part of the broader Microsoft Data Access vision” and covers a specific subset of applications. Is the “other part” the “executable conceptual model” which was mentioned by Peter Chen in a discussion with José Blakeley about “The Future of Database Systems”?
I am convinced that an executable conceptual model will play an important role for the aforementioned problems: a DBMS with an entity-relationship model implements the conceptual model without an impedance mismatch. To succeed, however, it needs all the properties José mentioned, such as queries, transactions, access rights, and integrated tools.

Q6. You started a company which developed a system called Metasafe-Repository. What is it?

Thurner: It started long ago with a first version developed in C, which was used, for example, in a reengineering project to manage a few hundred types, one million instances, and about five million bidirectional relationships. In 2006 we decided to redevelop the system from scratch in Java, and to build the necessary tools with the Eclipse framework. We started with the basic elements – a multi-level architecture based on the entity-relationship model, integration of models and instance data, ACID transactions, versioning, and user access rights. During development the system morphed from the initial idea of a repository service to a complete DBMS. We developed a set of entirely model-driven tools – modeler, browser, import/export utilities, Excel interface, ADO driver for BIRT, etc.
Metasafe has a multilevel structure: an internal metamodel, the global data model, external models as subsets (views) of the global data model, and the instance data – in OMG terms it stretches from M3 to M0. All types (M2, M1) are described by meta-attributes as defined in the metamodel. User access rights to models and instance data are tied to the external models. Entity instances (M0) may exist in several manifestations (catalog, version, variant). An extension of the data model, e.g. by additional data types, entity types, relationship types, or submodels, can be defined using the metaModeler tool (or via the API by a program). From the moment the model changes are committed, the database is ready to accept instance data for the new types without unloading/reloading the database.

Q7. Is the Metasafe repository the solution to the impedance mismatch problem?

Thurner: It is certainly a substantial step forward, because we made the conceptual model and the database definition one and the same. We take the conceptual model at its word: if an ‘Order has Orderdetails’, we tell the database to create two entity types, ‘Order’ and ‘Orderdetails’, and the relation ‘has’ between them. This way Metasafe implements an executable conceptual model with all the required properties of a real database management system: an open API, versioning, “declarative, model-related queries and transactions”, etc. Our own set of tools, and especially the integration of BIRT (the Eclipse Business Intelligence and Reporting Tools), demonstrate how it can be done. Our graphical erSQL query builder is even integrated into the BIRT designer. The erSQL queries are translated on the fly, and BIRT accesses our database without any intermediate files.
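To illustrate the general idea of an executable conceptual model, here is a hypothetical sketch of the concept only; the classes ConceptualDb, EntityType, and their methods are invented for this illustration and do not represent Metasafe's actual API.

// Hypothetical illustration of an executable conceptual model; all names invented.
import java.util.HashMap;
import java.util.Map;

public class ConceptualModelSketch {
    static class EntityType {
        final String name;
        EntityType(String name) { this.name = name; }
    }

    static class ConceptualDb {
        Map<String, EntityType> types = new HashMap<>();
        EntityType defineEntityType(String name) {
            EntityType t = new EntityType(name);
            types.put(name, t);          // the model itself becomes the database definition
            return t;
        }
        void defineRelationship(String name, EntityType from, EntityType to) {
            System.out.println(from.name + " " + name + " " + to.name);
        }
    }

    public static void main(String[] args) {
        ConceptualDb db = new ConceptualDb();
        // "Order has Orderdetails" becomes two entity types and one relationship,
        // with no separate mapping layer in between.
        EntityType order = db.defineEntityType("Order");
        EntityType details = db.defineEntityType("Orderdetails");
        db.defineRelationship("has", order, details);
    }
}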

Q8: What is the API of the Metasafe repository?

Thurner: Metasafe provides an object-oriented Java API for all methods required to search, create, read, update, and delete the elements of the database – i.e., schemas, user groups/users, entities, relationships, and their attributes – at both the type and the instance level. All the tools of Metasafe (modeler, browser, import/export, query builder, etc.) are built with this public API. This approach has led to an elaborate set of methods to support an application programmer. The erSQL query builder, and also the query translator and processor, were implemented with this API. An erSQL query can be embedded in a Java program to retrieve a result set (including its metadata) or to export the result set.
In early versions we had a C# version in parallel, but we discontinued this branch when we started the development of the tools based on Eclipse RCP. The reimplementation of the core in C# would be relatively easy. I think the tools could also be reimplemented, because they are entirely model-driven.

Q9. How does Metasafe’s query language differ from the Microsoft Entity Framework’s built-in query capabilities (i.e., Language Integrated Query, LINQ)?

Thurner: It is difficult to compare, because Metasafe’s erSQL query language was designed with respect to the special nature of an entity-relationship model with heavily cross-linked information. So the erSQL query language maps directly to the conceptual model. Even “end users” can create queries with the graphical query builder by pointing and clicking on the graphical representation of the conceptual model to identify the path through the model and collect the information of interest.

The queries are translated on the fly and processed by the query processor. The validation and translation of a query into a command structure of the query processor is a matter of milliseconds. The query processor returns result sets of metadata and typed instance data. The query result can also be exported as an Excel table or as an XML file. In “read mode” the result of each retrieval step (instance objects and their attributes) is returned to the invoking program instead of building the complete result set. A query represents a sort of “user” model and is also documented graphically. “End users” can easily create queries and retrieve data from the database. erSQL and the graphical query builder are fully integrated in BIRT to create reports on the fly.
The present version supports only information retrieval. We plan to extend it with a ” … for update” feature which locks all selected entity instances for further operation.
For example, an update query for {an order and all its order items and products} would lock the order until backout or commit.

Q10. There are concerns about the performance and the overhead generated by ORM technology. Is performance an issue for Metasafe?

Thurner: Performance is always an issue when the number of concurrent users and the size and complexity of the data grow. The system works quite well for medium-size systems with a few hundred types, a few million instances, and a few GBs. The performance depends on the translation of the logical requests into physical access commands and on the execution of the physical access to the persistence layer. Metasafe uses a very limited subset of the functionality of an RDBMS (currently SQL Server, Derby, Oracle) for persistence. Locking, transactions, and multi-user management are handled by Metasafe; the locking tables are kept in memory. After a commit it writes all changes in one burst to the database. We could of course use an in-memory DBMS to gain performance. For example, VoltDB, with its direct transaction access, could be integrated easily and would certainly lead to superior performance.
We also have another kind of performance in mind – the user’s performance. For many applications the number of milliseconds to execute a transaction is less important than the ability to quickly create or change a database and to create and launch queries in a matter of minutes. Metasafe is especially helpful for this kind of application.

Q11. What problems is Metasafe designed to solve?

Thurner: Metasafe is designed as a generic data platform for medium-sized (XX GB) model-driven applications. The primary purpose is support for applications with large, complex, and volatile data structures, such as tools, models, catalogs, or process managers. Metasafe could be used to replace some legacy repositories.
Metasafe is certainly the best data platform (repository) for the construction of an integrated development environment. Metasafe can also serve as a DBMS for business applications.
We are also evaluating the possibility of using the Metasafe DBMS as a data platform for portable devices such as phones and tablet computers; this could be a real killer application for application developers.

Q12. How do you position Metasafe in the market?

Thurner: I had the vision of an entity-relationship-based database system as a future data platform and decided to develop Metasafe to a really useful level without the pressure of the market (namely, first-time users). Now we have our product at the necessary level of quality and we are planning the next steps. It could be the “open source approach” for a limited version, or integration into a larger organization.
We have a number of applications and POCs, but we have no substantial customer base yet, which would require an adequate support and sales organization. But we do not intend to convert a successful development setup into a mediocre service and sales organization. We are not under time pressure and are looking at a number of possibilities.

Q13. How can the developer community test your system?

Thurner: We provide an evaluation version upon request.

Related Posts

Do we still have an impedance mismatch problem? – Interview with José A. Blakeley and Rowan Miller. by Roberto V. Zicari on May 21, 2012


“Implementing the Executable Conceptual Model (ECM)” (download as .pdf),
by Dr. Reinhold Thurner, Metasafe.

ODBMS.org Free Resources on:
Entity Framework (EF) Resources
ORM Technology
Object-Relational Impedance Mismatch


How good is UML for Database Design? Interview with Michael Blaha (July 25, 2011)

“The tools are not good at taking changes to a model and generating incremental design code to alter a populated database.” — Michael Blaha

The Unified Modeling Language™ – UML – is OMG’s most-used specification.
UML is a de facto standard for object modeling, and it is often used for database design as well. But how good is UML really for the task of database conceptual modeling?
I asked Dr. Michael Blaha, one of the leading authorities on databases and data modeling, a few questions.


Q1. Why use UML for database design?

Blaha: Often the most difficult aspect of software development is abstracting a problem and thinking about it clearly — that is the purpose of conceptual data modeling.
A conceptual model lets developers think deeply about a system, understand its core essence, and then choose a proper representation. A sound data model is extensible, understandable, ready to implement, less prone to errors, and usually performs well without special tuning effort.

The UML is a good notation for conceptual data modeling. The representation stands apart from implementation choices, be they a relational database, an object-oriented database, files, or some other mechanism.

Q2. What are the main weaknesses of UML for database design? And how do you cope with them in practice?

Blaha: First consider object databases. The design of object database code is similar to the design of OO programming code. The UML class model specifies the static data structure. The most difficult implementation issue is the weak support in many object database engines for associations. The workaround depends on object database features and the application architecture.

Now consider relational databases. Relational database tools do not support the UML. There is no technical reason for this, but several cultural reasons. One is that there is a divide between the programming and database communities; each has its own jargon, style, and history and pays little attention to the other.
Also the UML creators focused on unifying programming notation, but spent little time talking to the database community.
The bottom line is that the relational database tools do not support the UML and the UML tools do not support relational databases. In practice, I usually construct a conceptual model with a UML tool (so that I can think deeply and abstractly).
Then I rekey the model into a database tool (so that I can generate schema).
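As a small, hedged example of what that rekeying step typically produces (the class and table names are invented, and this is only one of several possible mappings), a one-to-many association in the UML class model usually becomes a foreign key in the generated schema:

// Conceptual model (UML-style): Customer 1 ---- * Order
// One possible relational design generated from it; all names are invented.
public class UmlToSchemaSketch {
    static final String DDL =
        "CREATE TABLE customer (\n" +
        "  customer_id INTEGER PRIMARY KEY,\n" +
        "  name        VARCHAR(100) NOT NULL\n" +
        ");\n" +
        "CREATE TABLE customer_order (\n" +
        "  order_id    INTEGER PRIMARY KEY,\n" +
        // the one-to-many association becomes a foreign key on the "many" side
        "  customer_id INTEGER NOT NULL REFERENCES customer(customer_id),\n" +
        "  order_date  DATE\n" +
        ");";

    public static void main(String[] args) {
        System.out.println(DDL);
    }
}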

Q3. Even if you have a sound high-level UML design, what else can go wrong?

Blaha: I do lots of database reverse engineering for my consulting clients, mostly for relational database applications because that’s what’s used most often in practice. I start with the database schema and work backwards to a conceptual model. I published a paper 10 years ago with statistics on what goes wrong.

In practice, I would say that about 25% of applications have a solid conceptual model, 50% have a mediocre conceptual model, and 25% are just downright awful. Given that a conceptual model is the foundation for an application, you can see why many applications go awry.

In practice, about 50% of applications have a professional database design and 50% are substantially flawed. It’s odd to see so many database design mistakes, given the wide availability of database design tools. It’s relatively easy to take a conceptual model and generate a database design. This illustrates that the importance of software engineering has not yet reached many developers.

Of course, there can always be flaws in programming logic and user interface code, but these kinds of flaws are easier to correct if there is a sound conceptual model underlying the application and if the model is implemented well with a database schema.

Q4. And specifically for object databases?

Blaha: An object database is nothing special when it comes to the benefits of a sound model and software engineering practice. A carefully considered conceptual model gives you a free hand to choose the most appropriate development platform.

One of my past books (Object-Oriented Modeling and Design for Database Applications) vividly illustrated this point by driving object-oriented data models into different implementation targets, specifically relational databases, object databases, and flat files.

Q5. What are most common pitfalls?

Blaha: It is difficult to construct a robust conceptual model. A skilled modeler must quickly learn the nuances of a problem domain and be able to meld problem content with data abstractions and data patterns.

Another pitfall concerns agile development. Developers must work quickly, deliver often, obtain feedback, and build on prior results to evolve an application. I have seen too many developers not take the principles of agile development to heart and become bogged down by ponderous development of interminable scope.

Another pitfall is that some developers are sloppy with database design. Nowadays there really is no excuse for that, as tools can generate database code. Object-oriented CASE tools can generate programming stubs that can seed an object database.
For relational database projects, I first construct an object-oriented model, then re-enter the design into a relational database tool, and finally generate the database schema. (The UML data modeling notation is nearly isomorphic with the modeling language in most relational database design tools.)

Q6. In your experience, how do you handle the situation where a UML conceptual database design is done and a database is implemented using that design, but later updates are made to the implementation without considering the original conceptual design? What do you do in such cases?

Blaha: The more common situation is that an application gradually evolves and the software engineering documentation (such as the conceptual model) is not kept up to date.
With a lack of clarity for its intellectual focus, an application gradually degrades. Eventually there has to be a major effort to revamp the application and clean it up, or replace the application with a new one.

The database design tools are good at taking a model and generating the initial database design.
The tools are not good at taking changes to a model and generating incremental design code to alter a populated database.
Thus much manual effort is needed to make changes as an application evolves and keep documentation up to date. However, the alternative of not doing so is an application that eventually becomes a mess and is unmaintainable.

Michael Blaha is a partner at Modelsoft Consulting Corporation.
Dr. Blaha is recognized as one of the world’s leading authorities on databases and data modeling. He has more than 25 years of experience as a consultant and trainer in conceiving, architecting, modeling, designing, and tuning databases for dozens of major organizations around the world. He has authored six U.S. patents, six books, and many papers. Dr. Blaha received his doctorate from Washington University in St. Louis and is an alumnus of GE Global Research in Schenectady, New York.

Related Resources

OMG UML Resource Page.

Object-Oriented Design of Database Stored Procedures, By Michael Blaha, Bill Huth, Peter Cheung

Models, By Michael Blaha

Universal Antipatterns, By Michael Blaha

Patterns of Data Modelling (Database Systems and Applications), Blaha, Michael, CRC Press, May 2010, ISBN 1439819890

Do we really need a standard for Object Databases? (April 17, 2009)

If you recall, in February 2006 the Object Management Group (OMG) decided to develop a “4th generation” standard for object databases in order to facilitate broader adoption of standards-based object database technology.

To this end, the OMG had set up the Object Database Technology Working Group (ODBT WG) and acquired the rights to develop new OMG specifications based on the works of the disbanded Object Data Management Group (ODMG), which issued the last ODMG 3.0 standard in 2001.

However, no significant progress has been made so far…

This is despite some interesting discussions that took place in 2008.

On a first analysis, this is the result of a lack of active participation from vendor companies.

So the question to address at this point is: Do we really need a standard for Object Databases?

OMG ODBTWG next steps (December 16, 2008)

This is a short note related to the OMG ODBTWG meeting held on December 9, 2008.

During the meeting there was a consensus that the OMG’s Semantic Meta Object Facility (“semantic MOF” or “S-MOF”) would be a good place to start for the object model in the Object Database Standard RFP.

Mike Card is planning to publish a rough draft of an OMG RFP for the new database standard in advance of the March 2009 OMG meeting in Washington DC.

RFP stands for Request for Proposals; OMG technology adoptions revolve around the RFP.
More info on the OMG Technology Adoption Process.

OMG is hosting an Object Database Standard Definition Scope meeting in Santa Clara (December 5, 2008)

I have received a note from Mike Card that I would like to share with you.

“The OMG is hosting an Object Database Standard Definition Scope meeting in Santa Clara, CA at the Hyatt Regency on Tuesday afternoon, December 9th.

The purpose of this meeting will be to define what the scope of the new object database standard should be.

We have already done some work in this area but more remains to be done.
Our goal is to complete the definition of what will and will not be included in the scope of the new standard at this meeting. Once we have defined what will and will not be included, I can begin work on a draft OMG Request For Proposal (RFP).
The RFP is important because this is the mechanism by which the OMG generates standards – an RFP is put out there and a group of vendors who intend to implement the final standard responds to the RFP with a standard.
So, we cannot get the ball rolling until we get the RFP out there, and we are getting close. Once the RFP is put out by the OMG, then the “real work” begins where object database vendors intending to submit and other interested parties begin working together to develop a response to the RFP.
It is this response that will become the successor to ODMG 3.0.

The agenda for this meeting will be as follows:

1300-1310 Welcome and introductory comments (Mike Card)
1310-1330 Review of scoping consensus thus far and db4o comments from last meeting (Mike Card)
1330-1630 Discussion of scope areas to be included or excluded (all participants)
1630-1700 Wrap-up and discussion of next steps (Mike Card)

We got some excellent feedback from db4o at our last meeting on these topics and we would like input from other vendors as well.

We very much hope to see you there! There is a $150 registration fee for this event, to register please visit the registration page

There should be a link there soon to register for this event. Thanks!

Michael P. Card
Syracuse Research Corporation “

For a summary of the work done so far by the OMG on the definition of a new object database standard, please see my interview with Mike Card.

LINQ: the best option for a future Java query API? (October 7, 2008)

My interview with Mike Card has triggered an intense discussion (still ongoing) on the pros and cons of considering LINQ as the best option for a future Java query API.

There is a consensus that a common query mechanism for ODBMSs is needed.

However, there is quite a disagreement on how this should be done. In particular, some see LINQ as a solution, provided that LINQ is also made available for Java. Others, on the contrary, do not like LINQ and would prefer a vendor-neutral solution, for example one based on SBQL.

You can follow the discussion here.

I have listed here some useful resources published on ODBMS.ORG related to this discussion:

Erik Meijer, José Blakeley
The Microsoft perspective on ORM
An interview in ACM Queue Magazine with Erik Meijer and José Blakeley. With LINQ (language-integrated query) and the Entity Framework, Microsoft divided its traditional ORM technology into two parts: one part that handles querying (LINQ) and one part that handles mapping (Entity Framework). | September 2008 |

Panel Discussion “ODBMS: Quo Vadis?”
Panel discussion with Mike Card, Jim Paterson, and Kazimierz Subieta on their views on some critical questions related to Object Databases: Where are Object Database Systems going? Are Relational database systems becoming Object Databases?
Do we need a standard for Object Databases? Why did ODMG not succeed?

Java Object Persistence: State of the Union PART II
Panel discussion with Jose Blakeley (Microsoft), Rick Cattell (Sun Microsystems), William Cook (University of Texas at Austin), Robert Greene (Versant), and Alan Santos (Progress). The panel addressed the ever open issue of the impedance mismatch.

Java Object Persistence: State of the Union PART I
Panel discussion with Mike Keith: EJB co-spec lead, main architect of Oracle Toplink ORM, Ted Neward: Independent consultant, often blogging on ORM and persistence topics, Carl Rosenberger: lead architect of db4objects, open source embeddable object database. Craig Russell: Spec lead of Java Data Objects (JDO) JSR, architect of entity bean engine in Sun’s appservers prior to Glassfish, on their views on the current State of the Union of object persistence with respect to Java.

Stack-Based Approach (SBA) and Stack-Based Query Language (SBQL)
Kazimierz Subieta, Polish-Japanese Institute of Information Technology
Introduction to object-oriented concepts in programming languages and databases, SBA and SBQL

The Object-Relational Impedance Mismatch
Scott Ambler, IBM. Scott explores the technical and the cultural impedance mismatch between the relational and the object world.

ORM Smackdown – Transcript
Ted Neward, Oren “Ayende” Eini. Transcripts of the Panel discussion “ORM Smackdown” on different viewpoints on Object-Relational Mapping (ORM) systems, courtesy of FranklinsNet.

OOPSLA Panel Objects and Databases
William Cook et al. Transcript of a high-ranking panel on objects and databases at the OOPSLA conference 2006, with representatives from BEA, db4objects, GemStone, Microsoft, Progress, Sun, and Versant.

LINQ is the best option for a future Java query API (August 27, 2008)

A conversation with Mike Card.

I have interviewed Mike Card on the latest developments in the OMG working group which aims at defining a new standard for Object Database Systems.

Mike works with Syracuse Research Corporation (SRC) and is involved in object databases and their application to challenging problems, including pattern recognition. He chairs the ODBT group in OMG to advance object database standardization.

R. Zicari: Mike, you recently chaired an OMG ODBTWG meeting on June 24, 2008. What kind of synergy do you see outside OMG in relation to your work?

Mike Card: We think it is likely that the OMG would need to participate in the Java Community Process (JCP) in order to write a Java Specification Request (JSR) to add LINQ functionality to Java.

R. Zicari: There has been a lot of discussion lately on the merits of SBQL vs. LINQ as a possible query API standard for object databases. Did you discuss this issue at the meeting?

M. Card: I began the technical part of our meeting by reviewing Professor Subieta’s comparison of SBQL and LINQ. It was my understanding from this comparison that LINQ was technically capable of performing any query that could be performed by SBQL, and I wanted to know if the participants saw this the same way. They agreed in general, and believed that even if LINQ were only able to do 90% of what SBQL could do in terms of data retrieval, it would still be the way to go.

R. Zicari: Could you please go a bit more in detail on this?

M. Card: Sure. At the meeting it was pointed out that Prof. Subieta had noted in his comparison that he had not shown queries using features that are not a part of LINQ, such as fixed-point arithmetic, numeric ranges, etc.

These are language features that would be familiar to users of Ada but which are not found in languages like C++, C#, and Java so they would likely not be missed and would be considered esoteric.

It was also pointed out that the query examples chosen by Prof. Subieta in his comparison were all “projections” (relational term meaning a query or operation that produces as its output table a subset of the input table, usually containing only some of the input table’s columns).

A query like this by definition will rely on iteration, and this will show the inherent expressive power of SBQL since the abstract machine contains a stack that can be used to do the iteration processing and thus avoid the loops, variables, etc. needed by SQL/LINQ.

R. Zicari: Did you agree on a common direction for your work in the group?

M. Card: The consensus at this meeting and at ICOODB conference in Berlin was that LINQ was the best option for a future Java query API since it already had broad support in the .Net community. We will have to choose a new name for the OMG-Java effort, however, as LINQ is trademarked by Microsoft.

It was also agreed that the query language need not include object update capability, as object updates were generally handled by object method invocations and not from within query expressions.

Now, since LINQ allows method invocations as part of navigation (e.g. “my_object.getBoss().getName()”) it is entirely possible that these method calls could have side effects that update the target objects, perhaps in such a way that the changes would not get saved to the database.

This was recognized as a problem; ideas kicked around for how to solve it included source code analysis tools.
This is something we will need a good answer for, as it is a potential “open manhole cover” if we intend the LINQ API to be read-only and not capable of updating the database (especially unintentionally!).
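A tiny sketch of that “open manhole cover”: the class and field names below are invented for illustration, but they show how a method invoked during what looks like a read-only query navigation can silently mutate state.

public class SideEffectSketch {
    static class Employee {
        private Employee boss;
        private String name;
        private int accessCount;          // extra state touched by the getter

        Employee(String name, Employee boss) { this.name = name; this.boss = boss; }

        // Looks like a harmless accessor, but it updates the object as a side effect.
        Employee getBoss() {
            accessCount++;                // mutation hidden inside a "read-only" navigation
            return boss;
        }
        String getName() { return name; }
        int getAccessCount() { return accessCount; }
    }

    public static void main(String[] args) {
        Employee boss = new Employee("Pat", null);
        Employee worker = new Employee("Sam", boss);

        // Query-style navigation, as in my_object.getBoss().getName():
        String bossName = worker.getBoss().getName();
        System.out.println(bossName + ", accessCount=" + worker.getAccessCount());
        // If 'worker' were a persistent object, this change might never be written
        // back to the database, or might be written back unintentionally.
    }
}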

R. Zicari: What else did you address at the meeting?

Mike Card: The discussion then moved on to a list of items included in Carl Rosenberger’s ICOODB presentation.
Other items were also reviewed from an e-mail thread in the ODBMS.ORG forum that included comments from both Prof. Subieta and Prof. William Cook.

The areas discussed were broken down into 3 groups:
i) those things there was consensus on for standardization,
ii) those things that needed more discussion/participation by a larger group, and
iii) those things that there was consensus on for exclusion from standardization.

R. Zicari: What are the areas you agree to standardize?

Mike Card: The areas we agree to standardize are:

1. object lifecycle (in memory): What happens at object creation/deletion, “attached” and “detached” objects, what happens during a database transaction (activation and de-activation), etc. It is desirable that we base our efforts in this area on what has already been done in existing standards for Java such as JDO, JPA, OMG, et al. This interacts with the concurrency control mechanism for the database engine; we may need to refer to Bernstein et al. for serialization theory and CC algorithms.

2. object identification: A participant raised a concern here regarding re-use of OIDs where the OID is implemented as a physical pointer and memory is recycled, resulting in re-use of an OID, which can corrupt some applications. He favored a standard requiring all OIDs to be unique and not re-used.

3. session: what are the definition and semantics of a session?
a. Concurrency control: again, we should refer to Bernstein et al. for proven algorithms and mathematical definitions in lieu of the ACID criteria (ACA: avoidance of cascading aborts, ST: strict, SR: serializable, RC: recoverable, for characterizing transaction execution sequences)
b. Transactions: semantics/behavior and span/scope

4. Object model: what OM will we base our work upon?

5. Native language APIs: how will we define these? Will they be based on the Java APIs in ODMG 3.0, or will they be different? Will they be interfaces?

6. Conformance test suite: we will need one of these for each OO language we intend to define a standard for. The test suite, however, is not the definition of the standard; the definition must exist in the specification.

7. Error behavior: exception definitions etc.

R. Zicari: What are the areas where no agreement was (yet) found?

Mike Card: Areas we need to find agreement on are:

1. keys and indices: how do you sort objects? How do you define compound keys or spatial keys? Uniqueness constraints? Can this be handled by annotation, with the annotation being standardized but the implementation being vendor-specific (see the annotation sketch after this list)? This interacts with the query mechanism, e.g. the availability of an index could be checked by the query optimizer.

2. referential integrity: do we want to enforce this? Avoidance of dangling pointers, this interacts with object lifecycle/GC considerations.

3. cascaded delete: when you delete an object, do you also delete all objects that it references? It was pointed out that this has issues for a client/server model ODBMS like Versant because it may have to “push” out to clients that objects on the server have been deleted, so you have a distributed cache consistency problem to solve.

4. replication/synchronization: how much should we standardize the ability to keep a synchronized copy of part or all of an object database? Should the replication mechanism be interoperable with relational databases? Part or all of this capability could be included in an optional portion of the standard.

a. Backup: this is a specialized form of replication; how much should this be standardized? Is the answer to this question dependent upon the kind of environment (DBA or DBA-less/embedded) that the ODBMS is operating in?

5. events/triggers: do we want to standardize certain kinds of activity (callbacks et. al.) when certain database operations occur?

6. update within query facility: this is a recognition of the limitations of LINQ, which does not support object updates; it is “read-only.” Generally, object updates and deletes are performed by method invocations in a program and not by query statements.
The question is, since LINQ allows method invocations as part of navigation, e.g. “my_employee_obj.getBoss().getName(),” is it possible in cases like this that such method calls could have side effects which update the object(s) in the navigation statement? If so, what should be done?

7. extents: do we expose APIs for extents to the user?

8. support for C++: how will we support C++ and other legacy languages for which a LINQ-like facility is not available? We could investigate a string-based QL like OQL, and/or we could use a facility similar to Cook/db4o “native queries”.
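Returning to item 1 above (keys and indices), one existing precedent for the "standardized annotation, vendor-specific implementation" idea is the JDO metadata annotations. The sketch below uses JDO-style annotations purely as an illustration of the shape such a standard could take, not as the outcome of the OMG effort; the Address class and its fields are invented.

// JDO-style annotations, shown only as an existing example of declaring keys
// and indices declaratively; requires the javax.jdo API jar on the classpath.
import javax.jdo.annotations.Index;
import javax.jdo.annotations.PersistenceCapable;
import javax.jdo.annotations.Persistent;
import javax.jdo.annotations.Unique;

@PersistenceCapable
@Index(name = "CITY_STREET_IDX", members = {"city", "street"})   // compound index
public class Address {
    @Persistent
    @Unique(name = "POSTAL_CODE_UK")                              // uniqueness constraint
    private String postalCode;

    @Persistent
    private String city;

    @Persistent
    private String street;
}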

R. Zicari: And what are the areas you definitely do not want to standardize?

Mike Card: Areas we do not want to standardize are:

1. garbage collection: issue here is behavioral differences between “embedded” (linked-in) OODBMS vs. client/server OODBMS

2. stored procedures/functions/views: these are relational/SQL concepts that are not necessarily applicable to object-oriented programming languages which are the purview of object databases.

R. Zicari: How will you ensure that the vendor community will support this proposal?

Mike Card: We plan to discuss this list and verify that others not present agree with the grouping of these items. We should also figure out what we want to do with the items in the “middle” group and then begin prioritizing these things. It appears likely that a next-generation ODBMS standard will follow a “dual-track” model, in that the query mechanism (at least for Java) will be developed as a JSR within the JCP, while all of the other items will be developed within the OMG process.

For C# (assuming C# is a language we will want an ODBMS standard for, and I think it is), the query API will be built into the language via LINQ and we will need to address all of the “other” issues within our OMG effort just as with Java. In the case of C# and Java, most of these issues can probably be dealt with in the same manner.

How much interest there is in a C++ standardization effort is unclear; this is an area we will need to discuss further.
A LINQ-like facility for C++ is not an option since, unlike C# and Java, there is no central maintenance point for C++ compilers.

There is an ISO WG that maintains the C++ standard, but C++ “culture” accepts non-conformant compilers so there are many C++ compilers out there that only conform to part of the ISO standard.

The developers present who work with C++ mentioned that their C++ code base must be “tweaked” to work with various compilers as a given set of C++ code might compile fine with 7 compilers but fail with the compiler from vendor number 8.
In general, the maintenance of C++ is more difficult than for Java and C# due to inconsistency in compiler implementation and this complicates anything we want to do with something as complex as object persistence.

Some Useful Resources:
Panel Discussion “ODBMS: Quo Vadis?”

Java Object Persistence: State of the Union PART II

Java Object Persistence: State of the Union PART I

Object Database Systems: Quo vadis? (May 20, 2008)

I wanted to have an opinion on some critical questions related to Object Databases:

Where are Object Database Systems going? Are Relational database systems becoming Object Databases? Do we need a standard for Object Databases? Why ODMG did not succeed?

I have therefore interviewed one of our experts, Mike Card, on his view of the current state of the union of object database systems.
Mike works with Syracuse Research Corporation (SRC) and is involved in object databases and their application to challenging problems, including pattern recognition. He chairs the ODBT group in OMG to advance object database standardization.

It has been said (see Java Panel II) that an Object Database System, in order to be a suitable solution to the object persistence problem, needs to support not only a richer object model but also set-oriented, scalable, cost-based-optimized query processing, and high-throughput transactions.
Do current ODBMSs offer these features?

Mike Card:
In my opinion, no, though the support for true transactional processing varies between vendors. Some products use “optimistic” concurrency control, which is suitable only for environments where there is very little concurrent access to the database, such as single-threaded embedded applications. In my opinion, a database engine is not “scalable” (at least in the enterprise sense of the word) if it is based on optimistic concurrency control. This is because most truly large-scale applications will require optimal performance with many concurrent transactions, and this cannot be achieved when updates have to be rolled back at transaction commit time and re-attempted due to access conflicts.

Relational systems are rapidly becoming object database systems (see Java Panel II). Do you agree or disagree with this statement? Why?

Mike Card:
I would disagree, because relational databases still fundamentally access objects as rows of tables and do not offer seamless integration into a host programming language’s type system. It is true that there are some good ORMs out there, but these will never offer the performance or seamlessness that is available with a good ODBMS. I would agree that ORMs are getting better, but relational databases themselves are not becoming object databases.

A lot of the world’s systems are built on relational technology, and those systems need to be extended and integrated.
That job is always difficult. An ODBMS should be able to fully participate in the enterprise data ecosystem as well as any other DBMS for both new development as well as enhancing existing applications. How this can be achieved?
What is your opinion on this issue?

Mike Card:
As many vendors have noted, this is to some extent a marketing problem in terms of making enterprise customers aware of what object databases can do. It is also a technology issue, however, as engines based on “small-scale” concepts like optimistic concurrency control are not suitable to many enterprise environments.

Object databases vary greatly from vendor to vendor. Is a standard for object databases (still) needed? If yes, what needs to be standardized in your opinion?

Mike Card:
Yes, I believe it is. The APIs for creating, opening, deleting, and synchronizing/replicating databases as well as the native query APIs should be standardized to allow application portability. Any APIs needed to insert objects into the database, remove them from the database, or create an index on them should also be standardized, again for the sake of application portability. I would also like to see a standard XML format for exporting object database contents to allow for data portability. I am not sure our current OMG effort can achieve all of these standardization goals, but I would like to.

How would this new standard differ from the previous ODMG effort? And what relationship would this new standard have with standards such as SQL?

Mike Card:
Unlike the previous ODMG standard, the new standard should have a conformance test suite that anyone can download and run against a candidate product. The standard itself should also be unambiguous and use precise language as is done in ISO standards for things like programming languages, e.g. ISO/IEC 8652 (Ada programming language standard).

The primary focus of an object database standard should be its support of a native programming language, so I would expect that an object database standard might be more closely tied to an ISO standard for an object programming language (Ada, C++, other ISO-standardized languages that may appear) than to SQL, though perhaps, if a LINQ-like native query capability were included, the object database standard would also reference the SQL standard due to the use of SQL-like verbs and semantics in LINQ.

LINQ is leading in database API innovation, providing native language data access. Is this a suitable standard for ODBMS? Why?

Mike Card:
LINQ looks like it has a lot of promise in this area. We (the Object Database Technology Working Group in OMG) are currently evaluating LINQ against the stack-based query language (SBQL) developed at the Polish-Japanese Institute of Information Technology to see how these technologies compare for handling complex queries. SBQL has proven to be very good for complex queries and is being deployed in several EU projects, though it is unknown to most American developers. We are doing this evaluation to ensure that LINQ is a good foundation for developers of applications that require complex queries and is not too "small-scale" in its current form. We also want to hear from the LINQ community about plans (if any) to include update capability in LINQ, and we need to be sure there are no surprises for parallel transaction execution.
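LINQ itself is a .NET facility; the closest widely used Java analogue of language-integrated querying is the Streams API, so the sketch below uses it only to illustrate what "native language data access" feels like. Nothing here represents the working group's actual proposal, and an object database's native query API would evaluate an equivalent expression against indexes rather than scanning an in-memory list.

import java.util.Comparator;
import java.util.List;

public class NativeQuerySketch {

    record Employee(String name, String department, int salary) {}

    // The query is ordinary, type-checked host-language code: filtering, ordering,
    // and projection are composed directly, without embedding an SQL string.
    static List<String> highEarnersIn(List<Employee> employees, String dept) {
        return employees.stream()
                .filter(e -> e.department().equals(dept))
                .filter(e -> e.salary() > 100_000)
                .sorted(Comparator.comparingInt(Employee::salary).reversed())
                .map(Employee::name)
                .toList();
    }
}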

When are object databases a suitable solution for an enterprise, and when are they not?

Mike Card:
They are not suitable when the engine is intended primarily for use in single-threaded embedded systems (optimistic concurrency control is a good indicator of this as I mentioned earlier).

An object database would be suitable for use in an enterprise system if it were really good at large-scale data management, i.e. the engine was designed to handle large volumes of data and many parallel transactions. Some object databases are not built like this; they are designed primarily for single-threaded embedded applications with fairly small data volumes, and as such they would not be good candidates for enterprise applications.

Besides the technology used in the database engine itself, a good enterprise object database would need database maintenance tools (e.g. taking database A offline and replacing it with database B, updating or fiddling with database A and then bringing it back online, scheduling backups of databases, replicating databases between sites, etc.).

What is the future direction of object databases? Where do they go from here?

Mike Card:
The answer to this question depends on where object programming languages themselves go. Up to this point, programming languages have not included the concept of persistence; it has always been treated as a "foreign" thing to be dealt with using APIs for things like file I/O. This is a very 1960s view of persistence, where programs were things that lived in core memory and persistent things were data files written out to tape or disk.

The closest thing to true integration of persistence I have seen is in Ruby with its "PStore" class. I would like to see persistence integrated even more fully, where objects can be declared persistent or made persistent, à la:

// Hypothetical syntax: "persistent" is not a real Java keyword today.
public class myClass {

    persistent Integer[] myInts = new Integer[5];   // would survive program restarts
    Integer[] myOtherInts = new Integer[2];         // ordinary transient field

    public void aMethod() {
        // ... works with myInts like any other field ...
    }
}
and the programming language itself would take care of maintaining them in files and loading them in at program start-up etc. without any additional work from the programmer.

Now there are obviously challenges with this as this small example shows. What does it mean to initialize a persistent object in a class declaration? Is the object re-initialized when the program starts up? Or is the persisted value retained, rendering the initialization clause meaningless on a subsequent run of the program? Should persistent objects be allowed to have initialization clauses like this? What are the rules about inter-object access? Must persistence by reachability be used to ensure referential integrity? Can a “stack” variable (i.e. a variable declared in a method) be declared or made persistent, or must persistent variables be at the class level or even “global” (static)? Are these questions different for interpreted languages like Ruby which do not have the same notions of class as languages like Java? These are computer science/discrete math questions that will be answered during the language design process which will in turn determine how much “database” functionality ends up in the language itself.

If persistence were fully integrated into an object programming language in this way, then the role of an object database for that language might be to just provide an efficient way to organize and search the program’s persistent variables. This would reduce the scope of what an object database has to do, since today an object database not only has to provide efficient organization and search (index and query) capability, but it also has to make objects persistent as seamlessly as possible. Of course, this “reduction in scope” would only be possible if the default persistence mechanism for the programming language was implemented in a way that was efficient and fast for large numbers of objects.
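As a hedged sketch of that reduced scope, suppose the language already guaranteed that objects survive restarts; the "database" might then shrink to an indexing and search layer over those objects, roughly like the hypothetical helper below.

import java.util.HashSet;
import java.util.Map;
import java.util.Set;
import java.util.TreeMap;
import java.util.function.Function;

// Hypothetical: if persistence were handled by the language itself, the object
// database's remaining job might be efficient organization and search, i.e. an
// index over objects the runtime already keeps alive across program runs.
public class PersistentIndexSketch<K extends Comparable<K>, V> {

    private final TreeMap<K, Set<V>> index = new TreeMap<>();
    private final Function<V, K> keyExtractor;

    public PersistentIndexSketch(Function<V, K> keyExtractor) {
        this.keyExtractor = keyExtractor;
    }

    // Register an (already language-persistent) object under its indexed key.
    public void add(V obj) {
        index.computeIfAbsent(keyExtractor.apply(obj), k -> new HashSet<>()).add(obj);
    }

    // Range search over the indexed key, e.g. all orders placed in a date range.
    public Map<K, Set<V>> between(K low, K high) {
        return index.subMap(low, true, high, true);
    }
}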


News from the OMG Object Database Technology Users and Vendors http://www.odbms.org/blog/2008/02/news-from-omg-object-database/ http://www.odbms.org/blog/2008/02/news-from-omg-object-database/#comments Wed, 06 Feb 2008 02:33:00 +0000 http://www.odbms.org/odbmsblog/2008/02/06/news-from-the-omg-object-database-technology-users-and-vendors/ I have received some information from Mrs. Charlotte W. Wales (The MITRE Corporation) related to the OMG Object Database Technology Users and Vendors Roundtable, which took place on 11 December 2007 at the OMG meeting in Burlingame, CA. I have listed it below as I have received it.


News from OMG Object Database Technology Users and Vendors Roundtable, 11 December 2007

All the hard work that went into preparation of the Next Generation Object Database Standardization White Paper, augmented by the publicity received here at the ODBMS.ORG Portal (in the Forum), resulted in a successful Users and Vendors Roundtable at the OMG meeting last December in Burlingame, CA. The meeting attendance of 14 was a healthy mixture of users and vendors representing Objectivity, Versant, Gemstone, db4Objects, Fujitsu (which used to market Jasmine), Tibco, Progeny, Boeing, TU Munich, Kangwon Univ (Korea), PJI, Syracuse Research, and MITRE.

After a welcome and introductions conducted by Char Wales (MITRE), Mike Card (Syracuse Research), calling in from his sickbed in New York, introduced the Next Generation Object Database Standardization effort, providing important historical and technical background including his role in the ODMG.

Prof K. Subieta (PJIT) then gave a presentation on his Stack Based Approach to Object Databases. Anat Ghafni (db4Objects) presented and summarized the high points of the sometimes lively discussions that appeared in the ODBMS Forum in response to the White Paper. These presentations laid an excellent groundwork for discussions during the ensuing Roundtable, moderated by Mike Card and Char Wales, which fulfilled the Roundtable’s “Objectives” – a completely open Forum, with nothing off limits.

The conclusion of the Roundtable was an agreement to work on a Roadmap for achieving the goal of an adopted Next Generation Object Database Standard with vendor implementations by 2009. Facilitated by teleconferences, the plan is to have an initial version of this Roadmap ready in time to present at the ICOODB 2008 conference in Berlin and at the OMG Technical Committee meeting in Washington, DC, both scheduled for the same week in March 2008. If things proceed well, it is hoped that an RFP will be ready for issuance by June 2008 and, with luck, initial submissions ready for review by the end of this year.

For the benefit of those who have not been part of this “from the beginning”, a recap of a few of the significant events within OMG leading to the Roundtable last December is in order:

-Sep 03: 1st Object Database Working Group meeting; idea of improving existing ODMG3.0 standard introduced.

-Nov 03, Apr ’04: “Socialization” of this idea within OMG.

-May 04: Morgan Kaufmann grants OMG the right "to publish, revise, disseminate and use original and revised versions of the Standard as an OMG specification (the "Specification")" subject to limitations detailed in a letter to OMG.

-Sep 05: ODBMS.ORG portal launched.

-Dec 06: Decision to expand scope to Object Database Technology (including modeling and mappings between object and relational).

-Feb 06: Object Database Technology Request for Information (RFI) Issued.

-Jun 06: Report summarizing 11 RFI responses identified three ways forward.

-Sep 07: Next-Generation Object Database Standardization White Paper issued.

Charlotte W. Wales
