ODBMS Industry Watch
Trends and Information on Big Data, New Data Management Technologies, Data Science and Innovation.

Big Data from Space: the “Herschel” telescope.
Fri, 02 Aug 2013

“One of the biggest challenges with any project of such a long duration is coping with change. There are many aspects to coping with change, including changes in requirements, changes in technology, vendor stability, changes in staffing and so on” – Jon Brumfitt.

On May 14, 2009, the European Space Agency launched an Ariane 5 rocket carrying the largest telescope ever flown: the “Herschel” telescope, 3.5 metres in diameter.

I first interviewed Dr. Jon Brumfitt, System Architect & System Engineer of the Herschel Scientific Ground Segment at the European Space Agency, in March 2011. You can read that interview here.

Two years later, I wanted to know the status of the project. This is a follow-up interview.

RVZ

Q1. What is the status of the mission?

Jon Brumfitt: The operational phase of the Herschel mission came to an end on 29th April 2013, when the superfluid helium used to cool the instruments was finally exhausted. By operating in the far infrared, Herschel has been able to see cold objects that are invisible to normal telescopes.
However, this requires that the detectors are cooled to an even lower temperature. The helium cools the instruments down to 1.7 K (about −271 °C). Individual detectors are then cooled further, to about 0.3 K, very close to absolute zero, the coldest possible temperature. The exhaustion of the helium marks the end of new observations, but it is by no means the end of the mission.
We still have a lot of work to do in getting the best results from the data processing to give astronomers a final legacy archive of high-quality data to work with for years to come.

The spacecraft has been in orbit around a point known as the second Lagrangian point “L2”, which is about 1.5 million kilometres from Earth (around four times as far away as the Moon). This location provided a good thermal environment and a relatively unrestricted view of the sky. The spacecraft cannot be left in this orbit because regular correction manoeuvres would be needed. Consequently, it is being transferred into a “parking” orbit around the Sun.

Q2. What are the main results obtained so far by using the “Herschel” telescope?

Jon Brumfitt: That is a difficult one to answer in a few sentences. Just to take a few examples, Herschel has given us new insights into the way that stars form and into the history of star formation and galaxy evolution since the Big Bang.
It has discovered large quantities of cold water vapour in the dusty disk surrounding a young star, which suggests the possibility of other water-covered planets. It has also given us new evidence for the origins of water on Earth.
The following are some links giving more detailed highlights from the mission:

– Press
– Results
– Press Releases
– Latest news

With its 3.5 metre diameter mirror, Herschel is the largest space telescope ever launched. The large mirror not only gives it a high sensitivity but also allows us to observe the sky with a high spatial resolution. So in a sense every observation we make is showing us something we have never seen before. We have performed around 35,000 science observations, which have already resulted in over 600 papers being published in scientific journals. There are many years of work ahead for astronomers in interpreting the results, which will undoubtedly lead to many new discoveries.

Q3. How much data did you receive and process so far? Could you give us some up to date information?

Jon Brumfitt: We have about 3 TB of data in the Versant database, most of which is raw data from the spacecraft. The data received each day is processed by our data processing pipeline and the resulting data products, such as images and spectra, are placed in an archive for access by astronomers.
Each time we make a major new release of the software (roughly every six months at this stage), with improvements to the data processing, we reprocess everything.
The data processing runs on a grid with around 35 nodes, each with typically 8 cores and between 16 and 256 GB of memory. This is able to process around 40 days' worth of data per day, so it is possible to reprocess everything in a few weeks. The data in the archive is stored as FITS files (a standard format for astronomical data).
The archive uses a relational (PostgreSQL) database to catalogue the data and allow queries to find relevant data. This relational database is only about 60 GB, whereas the product files account for about 60 TB.
This may reduce somewhat for the final archive, once we have cleaned it up by removing the results of earlier processing runs.

Q4. What are the main technical challenges in the data management part of this mission and how did you solve them?

Jon Brumfitt: One of the biggest challenges with any project of such a long duration is coping with change. There are many aspects to coping with change, including changes in requirements, changes in technology, vendor stability, changes in staffing and so on.

The lifetime of Herschel will have been 18 years from the start of software development to the end of the post-operations phase.
We designed a single system to meet the needs of all mission phases, from early instrument development, through routine in-flight operations to the end of the post-operations phase. Although the spacecraft was not launched until 2009, the database was in regular use from 2002 for developing and testing the instruments in the laboratory. By using the same software to control the instruments in the laboratory as we used to control them in flight, we ended up with a very robust and well-tested system. We call this approach “smooth transition”.

The development approach we adopted is probably best classified as an Agile iterative and incremental one. Object orientation helps a lot because changes in the problem domain, resulting from changing requirements, tend to result in localised changes in the data model.
Other important factors in managing change are separation of concerns and minimization of dependencies, for example using component-based architectures.

When we decided to use an object database, it was a new technology and it would have been unwise to rely on any database vendor or product surviving for such a long time. Although work was under way on the ODMG and JDO standards, these were quite immature and the only suitable object databases used proprietary interfaces.
We therefore chose to implement our own abstraction layer around the database. This was similar in concept to JDO, with a factory providing a pluggable implementation of a persistence manager. This abstraction provided a route to change to a different object database, or even a relational database with an object-relational mapping layer, should it have proved necessary.
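To make this concrete, here is a minimal Java sketch of what such an abstraction layer might look like. All names are illustrative assumptions, not the actual Herschel code; it only shows the JDO-like factory/persistence-manager pattern described above.

```java
// Illustrative sketch only: hypothetical names, not the Herschel classes.
interface Transaction {
    void begin();
    void commit();
    void rollback();
}

interface PersistenceManager extends AutoCloseable {
    void makePersistent(Object obj);                 // transitive persistence
    <T> T getObjectById(Class<T> type, Object id);
    Transaction currentTransaction();
    @Override void close();
}

// The factory loads a concrete implementation chosen by configuration, so
// the object-database binding could later be swapped for a relational one
// (behind an object-relational mapping layer) without touching clients.
final class PersistenceManagerFactory {
    static PersistenceManager create(String implClassName) {
        try {
            return (PersistenceManager) Class.forName(implClassName)
                    .getDeclaredConstructor().newInstance();
        } catch (ReflectiveOperationException e) {
            throw new IllegalStateException("Cannot load " + implClassName, e);
        }
    }
}
```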

One aspect that is difficult to abstract is the use of queries, because query languages differ. In principle, an object database could be used without any queries, by navigating to everything from a global root object. However, in practice navigation and queries both have their role. For example, to find all the observation requests that have not yet been scheduled, it is much faster to perform a query than to iterate by navigation to find them. However, once an observation request is in memory it is much easier and faster to navigate to all the associated objects needed to process it. We have used a variety of techniques for encapsulating queries. One is to implement them as methods of an extent class that acts as a query factory.
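As an illustration of that last idea, the sketch below shows a hypothetical extent class acting as a query factory. The filter string stays hidden inside the class, so application code never depends on the underlying query language; the domain class and query facility are invented for the example.

```java
import java.util.List;

// Hypothetical domain class; associated objects are reached by
// navigation once a request is in memory.
class ObservationRequest {
    boolean scheduled;
    // target, instrument settings, ...
}

// Stand-in for whatever query facility the persistence layer exposes.
interface QueryRunner {
    <T> List<T> run(Class<T> type, String filter);
}

// Extent class as query factory: the one place that knows the query text.
class ObservationRequestExtent {
    private final QueryRunner runner;

    ObservationRequestExtent(QueryRunner runner) {
        this.runner = runner;
    }

    // Set-oriented access: a query beats navigating the whole object graph.
    List<ObservationRequest> findUnscheduled() {
        return runner.run(ObservationRequest.class, "scheduled == false");
    }
}
```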

Another challenge was designing a robust data model that would serve all phases of the mission from instrument development in the laboratory, through pre-flight tests and routine operations to the end of post-operations. We approached this by starting with a model of the problem domain and then analysing use-cases to see what data needed to be persistent and where we needed associations. It was important to avoid the temptation to store too much just because transitive persistence made it so easy.

One criticism that is sometimes raised against object databases is that the associations tend to encode business logic in the object schema, whereas relational databases just store data in a neutral form that can outlive the software that created it; if you subsequently decide that you need a new use-case, such as report generation, the associations may not be there to support it. This is true to some extent, but consideration of use cases for the entire project lifetime helped a lot. It is of course possible to use queries to work around missing associations.

Examples are sometimes given of how easy an object database is to use by directly persisting your business objects. This may be fine for a simple application with an embedded database, but for a complex system you still need to cleanly decouple your business logic from the data storage. This is true whether you are using a relational or an object database. With an object database, the persistent classes should only be responsible for persistence and referential integrity and so typically just have getter and setter methods.
We have encapsulated our persistent classes in a package called the Core Class Model (CCM) that has a factory to create instances. This complements the pluggable persistence manager. Hence, the application sees the persistence manager and CCM factories and interfaces, but the implementations are hidden.
Applications define their own business classes which can work like decorators for the persistent classes.
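A minimal sketch of that split might look as follows (with hypothetical names; the real CCM classes are of course more elaborate): the persistent class carries only state, and a business decorator adds behaviour on top of it.

```java
// Persistent class: state, getters/setters, referential integrity only.
class ObservationData {
    private String id;
    private boolean scheduled;

    String getId() { return id; }
    void setId(String id) { this.id = id; }
    boolean isScheduled() { return scheduled; }
    void setScheduled(boolean scheduled) { this.scheduled = scheduled; }
}

// Business decorator: application-level behaviour lives here, so the
// persistence layer stays free of business logic.
class SchedulableObservation {
    private final ObservationData data;

    SchedulableObservation(ObservationData data) { this.data = data; }

    void schedule() {
        if (data.isScheduled()) {
            throw new IllegalStateException("Already scheduled: " + data.getId());
        }
        data.setScheduled(true);
    }
}
```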

Q5. What is your experience in having two separate database systems for Herschel? A relational database for storing and managing processed data products and an object database for storing and managing proposal data, mission planning data, telecommands and raw (unprocessed) telemetry?

Jon Brumfitt: There are essentially two parts to the ground segment for a space observatory.
One is the “uplink” which is used for controlling the spacecraft and instruments. This includes submission of observing proposals, observation planning, scheduling, flight dynamics and commanding.
The other is the “downlink”, which involves ingesting and processing the data received from the spacecraft.

On some missions the data processing is carried out by a data centre, which is separate from spacecraft operations. In that case there is a very clear separation.
On Herschel, the original concept was to build a completely integrated system around an object database that would hold all uplink and downlink data, including processed data products. However, after further analysis it became clear that it was better to integrate our product archive with those from other missions. This also means that the Herschel data will remain available long after the project has finished. The role of the object database is essentially for operating the spacecraft and storing the raw data.

The Herschel archive is part of a common infrastructure shared by many of our ESA science projects. This provides a uniform way of accessing data from multiple missions.
The following is a nice example of how data from Herschel and our XMM-Newton X-ray telescope have been combined to make a multi-spectral image of the Andromeda Galaxy.

Our archive, in turn, forms part of a larger international archive known as the “Virtual Observatory” (VO), which includes both space and ground-based observatories from all over the world.

I think that using separate databases for operations and product archiving has worked well. In fact, it is more the norm than the exception. The two databases serve very different roles.
The uplink database manages the day-to-day operations of the spacecraft and is constantly being updated. The uplink data forms a complex object graph which is accessed by navigation, so an object database is well suited.
The product archive is essentially a write-once-read-many repository. The data is not modified, but new versions of products may be added as a result of reprocessing. There are a large number of clients accessing it via the Internet. The archive database is a catalogue containing the product meta-data, which can be queried to find the relevant product files. This is better suited to a relational database.

The motivation for the original idea of using a single object database for everything was that it allowed direct association between uplink and downlink data. For example, processed products could be associated with their observation requests. However, using separate databases does not prevent one database being queried with an observation identifier obtained from the other.
One complication is that processing an observation requires both downlink data and the associated uplink data.
We solved this by creating “uplink products” from the relevant uplink data and placing them in the archive. This has the advantage that external users, who do not have access to the Versant database, have everything they need to process the data themselves.

Q6. What are the main lessons learned so far in using Versant object database for managing telemetry data and information on steering and calibrating scientific on-board instruments?

Jon Brumfitt: Object databases can be very effective for certain kinds of application, but may have less benefit for others. A complex system typically has a mixture of application types, so the advantages are not always clear cut. Object databases can give a high performance for applications that need to navigate through a complex object graph, particularly if used with fairly long transactions where a significant part of the object graph remains in memory. Web (JavaEE) applications lose some of the benefit because they typically perform many short transactions with each one performing a query. They also use additional access layers that result in a system which loses the simplicity of the transparent persistence of an object database.

In our case, the object database was best suited for the uplink. It simplified the uplink development by avoiding object-relational mapping and the complexity of a design based on JDBC or EJB 2. Nowadays with JPA, relational databases are much easier to use for object persistence, so the rationale for using an object database is largely determined by whether the application can benefit from fast navigational access and how much effort is saved in mapping. There are now at least two object database vendors that support both JDO and JPA, so the distinction is becoming somewhat blurred.

For telemetry access we query the database instead of using navigation, as the packets don’t fit neatly into a single containment hierarchy. Queries allow packets to be accessed by many different criteria, such as time, instrument, type, source and so on.
Processing calibration observations does not introduce any special considerations as far as the database is concerned.

Q7. Did you have any scalability and or availability issues during the project? If yes, how did you solve them?

Jon Brumfitt: Scalability would have been an important issue if we had kept to the original concept of storing everything including products in a single database. However, using the object database for just uplink and telemetry meant that this was not a big issue.

The data processing grid retrieves the raw telemetry data from the object database server, which is a 16-core Linux machine with 64 GB of memory. The average load on the server is quite low, but occasionally there have been high peak loads from the grid that have saturated the server disk I/O and slowed down other users of the database. Interactive applications such as mission planning need a rapid response, whereas batch data processing is less critical. We solved this by implementing a mechanism to spread out the grid load by treating the database as a resource.
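The interview does not detail the mechanism, but the general shape of "treating the database as a resource" can be sketched with a counting semaphore that caps concurrent bulk reads. This is a hypothetical illustration of the idea, not the actual Herschel implementation.

```java
import java.util.concurrent.Callable;
import java.util.concurrent.Semaphore;

// Cap the number of grid nodes doing bulk telemetry reads at once, so
// batch load cannot saturate the server's disk I/O and starve
// interactive users such as mission planning.
class TelemetryFetchThrottle {
    private final Semaphore slots;

    TelemetryFetchThrottle(int maxConcurrentReaders) {
        this.slots = new Semaphore(maxConcurrentReaders, true); // fair FIFO
    }

    <T> T fetch(Callable<T> bulkRead) throws Exception {
        slots.acquire();              // grid node waits for a free slot
        try {
            return bulkRead.call();   // perform the bulk telemetry read
        } finally {
            slots.release();
        }
    }
}
```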

Once a year, we have made an “Announcement of Opportunity” for astronomers to propose observations that they would like to perform with Herschel. It is only human nature that many people leave it until the last minute and we get a very high peak load on the server in the last hour or two before the deadline! We have used a separate server for this purpose, rather than ingesting proposals directly into our operational database. This has avoided any risk of interfering with routine operations. After the deadline, we have copied the objects into the operational database.

Q8. What about the overall performance of the two databases? What are the lessons learned?

Jon Brumfitt: The databases are good at different things.
As mentioned before, an object database can give a high performance for applications involving a complex object graph which you navigate around. An example is our mission planning system. Object persistence makes application design very simple, although in a real system you still need to introduce layers to decouple the business logic from the persistence.

For the archive, on the other hand, a relational database is more appropriate. We are querying the archive to find data that matches a set of criteria. The data is stored in files rather than as objects in the database.

Q9. What are the next steps planned for the project and the main technical challenges ahead?

Jon Brumfitt: As I mentioned earlier, the coming post-operations phase will concentrate on further improving the data processing software to generate a top-quality legacy archive, and on provision of high-quality support documentation and continued interactive support for the community of astronomers that forms our “customer base”. The system was designed from the outset to support all phases of the mission, from early instrument development tests in the laboratory, through routine operations to the end of the post-operations phase of the mission. The main difference moving into post-operations is that we will stop uplink activities and ingesting new telemetry. We will continue to reprocess all the data regularly as improvements are made to the data processing software.

We are currently in the process of upgrading from Versant 7 to Versant 8.
We have been using Versant 7 since launch and the system has been running well, so there has been little urgency to upgrade.
However, with routine operations coming to an end, we are doing some “technology refresh”, including upgrading to Java 7 and Versant 8.

Q10. Anything else you wish to add?

Jon Brumfitt: These are just some personal thoughts on the way the database market has evolved over the lifetime of Herschel. Thirteen years ago, when we started development of our system, there were expectations that object databases would really take off in line with the growing use of object orientation, but this did not happen. Object databases still represent rather a niche market. It is a pity there is no open-source object-database equivalent of MySQL. This would have encouraged more people to try object databases.

JDO has developed into a mature standard over the years. One of its key features is that it is “architecture neutral”, but in fact there are very few implementations for relational databases. However, it seems to be finding a new role for some NoSQL databases, such as the Google AppEngine datastore.
NoSQL appears to be taking off far quicker than object databases did, although it is an umbrella term that covers quite a few kinds of datastore. Horizontal scaling is likely to be an important feature for many systems in the future. The relational model is still dominant, but there is a growing appreciation of alternatives. There is even talk of “Polyglot Persistence” using different kinds of databases within a system; in a sense we are doing this with our object database and relational archive.

More recently, JPA has created considerable interest in object persistence for relational databases and appears to be rapidly overtaking JDO.
This is partly because it is being adopted by developers of enterprise applications who previously used EJB 2.
If you look at the APIs of JDO and JPA, they are actually quite similar, apart from the locking modes. However, there is an enormous difference in the way they are typically used in practice. This is more to do with the fact that JPA is often used for enterprise applications. The distinction is getting blurred by some object database vendors who now support JPA with an object database. This could expand the market for object databases by attracting some traditional relational-type applications.
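The similarity is easy to see side by side. The sketch below stores one entity with each API; the entity object and persistence-unit name are placeholders, but the javax.jdo and javax.persistence calls are the standard ones from the two specifications.

```java
import java.util.Properties;
import javax.jdo.JDOHelper;
import javax.jdo.PersistenceManager;
import javax.jdo.PersistenceManagerFactory;
import javax.jdo.Transaction;
import javax.persistence.EntityManager;
import javax.persistence.EntityManagerFactory;
import javax.persistence.Persistence;

public class JdoVersusJpa {

    // JDO: a PersistenceManager from a factory, with an explicit Transaction.
    static void storeWithJdo(Properties props, Object entity) {
        PersistenceManagerFactory pmf = JDOHelper.getPersistenceManagerFactory(props);
        PersistenceManager pm = pmf.getPersistenceManager();
        Transaction tx = pm.currentTransaction();
        try {
            tx.begin();
            pm.makePersistent(entity);     // persistence by reachability
            tx.commit();
        } finally {
            if (tx.isActive()) tx.rollback();
            pm.close();
        }
    }

    // JPA: an EntityManager from a factory; transaction via getTransaction().
    static void storeWithJpa(String unitName, Object entity) {
        EntityManagerFactory emf = Persistence.createEntityManagerFactory(unitName);
        EntityManager em = emf.createEntityManager();
        try {
            em.getTransaction().begin();
            em.persist(entity);            // managed entity, flushed on commit
            em.getTransaction().commit();
        } finally {
            if (em.getTransaction().isActive()) em.getTransaction().rollback();
            em.close();
        }
    }
}
```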

So, I wonder what the next 13 years will bring! I am certainly watching developments with interest.
——

Dr Jon Brumfitt, System Architect & System Engineer of Herschel Scientific Ground Segment, European Space Agency.

Jon Brumfitt has a background in Electronics with Physics and Mathematics and has worked on several of ESA’s astrophysics missions, including IUE, Hipparcos, ISO, XMM and currently Herschel. After completing his PhD and a post-doctoral fellowship in image processing, Jon worked on data reduction for the IUE satellite before joining Logica Space and Defence in 1980. In 1984 he moved to Logica’s research centre in Cambridge and then in 1993 to ESTEC in the Netherlands to work on the scientific ground segments for ISO and XMM. In January 2000, he joined the newly formed Herschel team as science ground segment System Architect. As Herschel approached launch, he moved down to the European Space Astronomy Centre in Madrid to become part of the Herschel Science Operations Team, where he is currently System Engineer and System Architect.

Related Posts

The Gaia mission, one year later. Interview with William O’Mullane. January 16, 2013

Objects in Space: “Herschel” the largest telescope ever flown. March 18, 2011

Resources

Introduction to ODBMS By Rick Grehan

ODBMS.org Resources on Object Database Vendors.

—————————————
You can follow ODBMS.org on Twitter: @odbmsorg


In-memory database systems. Interview with Steve Graves, McObject.
Fri, 16 Mar 2012

“Application types that benefit from an in-memory database system are those for which eliminating latency is a key design goal, and those that run on systems that simply have no persistent storage, like network routers and low-end set-top boxes” – Steve Graves.

On the topic of in-memory database systems, I interviewed one of our experts, Steve Graves, co-founder and CEO of McObject.

RVZ

Q1. What is an in-memory database system (IMDS)?

Steve Graves: An in-memory database system (IMDS) is a database management system (DBMS) that uses main memory as its primary storage medium.
A “pure” in-memory database system is one that requires no disk or file I/O whatsoever.
In contrast, a conventional DBMS is designed around the assumption that records will ultimately be written to persistent storage (usually hard disk or flash memory).
Disk and flash I/O are expensive in performance terms; retrieving data from RAM is much faster than fetching it from disk or flash, so IMDSs are very fast.
An IMDS also offers a more streamlined design. Because it is not built around the assumption of storage on hard disk or flash memory, the IMDS can eliminate the various DBMS sub-systems required for persistent storage, including cache management, file management and others. For this reason, an in-memory database is also faster than a conventional database that is either fully-cached or stored on a RAM-disk.

In other areas (not related to persistent storage) an IMDS can offer the same features as a traditional DBMS. These include SQL and/or native language (C/C++, Java, C#, etc.) programming interfaces; formal data definition language (DDL) and database schemas; support for relational, object-oriented, network or combination data designs; transaction logging; database indexes; client/server or in-process system architectures; security features, etc. The list could go on and on. In-memory database systems are a sub-category of DBMSs, and should be able to do everything that entails.

Q2. What are the significant differences between an in-memory database and a database that happens to be in memory (e.g. deployed on a RAM-disk)?

Steve Graves: We use the comparison to illustrate IMDSs’ contribution to performance beyond the obvious elimination of disk I/O. If IMDSs’ sole benefit stemmed from getting rid of physical I/O, then we could get the same performance by deploying a traditional DBMS entirely in memory – for example, using a RAM-disk in place of a hard drive.

We tested an application performing the same tasks with three storage scenarios: using an on-disk DBMS with a hard drive; the same on-disk DBMS with a RAM-disk; and an IMDS (McObject’s eXtremeDB). Moving the on-disk database to a RAM drive resulted in nearly 4x improvement in database reads, and more than 3x improvement in writes. But the IMDS (using main memory for storage) outperformed the RAM-disk database by 4x for reads and 420x for writes.

Clearly, factors other than eliminating disk I/O contribute to the IMDS’s performance – otherwise, the DBMS-on-RAM-disk would have matched it. The explanation is that even when using a RAM-disk, the traditional DBMS is still performing many persistent storage-related tasks.
For example, it is still managing a database cache – even though the cache is now entirely redundant, because the data is already in RAM. And the DBMS on a RAM-disk is transferring data to and from various locations, such as a file system, the file system cache, the database cache and the client application, compared to an IMDS, which stores data in main memory and transfers it only to the application. These sources of processing overhead are hard-wired into on-disk DBMS design, and persist even when the DBMS uses a RAM-disk.

An in-memory database system also uses the storage space (memory) more efficiently.
A conventional DBMS can use extra storage space in a trade-off to minimize disk I/O (the assumption being that disk I/O is expensive, and storage space is abundant, so it’s a reasonable trade-off). Conversely, an IMDS needs to maximize storage efficiency because memory is not abundant in the way that disk space is. So a 10 gigabyte traditional database might only be 2 gigabytes when stored in an in-memory database.

Q3. What is in your opinion the current status of the in-memory database technology market?

Steve Graves: The best word for the IMDS market right now is “confusing.” “In-memory database” has become a hot buzzword, with seemingly every DBMS vendor now claiming to have one. Often these purported IMDSs are simply the providers’ existing disk-based DBMS products, which have been tweaked to keep all records in memory – and they more closely resemble a 100% cached database (or a DBMS that is using a RAM-disk for storage) than a true IMDS. The underlying design of these products has not changed, and they are still burdened with DBMS overhead such as caching, data transfer, etc. (McObject has published a white paper, Will the Real IMDS Please Stand Up?, about this proliferation of claims to IMDS status.)

Only a handful of vendors offer IMDSs that are built from scratch as in-memory databases. If you consider these to comprise the in-memory database technology market, then the status of the market is mature. The products are stable, have existed for a decade or more and are deployed in a variety of real-time software applications, ranging from embedded systems to real-time enterprise systems.

Q4. What are the application types that benefit from the use of an in-memory database system?

Steve Graves: Application types that benefit from an IMDS are those for which eliminating latency is a key design goal, and those that run on systems that simply have no persistent storage, like network routers and low-end set-top boxes. Sometimes these types overlap, as in the case of a network router that needs to be fast, and has no persistent storage. Embedded systems often fall into the latter category, in fields such as telco and networking gear, avionics, industrial control, consumer electronics, and medical technology. What we call the real-time enterprise sector is represented in the first category, encompassing uses such as analytics, capital markets (algorithmic trading, order matching engines, etc.), real-time cache for e-commerce and other Web-based systems, and more.

Software that must run with minimal hardware resources (RAM and CPU) can also benefit.
As discussed above, IMDSs eliminate sub-systems that are part-and-parcel of on-disk DBMS processing. This streamlined design results in a smaller database system code size and reduced demand for CPU cycles. When it comes to hardware, IMDSs can “do more with less.” This means that the manufacturer of, say, a set-top box that requires a database system for its electronic programming guide, may be able to use a less powerful CPU and/or less memory in each box when it opts for an IMDS instead of an on-disk DBMS. These manufacturing cost savings are particularly desirable in embedded systems products targeting the mass market.

Q5. McObject offers an in-memory database system called eXtremeDB, and an open source embedded DBMS, called Perst. What is the difference between the two? Is there any synergy between the two products?

Steve Graves: Perst is an object-oriented embedded database system.
It is open source and available in Java (including Java ME) and C# (.NET) editions. The design goal for Perst is to provide persistence for Java and C# objects that is as nearly transparent as practically possible within the normal Java and .NET frameworks. In other words, no special tools, byte codes, or virtual machine are needed. Perst should provide persistence to Java and C# objects while changing the way a programmer uses those objects as little as possible.
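A minimal example of that transparency, written in the style of Perst's published Java samples (treat the exact method signatures as approximate and check the Perst documentation for the current API):

```java
import org.garret.perst.Persistent;
import org.garret.perst.Storage;
import org.garret.perst.StorageFactory;

// Persistence by inheriting from Persistent: no special tools and no
// bytecode step, just an ordinary Java class.
class Contact extends Persistent {
    String name;
    String email;
}

public class PerstExample {
    public static void main(String[] args) {
        Storage db = StorageFactory.getInstance().createStorage();
        db.open("contacts.dbs", 4 * 1024 * 1024);   // file path, page pool size
        Contact root = (Contact) db.getRoot();
        if (root == null) {
            root = new Contact();
            root.name = "Ada Lovelace";
            db.setRoot(root);   // objects reachable from the root persist
        }
        db.commit();
        db.close();
    }
}
```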

eXtremeDB is not an object-oriented database system, though it does have attributes that give it an object-oriented “flavor.” The design goals of eXtremeDB were to provide a full-featured, in-memory DBMS that could be used right across the computing spectrum: from resource-constrained embedded systems to high-end servers used in systems that strive to squeeze out every possible microsecond of latency. McObject’s eXtremeDB in-memory database system product family has features including support for multiple APIs (SQL ODBC/JDBC & native C/C++, Java and C#), varied database indexes (hash, B-tree, R-tree, KD-tree, and Patricia Trie), ACID transactions, multi-user concurrency (via both locking and “optimistic” transaction managers), and more. The core technology is embodied in the eXtremeDB IMDS edition. The product family includes specialized editions, built on this core IMDS, with capabilities including clustering, high availability, transaction logging, hybrid (in-memory and on-disk) storage, 64-bit support, and even kernel mode deployment. eXtremeDB is not open source, although McObject does license the source code.

The two products do not overlap. There is no shared code, and there is no mechanism for them to share or exchange data. Perst for Java is written in Java, Perst for .NET is written in C#, and eXtremeDB is written in C, with optional APIs for Java and .NET. Perst is a candidate for Java and .NET developers that want an object-oriented embedded database system, have no need for the more advanced features of eXtremeDB, do not need to access their database from C/C++ or from multiple programming languages (a Perst database is compatible with Java or C#), and/or prefer the open source model. Perst has been popular for smartphone apps, thanks to its small footprint and smart engineering that enables Perst to run on mobile platforms such as Windows Phone 7 and Java ME.
eXtremeDB will be a candidate when eliminating latency is a key concern (Perst is quite fast, but not positioned for real-time applications), when the target system doesn’t have a JVM (or sufficient resources for one), when the system needs to support multiple programming languages, and/or when any of eXtremeDB’s advanced features are required.

Q6. What are the current main technological developments for in-memory database systems?

Steve Graves: At McObject, we’re excited about the potential of IMDS technology to scale horizontally, across multiple hardware nodes, to deliver greater scalability and fault-tolerance while enabling more cost-effective system expansion through the use of low-cost (i.e. “commodity”) servers. This enthusiasm is embodied in our new eXtremeDB Cluster edition, which manages data stores across distributed nodes. Among eXtremeDB Cluster’s advantages is that it eliminates any performance ceiling from being CPU-bound on a single server.

Scaling across multiple hardware nodes is receiving a lot of attention these days with the emergence of NoSQL solutions. But database system clustering actually has much deeper roots. One of the application areas where it is used most widely is in telecommunications and networking infrastructure, where eXtremeDB has always been a strong player. And many emerging application categories – ranging from software-as-a-service (SaaS) platforms to e-commerce and social networking applications – can benefit from a technology that marries IMDSs’ performance and “real” DBMS features, with a distributed system model.

Q7. What are the similarities and differences between current various database clustering solutions? In particular, let’s look at dimensions such as scalability, ACID vs. CAP, intended/applicable problem domains, structured vs. unstructured, and complexity of implementation.

Steve Graves: ACID support vs. “eventual consistency” is a good place to start looking at the differences between clustering database solutions (including some cluster-like NoSQL products). ACID-compliant transactions will be Atomic, Consistent, Isolated and Durable; consistency implies the transaction will bring the database from one valid state to another and that every process will have a consistent view of the database. ACID-compliance enables an on-line bookstore to ensure that a purchase transaction updates the Customers, Orders and Inventory tables of its DBMS. All other things being equal, this is desirable: updating Customers and Orders while failing to change Inventory could potentially result in other orders being taken for items that are no longer available.
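In JDBC terms, the bookstore example boils down to grouping the updates into a single transaction, as in the sketch below. The table and column names are invented for illustration; the point is that the order insert and the inventory decrement commit or roll back together.

```java
import java.sql.Connection;
import java.sql.PreparedStatement;
import java.sql.SQLException;

public class PlaceOrder {
    // Either the order row is inserted AND the stock is decremented,
    // or neither happens: the Atomicity and Consistency of ACID.
    static void placeOrder(Connection con, int customerId, int bookId, int qty)
            throws SQLException {
        con.setAutoCommit(false);
        try (PreparedStatement order = con.prepareStatement(
                 "INSERT INTO orders(customer_id, book_id, qty) VALUES (?, ?, ?)");
             PreparedStatement stock = con.prepareStatement(
                 "UPDATE inventory SET stock = stock - ? "
               + "WHERE book_id = ? AND stock >= ?")) {
            order.setInt(1, customerId);
            order.setInt(2, bookId);
            order.setInt(3, qty);
            order.executeUpdate();

            stock.setInt(1, qty);
            stock.setInt(2, bookId);
            stock.setInt(3, qty);
            if (stock.executeUpdate() == 0) {    // not enough stock
                throw new SQLException("Insufficient stock for book " + bookId);
            }
            con.commit();      // both changes become visible together
        } catch (SQLException e) {
            con.rollback();    // undo the partial update
            throw e;
        }
    }
}
```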

However, enforcing the ACID properties becomes more of a challenge with distributed solutions, such as database clusters, because the node initiating a transaction has to wait for acknowledgement from the other nodes that the transaction can be successfully committed (i.e. there are no conflicts with concurrent transactions on other nodes). To speed up transactions, some solutions have relaxed their enforcement of these rules in favor of an “eventual consistency” that allows portions of the database (typically on different nodes) to become temporarily out-of-synch (inconsistent).

Systems embracing eventual consistency will be able to scale horizontally better than ACID solutions – it boils down to their asynchronous rather than synchronous nature.

Eventual consistency is, obviously, a weaker consistency model, and implies some process for resolving consistency problems that will arise when multiple asynchronous transactions give rise to conflicts. Resolving such conflicts increases complexity.

Another area where clustering solutions differ is along the lines of shared-nothing vs. shared-everything approaches. In a shared-nothing cluster, each node has its own set of data.
In a shared-everything cluster, each node works on a common copy of database tables and rows, usually stored in a fast storage area network (SAN). Shared-nothing architecture is naturally more complex: if the data in such a system is partitioned (each node has only a subset of the data) and a query requests data that “lives” on another node, there must be code to locate and fetch it. If the data is not partitioned (each node has its own copy) then there must be code to replicate changes to all nodes when any node commits a transaction that modifies data.
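The "locate and fetch" code for the partitioned case starts with routing: every key must map deterministically to one owning node. A toy illustration of that routing logic (not any particular product's implementation):

```java
// Toy key-to-node routing for a partitioned, shared-nothing cluster.
// A query arriving at the wrong node uses ownerOf() to forward the
// request to the node where the data actually lives.
class PartitionRouter {
    private final String[] nodes;

    PartitionRouter(String... nodes) {
        this.nodes = nodes;
    }

    String ownerOf(Object key) {
        // floorMod keeps the index non-negative for any hash code
        return nodes[Math.floorMod(key.hashCode(), nodes.length)];
    }
}

// Usage: new PartitionRouter("node-a", "node-b", "node-c").ownerOf("order:42")
```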

NoSQL solutions emerged in the past several years to address challenges that occur when scaling the traditional RDBMS. To achieve scale, these solutions generally embrace eventual consistency (thus validating the CAP Theorem, which holds that a system cannot simultaneously provide Consistency, Availability and Partition tolerance). And this choice defines the intended/applicable problem domains. Specifically, it eliminates systems that must have consistency. However, many systems don’t have this strict consistency requirement – an on-line retailer such as the bookstore mentioned above may accept the occasional order for a non-existent inventory item as a small price to pay for being able to meet its scalability goals. Conversely, transaction processing systems typically demand absolute consistency.

NoSQL is often described as a better choice for so-called unstructured data. Whereas RDBMSs have a data definition language that describes a database schema and becomes recorded in a database dictionary, NoSQL databases are often schema-less, storing opaque “documents” that are keyed by one or more attributes for subsequent retrieval. Proponents argue that schema-less solutions free us from the rigidity imposed by the relational model and make it easier to adapt to real-world changes. Opponents argue that schema-less systems are for lazy programmers, create a maintenance nightmare, and that there is no equivalent to relational calculus or the ANSI standard for SQL. But the entire structured or unstructured discussion is tangential to database cluster solutions.

Q8. Are in-memory database systems an alternative to classical disk-based relational database systems?

Steve Graves: In-memory database systems are an ideal alternative to disk-based DBMSs when performance and efficiency are priorities. However, this explanation is a bit fuzzy, because what programmer would not claim speed and efficiency as goals? To nail down the answer, it’s useful to ask, “When is an IMDS not an alternative to a disk-based database system?”

Volatility is pointed to as a weak point for IMDSs. If someone pulls the plug on a system, all the data in memory can be lost. In some cases, this is not a terrible outcome. For example, if a set-top box programming guide database goes down, it will be re-provisioned from the satellite transponder or cable head-end. In cases where volatility is more of a problem, IMDSs can mitigate the risk. For example, an IMDS can incorporate transaction logging to provide recoverability. In fact, transaction logging is unavoidable with some products, such as Oracle’s TimesTen (it is optional in eXtremeDB). Database clustering and other distributed approaches (such as master/slave replication) contribute to database durability, as does use of non-volatile RAM (NVRAM, or battery-backed RAM) as storage instead of standard DRAM. Hybrid IMDS technology enables the developer to specify persistent storage for selected record types (presumably those for which the “pain” of loss is highest) while all other records are managed in memory.

However, all of these strategies require some effort to plan and implement. The easiest way to reduce volatility is to use a database system that implements persistent storage for all records by default – and that’s a traditional DBMS. So, the IMDS use-case occurs when the need to eliminate latency outweighs the risk of data loss or the cost of the effort to mitigate volatility.

It is also the case that flash and, especially, spinning disk are much less expensive than DRAM, which puts an economic lid on very large in-memory databases for all but the richest users. And, riches notwithstanding, it is not yet possible to build a system with hundreds of terabytes, let alone petabytes or exabytes, of memory, whereas disk storage has no such limitation.

By continuing to use traditional databases for most applications, developers and end-users are signaling that DBMSs’ built-in persistence is worth its cost in latency. But the growing role of IMDSs in real-time technology ranging from financial trading to e-commerce, avionics, telecom/Netcom, analytics, industrial control and more shows that the need for speed and efficiency often outweighs the convenience of a traditional DBMS.

———–
Steve Graves is co-founder and CEO of McObject, a company specializing in embedded Database Management System (DBMS) software. Prior to McObject, Steve was president and chairman of Centura Solutions Corporation and vice president of worldwide consulting for Centura Software Corporation.

Related Posts

A super-set of MySQL for Big Data. Interview with John Busch, Schooner.

Re-thinking Relational Database Technology. Interview with Barry Morris, Founder & CEO NuoDB.

On Data Management: Interview with Kristof Kloeckner, GM IBM Rational Software.

vFabric SQLFire: Better than RDBMS and NoSQL?

Related Resources

ODBMS.ORG: Free Downloads and Links:
Object Databases
NoSQL Data Stores
Graphs and Data Stores
Cloud Data Stores
Object-Oriented Programming
Entity Framework (EF) Resources
ORM Technology
Object-Relational Impedance Mismatch
Databases in general
Big Data and Analytical Data Platforms


Call for Submissions (deadline May 29, 2009): Common Persistent Model Patterns
Mon, 27 Apr 2009

We invite vendors as well as application architects, enterprise architects, and developers who use databases to submit implementation techniques (database design patterns) that are generally useful for all adopters.

The best submissions will be published in a new series of reports in ODBMS.ORG. All submissions will be published under free software licenses.

Moreover, ODBMS.ORG will give an Award for the most valuable pattern as voted by the ODBMS.ORG community.

Submission modalities:
Submissions should be sent as reports in .pdf only.

Submissions will be considered only if they indicate the name of the author(s) (or team), affiliation, complete address, and e-mail.
If the submission includes actual software, please contact me beforehand to clarify how to submit it.

Please send your submission by e-mail to: editor at odbms dot org

Deadline for submissions: May 29, 2009

ODBMS and RDBMS?
Tue, 07 Apr 2009

I recently asked Alexander Jaehne, Application Infrastructure & Integration Team Lead at a major Swiss bank, what experience he has in using the various options available for persistence for new projects.

“For very large databases, you need to complement an ODBMS with some relational database. We prefer to have both…” replied Jaehne.

You can read the interview with Jaehne: User Report 31/09.

Of course, this is not true in general.

For example, Richard Ahrens, Director at Merrill Lynch, explains: “Our order and quote management system combines an embedded object-based continuous event processor with an embedded object database. This allows us to rapidly add new derivative products to our environment and keeps developers focused on writing code that adds direct business value. With our design, we have strived to eliminate “nonproductive” development: keeping objects in sync with a relational data model adds no value to our business, so we rely on object database technology to make that problem go away.
We have found this approach not only enables us to deliver incremental functionality faster, but also reduces our testing burden since there are fewer moving parts for us to maintain ourselves. ”

The complete set of User Reports includes:

User Report 1/08: Gerd Klevesaat at Siemens
Segment: Industry – Automation
User: Gerd Klevesaat – Software architect – Siemens, Germany

User Report 2/08: Pieter van Zyl at CSIR
Segment: Academia
User: Pieter van Zyl – Researcher – CSIR, South Africa

User Report 3/08: Philippe Roose at Liuppa
Segment: Academia
User: Philippe Roose – Ass. Professor / Researcher – LIUPPA, France

User Report 4/08: William Westlake at SAIC
Segment: Industry – Medical
User: William Westlake – Principal Systems Engineer – SAIC, USA

User Report 5/08: Stefan Edlich at TFH Berlin
Segment: Academia
User: Stefan Edlich – Professor – TFH Berlin, Germany

User Report 6/08: Udayan Banerjee at NIIT
Segment: Industry – Various
User: Udayan Banerjee – CTO – NIIT, India

User Report 7/08: Nishio Shuichi at ATR
Segment: Industry – Robotics
User: Nishio Shuichi – Senior Researcher – ATR Labs, Japan

User Report 8/08: John Davies at Iona
Segment: Industry – Finance
User: John Davies – Technical Director – Iona, USA

User Report 9/08: Scott Ambler at IBM
Segment: Industry – Various
User: Scott Ambler – Practice Leader – IBM Rational, Canada

User Report 10/08: Mike Card at Syracuse
Segment: Industry – Defense
User: Mike Card – Researcher – Syracuse, USA

User Report 11/08: Rich Ahrens at Merrill Lynch
Segment: Industry – Finance
User: Richard Ahrens – Director – Merrill Lynch, USA

User Report 12/08: Ajay Deshpande at Persistent
Segment: Industry – Various
User: Ajay Deshpande – Senior Architect – Persistent, India

User Report 13/08: Horst Braeuner at City of Schwaebisch Hall
Segment: Public – Government
User: Horst Braeuner – CTO, CIO – City of Schwaebisch Hall, Germany

User Report 14/08: Tore Risch at University of Uppsala
Segment: Academia
User: Tore Risch – Professor – University of Uppsala, Sweden

User Report 15/08: Michael Blaha at OMT
Segment: Industry – Consulting
User: Michael Blaha – Principal – OMT Associates, USA

User Report 16/08: Stefan Keller at HSR Rapperswil
Segment: Academia
User: Stefan Keller – Professor – HSR Rapperswil, Switzerland

User Report 17/08: Mohammed Zaki at Rensselaer Polytechnic Institute
Segment: Academia
User: Mohammed Zaki – Associate Professor – Rensselaer Polytechnic Institute, USA

User Report 18/08: Peter Train at Standard Bank
Segment: Industry – Finance
User: Peter Train – Architect – Standard Bank, South Africa

User Report 19/08: Biren Gandhi at IBM
Segment: Industry – Consulting
User: Biren Gandhi – Architect – IBM, Germany

User Report 20/08: Sven Pecher at IBM
Segment: Industry – Consulting
User: Sven Pecher – Senior Consultant – IBM, Germany

User Report 21/08: Frank Stuch at IBM
Segment: Industry – Consulting
User: Frank Stuch – Managing Consultant – IBM, Germany

User Report 22/08: Hiroshi Miyazaki at Fujitsu
Segment: Industry – Various
User: Hiroshi Miyazaki – Methodology – Fujitsu, Japan

User Report 23/08: Robert Huber at 7r
Segment: Industry – Various
User: Robert Huber – Managing Director – 7r, Switzerland

User Report 24/08: Thomas Amberg at Oberon
Segment: Industry – Various
User: Thomas Amberg – Software Engineer, Oberon, Switzerland

User Report 25/08: Martin F. Kraft
Segment: Industry – Logistics
User: Martin F. Kraft – Application Architect, Shipping Company (not disclosed), USA

User Report 26/08: Serena Pizzi at Banca Fideuram
Segment: Industry – Finance
User: Serena Pizzi – Responsible Application Management Back End, Banca Fideuram SpA, Italy

User Report 27/08: Dan Schutzer at FSTC
Segment: Industry – Financial Services
User: Dan Schutzer – Director, FSTC, USA

User Report 28/08: Peter Fallon at Castle Software Australia
Segment: Industry – Software development and consulting
User: Peter Fallon – Director, Castle Software Australia, Australia

User Report 29/08: Benny Schaich-Lebek at SAP
Segment: Industry – ERP
User: Benny Schaich-Lebek – Product Management, SAP, Germany

User Report 30/08: Stephan Kiemle at German Aerospace Center
Segment: Industry – Aerospace
User: Stephan Kiemle – Chief software engineer, German Aerospace Center DLR, Germany

User Report 31/09: Alexander Jaehne at Major Swiss Bank
Segment: Industry – Finance
User: Alexander Jaehne – Application Infrastructure & Integration Team Lead, Switzerland.

ODBMS.ORG Useful Links
Mon, 09 Feb 2009

Since we started up in September 2005, ODBMS.ORG has grown quite a bit. A lot of free resources have been added over the years.

I thought it could be useful to give you a few links to ease your search for useful resources.

Here we are:

If you are interested in Lecture Notes:
Object Databases – Lecture Notes

OO Programming – Lecture Notes

Database in General Lecture notes

If you are interested in testing some vendors software and/or download some free software:
Object Databases – Free Software

OO Programming – Free Software

If you are interested in standards, and in the Object Data Management Group -Past Resources in particular:
Object Data Management Group -Past Resources (ODMG Version 1-3)

If you would like to read user reports on how persistent objects are handled in various domains.

If you are interested in dedicated articles from ODBMS.ORG’s Panel of Experts

And plenty more of Articles and Papers on Object Databases

If you are looking to know more about Commercial and Open Source Object Database Vendors

Last but not least, if you are looking for books

Hope it helps….

RVZ

O/R Impedance Mismatch? Users Speak Up! Fourth Series of User Reports published.
Tue, 13 Jan 2009

I have published the fourth series of user reports on using technologies for storing and handling persistent objects.

The fourth series includes 6 new user reports from the following users:

– Martin F. Kraft
– Serena Pizzi at Banca Fideuram
– Dan Schutzer at FSTC
– Peter Fallon at Castle Software Australia
– Benny Schaich-Lebek at SAP
– Stephan Kiemle at German Aerospace Center

The new 6 reports and the complete series of user reports are available for free download.

I have also published a new paper by ODBMS.ORG panel member William Cook on Interprocedural Query Extraction for Transparent Persistence.
Transparent persistence promises to integrate programming languages and databases by allowing programs to access persistent data with the same ease as non-persistent data. The work focuses on programs written in the current version of Java, without language changes. However, the techniques developed by Cook and his colleagues may also be of value in conjunction with object-oriented languages extended with high-level query syntax.

TechView Product Reports
Sat, 20 Dec 2008

Most of the time it is difficult to gather good technical information on products without marketing or sales hype.

I therefore decided to create a series of product reports on some of the leading Object Database Systems around.

For that, I prepared 23 questions which I sent to four vendors: db4objects, Objectivity, Inc., Progress Software and Versant Corporation.
I asked them for detailed information on their products, covering: support of programming languages, queries, data modeling, integration with relational data, transactions, persistence, storage, architecture, applications, and performance.

The result are four TechView Product Reports, which contain detailed useful information on the respective products:
– db4o
– Objectivity/DB
– ObjectStore
– Versant Object Database

I hope these will be useful resources for developers and architects alike.
As always you can freely download the reports.

O/R Impedance Mismatch? Users Speak Up! Third Series of User Reports published.
Thu, 23 Oct 2008

I have published the third series of user reports on using technologies for storing and handling persistent objects.
I have defined “users” in a very broad sense, including: CTOs, Technical Directors, Software Architects, Consultants, Developers, and Researchers.

The third series includes 7 new user reports from the following users:

– Peter Train, Architect, Standard Bank Group Limited, South Africa.
– Biren Gandhi, IT Architect and Technical Consultant, IBM Global Business Services, Germany.
– Sven Pecher, Senior Consultant, IBM Global Business Services, Germany.
– Frank Stuch, Managing Consultant, IBM Global Business Services, Germany.
– Hiroshi Miyazaki, Software Architect, Fujitsu, Japan.
– Robert Huber, Managing Director, 7r gmbh, Switzerland.
– Thomas Amberg, Software Engineer, Oberon microsystems, Switzerland.

I asked each user the same set of questions, among them what experience they have in using the various options available for persistence for new projects and what lessons they have learned in using such solution(s).

“Some of our newer systems have been developed in-house using an object oriented paradigm. Most (if not all) of these use Relational Database systems to store data and the “impedance mismatch” problem does apply” says Peter Train from Standard Bank.

The lessons learned using Object Relational mapping tools confirm the complexity of such technologies.

Peter Train explains: “The most common problems that we have experienced with object Relational mapping tools are:
i) The effort required to define mappings between the object and the relational models; ii) Difficulty in understanding how the mapping will be implemented at runtime and how this might impact performance and memory utilization. In some cases, a great deal of effort is spent tweaking configurations to achieve satisfactory performance.”

Frank Stuch from IBM Global Business Services has used Hibernate, EJB 2 and EJB 3 Entity Beans in several projects.
Talking about his experience with such tools he says: “EJB 2 is too heavyweight and outdated by EJB 3. EJB 3 is not supported well by development environments like Rational Application Developer and not mature enough. In general all of these solutions give the developer 90% of the comfort of an OODBMS with well established RDBMS.
The problem is that this comfort needs a good understanding of the impedance mismatch and the consequences on performance (e.g. the “select n+1 problem”). Many junior developers don’t understand the impact and therefore the performance of the generated/created data queries is often very poor. Senior developers can work very efficiently with e.g. Hibernate.”

In some special cases custom solutions have been built, like in the case of Thomas Amberg who works in mobile and embedded software and explains “We use a custom object persistence solution based on sequential serialized update operations appended to a binary file”.

The new 7 reports and the complete series of user reports are available for free download.

I plan to continue publishing user reports on a regular basis.

Do you have an impedance mismatch problem? Users speak up! Second series of user reports published.
Thu, 04 Sep 2008

I have started a new series of interviews with users of technologies for storing and handling persistent objects, around the globe.

6 additional user reports (12-17/08) have been published, from the following users:

  • Ajay Deshpande, Persistent
  • Horst Braeuner, City of Schwaebisch Hall
  • Tore Risch, Uppsala University
  • Michael Blaha, OMT Associates
  • Stefan Keller, HSR Rapperswil
  • Mohammed Zaki, Rensselaer

The complete initial series of user reports is available as always for free download.

Here I define “users” in a very broad sense, including: CTOs, Technical Directors, Software Architects, Consultants, Developers, Researchers.

I have asked 5 questions:

Q1. Please explain briefly what are your application domains and your role in the enterprise.

Q2. When the data models used to persistently store data (whether file systems or database management systems) and the data models used to write programs against the data (C++, Smalltalk, Visual Basic, Java, C#) are different, this is referred to as the “impedance mismatch” problem. Do you have an “impedance mismatch” problem?

Q3. What solution(s) do you use for storing and managing persistence objects? What experience do you have in using the various options available for persistence for new projects? What are the lessons learned in using such solution(s)?

Q4. Do you believe that Object Database systems are a suitable solution to the “object persistence” problem? If yes why? If not, why?

Q5. What would you wish as new research/development in the area of Object Persistence in the next 12-24 months?

More information here.

Do you have an impedance mismatch problem? Users speak up!
Tue, 01 Jul 2008

I have started a new series of interviews with users of technologies for storing and handling persistent objects, around the globe.

Here I define “users” in a very broad sense, including: CTOs, Technical Directors, Software Architects, Consultants, Developers, Researchers.

I have asked 5 questions:

Q1. Please explain briefly what are your application domains and your role in the enterprise.

Q2. When the data models used to persistently store data (whether file systems or database management systems) and the data models used to write programs against the data (C++, Smalltalk, Visual Basic, Java, C#) are different, this is referred to as the “impedance mismatch” problem. Do you have an “impedance mismatch” problem?

Q3. What solution(s) do you use for storing and managing persistence objects? What experience do you have in using the various options available for persistence for new projects? What are the lessons learned in using such solution(s)?

Q4. Do you believe that Object Database systems are a suitable solution to the “object persistence” problem? If yes why? If not, why?

Q5. What would you wish as new research/development in the area of Object Persistence in the next 12-24 months?

The first series of interviews I published in ODBMS.ORG include:

ODBMS.ORG User Report No. 1/08
Editor Roberto V. Zicari- ODBMS.ORG www.odbms.org
July 2008.
Category: Industry
Domain: Automation System Solutions for Postal Processes.
User Name: Gerd Klevesaat
Title: Software Architect
Organization: – Siemens AG- Industry Sector, Germany

ODBMS.ORG User Report No.2/08
Editor Roberto V. Zicari- www.odbms.org
July 2008.
Category: Academia
Domain: Research/Education
User Name: Pieter van Zyl
Title: Researcher
Organization: Meraka Institute of South Africa’s Council for Scientific and Industrial Research (CSIR) and University of Pretoria, South Africa.

ODBMS.ORG User Report No.3/08
Editor Roberto V. Zicari- www.odbms.org
July 2008.
Category: Academia
Domain: Research/Education
User Name: Philippe Roose
Title: Associate Professor / Researcher
Organization: LIUPPA/IUT de Bayonne, France.

ODBMS.ORG User Report No.4/08
Editor Roberto V. Zicari- ODBMS.ORG www.odbms.org
July 2008.
Category: Industry
Domain: Various
User Name: William W. Westlake
Title: Principal Systems Engineer
Organization: Science Applications International Corporation, USA

ODBMS.ORG User Report No.5/08
Editor Roberto V. Zicari- ODBMS.ORG www.odbms.org
July 2008.
Category: Academia
Domain: Research/Education
User Name: Stefan Edlich
Title: Professor
Organization: TFH-Berlin, Germany

ODBMS.ORG User Report No. 6/08
Editor Roberto V. Zicari- ODBMS.ORG www.odbms.org
July 2008.
Category: Industry
Domain: Various.
User Name: Udayan Banerjee
Title: CTO
Organization: NIIT Technologies, India.

ODBMS.ORG User Report No. 7/08
Editor Roberto V. Zicari- ODBMS.ORG www.odbms.org
July 2008.
Category: Industry
Domain: Robotics.
User Name: NISHIO Shuichi
Title: Senior Researcher
Organization: JARA/ATR, Japan.

ODBMS.ORG User Report No.8/08
Editor Roberto V. Zicari- ODBMS.ORG www.odbms.org
July 2008.
Category: Industry
Domain: Financial Services
User Name: John Davies
Title: Technical Director
Organization: Iona, UK

ODBMS.ORG User Report No.9/08
Editor Roberto V. Zicari- ODBMS.ORG www.odbms.org
July 2008.
Category: Industry
Domain: Various
User Name: Scott W. Ambler
Title: Practice Leader Agile Development
Organization: IBM Rational, Canada

ODBMS.ORG User Report No. 10/08
Editor Roberto V. Zicari- ODBMS.ORG www.odbms.org
June 2008.
Category: Industry
Domain: Defense/intelligence area.
User Name: Mike Card
Title: Principal engineer
Organization: Syracuse Research Corporation (SRC), USA

ODBMS.ORG User Report No. 11/08
Editor Roberto V. Zicari- ODBMS.ORG www.odbms.org
July 2008.
Category: Industry
Domain: Finance
User Name: Richard Ahrens
Title: Director
Organization: Merrill Lynch, US

All user reports are available for free download (PDF)

Hope you’ll find them interesting. More to come… I plan to publish user reports on ODBMS.ORG on a regular basis.

RVZ
