Sunday, December 4, 2011

Migrating oVirt from MS-SQL to PostgreSQL

This is the second post in a series about changing our application from Microsoft-based technologies to open source technologies.

The first post was about switching from C# to Java; this post is about changing the DBMS
from MS-SQL to PostgreSQL.

When we initially thought about how to introduce this fundamental change to the application without affecting stability/maturity, we thought it would be right to:
  • Limit the code changes to the DB access layer code
  • Leave the DB schema and other components in the application untouched
By following these guidelines we could run the test suites we have before and after the changes, as well as test the application on existing databases with real data to find regressions.

At first we thought of using JPA and abstracting the DBMS layer away, with no tight coupling to Postgres. While looking into JPA we realized that we do not have an 'object-oriented' database schema, and getting one would require changes both to the DB schema and to the Java entities.
Changing the Java entities meant changing code all over the application, so although JPA seems like the long-term solution we strive for, we looked for a stepping stone to ease the migration path.

The application's DB-access layer was based on MS-SQL stored procedures (sets of pre-compiled SQL statements). In parallel to evaluating JPA we looked into migrating the stored procedures to Postgres functions.
We found a tool named SQLWays for doing that automatically. After a few cycles with the tool we had about 70% of the work done; it handled both the stored procedure translation and the DB schema creation scripts.
In addition we wrote some sed scripts to manipulate the generated Postgres functions, and made manual adaptations for specific, more complicated stored procedures, table views etc.
Most of our stored procedures are simple ANSI SQL and the DB layer does not change often, which helped us get the work done relatively fast. Using Postgres functions was starting to feel like a real alternative.
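
To give a sense of the translation, here is a minimal, hypothetical sketch; the table, procedure and parameter names are illustrative and not taken from the oVirt schema:

    -- Original MS-SQL stored procedure (illustrative):
    --   CREATE PROCEDURE UpdateVmStatus @vm_id UNIQUEIDENTIFIER, @status INT
    --   AS
    --       UPDATE vms SET status = @status WHERE vm_id = @vm_id;

    -- The translated PostgreSQL function:
    CREATE OR REPLACE FUNCTION UpdateVmStatus(v_vm_id UUID, v_status INTEGER)
    RETURNS VOID AS $$
    BEGIN
        UPDATE vms SET status = v_status WHERE vm_id = v_vm_id;
    END; $$ LANGUAGE plpgsql;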

The next step was executing the functions from within the code, which was straightforward except for one change that is worth mentioning.

Cursors:

In MS-SQL we used a global cursor to return the SP results, and the tool translated the global cursor to REFCURSORs. The problem was that we had code in the application that accessed the SP results after the DB transaction was closed. In MS-SQL this worked because a global cursor allows accessing the results after the transaction is committed.
REFCURSORs in Postgres, on the other hand, are closed when the transaction is committed, so we failed to retrieve the data.
PostgreSQL offers another way to handle this: SETOF. A SETOF function returns a set of rows that PostgreSQL treats like a table, available regardless of the DB transaction state.
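
To illustrate the difference, here is a sketch with the same hypothetical names as above; the first variant is roughly what the automatic translation produced, the second is what we moved to:

    -- REFCURSOR version: the cursor is only valid inside the transaction,
    -- so the results must be fetched before COMMIT.
    CREATE OR REPLACE FUNCTION GetAllVms_cursor() RETURNS REFCURSOR AS $$
    DECLARE
        v_cur REFCURSOR;
    BEGIN
        OPEN v_cur FOR SELECT * FROM vms;
        RETURN v_cur;
    END; $$ LANGUAGE plpgsql;

    -- SETOF version: the rows come back like a regular result set, so the
    -- application can keep using them after the transaction has committed.
    CREATE OR REPLACE FUNCTION GetAllVms() RETURNS SETOF vms AS $$
    BEGIN
        RETURN QUERY SELECT * FROM vms;
    END; $$ LANGUAGE plpgsql;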

After handling the cursors we got the application running on Postgres. You can imagine the big smiles it put on our faces; only it did not last long...
We soon realized we had a huge problem that we did not foresee: "read uncommitted" ?!?

Read uncommitted

The ANSI SQL standard defines 4 transaction isolation levels: serializable, repeatable read, read committed and read uncommitted.
The isolation level controls the degree of locking that occurs when selecting data.
Our application used MS-SQL's 'read uncommitted' level, which in the spec means that data that has been updated but not yet committed may be read by other transactions.

I am not going to defend the choice of using this isolation level, as I have
never supported it; however, I will describe the motivations for using it.
One motivation was to have a more responsive UI, reflecting the current application status as fast as possible, even at the 'price' of occasionally exposing the user to a temporary state.
Another motivation was to reduce the possibility of deadlocks: in MS-SQL, row locks
can be escalated to page or table locks for large read operations [see 1], so
by using read-uncommitted, table locks are less likely to occur.

Regardless of whether read-uncommitted is the right solution for those issues, the fact is that the application is indeed very responsive and rarely experiences deadlocks.

When we got the application running on Postgres we noticed that it behaved differently: the UI was less responsive and we stumbled into deadlocks quite often. Looking into it a little, we found that when you specify the read-uncommitted isolation level in Postgres, you de facto get read-committed [see 2].
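
For illustration, reusing the hypothetical vms table from the earlier sketches:

    BEGIN;
    -- Postgres accepts this request, but provides read-committed semantics,
    -- the lowest isolation level it actually implements [see 2]:
    SET TRANSACTION ISOLATION LEVEL READ UNCOMMITTED;
    -- this SELECT never sees rows changed by other, uncommitted transactions
    SELECT status FROM vms WHERE vm_id = '11111111-1111-1111-1111-111111111111';
    COMMIT;
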
That 'forced' us to find an alternative solution for getting good UI responsiveness and avoiding table locking, which is when we introduced the compensation mechanism.

Compensation

After digging around we figured that we could improve system behaviour by reducing the locking scope to shorter code segments, or in other words by dealing with the relatively long transactions in the application.

The idea was to break flows in the application into short segments, each segment executed within an independent transaction (see the sketch below).
By breaking the flow into segments we solved both issues:
- The UI responsiveness improved because data was committed earlier and was 'available' to other transactions.
- The DB locking scope was shorter, reducing the number of deadlocks.
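
Schematically, with hypothetical statements in the spirit of the earlier examples, the change looks like this:

    -- Before: the whole flow runs in one transaction; the first status
    -- update stays invisible to others, and its row lock is held, until
    -- the very end of the flow.
    BEGIN;
    UPDATE vms SET status = 1 WHERE vm_id = '11111111-1111-1111-1111-111111111111';
    -- ... long-running flow logic ...
    UPDATE vms SET status = 2 WHERE vm_id = '11111111-1111-1111-1111-111111111111';
    COMMIT;

    -- After: each segment commits on its own, releasing its locks and
    -- making the intermediate status visible to other transactions.
    BEGIN;
    UPDATE vms SET status = 1 WHERE vm_id = '11111111-1111-1111-1111-111111111111';
    COMMIT;
    -- ... the flow continues in the application ...
    BEGIN;
    UPDATE vms SET status = 2 WHERE vm_id = '11111111-1111-1111-1111-111111111111';
    COMMIT;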

This solution could have been perfect if we did not have to handle error flows :)

Handling error flows was interesting: we could encounter an error after some segments were already committed, which means we needed to undo work that was done by previously committed transactions.

For that we introduced the compensation mechanism.
The basic concept of compensation is that in each segment, if a persisted entity changes, you take an in-memory snapshot of the original entity before changing it. When you get to the end of a segment and commit the transaction associated with it, you also persist the in-memory snapshots to the DB (in a dedicated compensation table), then you move on to the next segment. If the flow ends successfully you remove the snapshots from the DB.
In case of a failure in the flow, we compensate for the previously committed transactions by
restoring the original data according to the snapshots we persisted in the DB.
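
To make the idea concrete, here is a rough sketch of such a compensation table and its lifecycle; this is an illustration only, not necessarily the actual oVirt schema:

    -- Hypothetical compensation table:
    CREATE TABLE compensation_snapshots (
        command_id      UUID NOT NULL,     -- identifies the running flow
        entity_id       UUID NOT NULL,     -- the entity that was changed
        entity_type     VARCHAR(128) NOT NULL,
        entity_snapshot TEXT NOT NULL      -- serialized pre-change state
    );

    -- At the end of each segment, persisted in the same transaction as the
    -- segment's own changes:
    INSERT INTO compensation_snapshots (command_id, entity_id, entity_type, entity_snapshot)
    VALUES ('22222222-2222-2222-2222-222222222222',
            '11111111-1111-1111-1111-111111111111',
            'vm',
            '{"status": 0}');

    -- If the whole flow succeeds, the snapshots are simply dropped:
    DELETE FROM compensation_snapshots
    WHERE command_id = '22222222-2222-2222-2222-222222222222';

    -- If the flow fails, the application reads the snapshots back, restores
    -- each entity to its original state, and then deletes the rows.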



We also implemented 2 optimizations for the compensation:
1. Snapshotting an entity only before the first time it changes (compensation always returns to the initial state, before the flow started, so if the entity changed a couple of times we only need the original state).
2. Snapshotting only part of the entity's properties, as most of the time the only property that changed is the entity's status.


The work is done and the application is running on Postgres.
If you want to take a peek at the code, it is open source so go ahead and look.

The project is oVirt - http://www.ovirt.org
The oVirt-engine-core code is available for HTTP browsing.

enjoy,
Livnat

[1] - http://msdn.microsoft.com/en-us/library/ms173763.aspx
[2] - http://www.postgresql.org/docs/8.4/static/transaction-iso.html
