Kuro5hin.org: technology and culture, from the trenches

Writing cross DB applications

By charsplat in News
Thu Jul 20, 2000 at 08:34:38 AM EST
Tags: Help! (Ask Kuro5hin) (all tags)
Help! (Ask Kuro5hin)

Our company is running into an interesting problem. We have some applications written in VB (yes, we hate this!) and our clients each have an RDBMS they prefer to use: some of our small clients prefer MS Access, while larger clients prefer SQL Server or Oracle.
We are running up against a wall in that we do not want to maintain three or four sets of code just to handle each database engine (e.g. Oracle has a completely different syntax for left/right joins than Access). Is there a utility to convert a query to another DB platform, either at runtime or before compilation?

e.g. in Access we've got:
select * from afe left join users on users.userid=afe.managerid;
while in Oracle we have:
select * from afe, users where afe.managerid = users.userid (+);


Writing cross DB applications | 25 comments (20 topical, 5 editorial, 0 hidden)
use perl; (1.00 / 1) (#2)
by rusty on Wed Jul 19, 2000 at 06:02:31 PM EST

I have to second the suggestion that you use perl. DBI (in perl) is inherently cross-platform, and you can easily further abstract the query writing code to handle radically different SQL syntax with the same application code. Basically, you need a middle layer, somewhere, and perl is really good at that.

Not the real rusty
Re: use perl; (1.00 / 1) (#3)
by eMBee on Wed Jul 19, 2000 at 07:00:15 PM EST

but that is just suggesting a different language, not solving the problem. the question is about the middleware, not the language.
is there such middleware somewhere?

(apple's WebObjects does have such middleware, but it comes with a price tag)

greetings, eMBee.
Gnu is Not Unix / Linux Is Not UniX
[ Parent ]

Re: use perl; (none / 0) (#7)
by rusty on Wed Jul 19, 2000 at 09:42:46 PM EST

Right you are, now that I read more carefully. In that case, I have no idea what the answer is. :-) Perhaps the comments on how it can be done in another language will help them think along similar lines though? Generally, the game is to abstract your DB access so that the code describes the query you want, and an underlying library actually does the mechanics of building a query and fetching the data. That way, in theory, all you need to do is replace 'oracle_lib' with 'access_lib' and recompile.

Not the real rusty
[ Parent ]
Re: use perl; (none / 0) (#12)
by eann on Thu Jul 20, 2000 at 09:35:46 AM EST

I agree with the idea of encouraging a different development language. But Perl and even apps like ColdFusion (and presumably WebObjects, which, in my professional opinion, sucks) will still allow you to pass DB-specific code through. I'm so spoiled from using bitwise AND on enum and set columns in MySQL that I find working in Access, even for quick prototyping, excruciating.

Welcome back to the site, Rusty. How's the new life on the left coast?

Our scientific power has outrun our spiritual power. We have guided missiles and misguided men. —MLK

$email =~ s/0/o/; # The K5 cabal is out to get you.

[ Parent ]
Don't use non-standard SQL (4.70 / 3) (#8)
by joshv on Wed Jul 19, 2000 at 10:43:55 PM EST

I work on one of the more successful cross-DB applications out there - PeopleSoft.

In general, PeopleSoft avoids non-standard SQL altogether - where they can't, they use meta-SQL that gets translated before being sent to the database - for example:
select name from employees where hire_date <= %currentdate

In Oracle becomes:
select name from employees where hire_date <= sysdate

In DB2 becomes:
select name from employees where hire_date <= current date

The application interprets the meta-SQL on the fly before sending it to the database. They have some more complicated meta-SQL to handle things like 'substring', 'truncate' and so forth.
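A minimal sketch of how such meta-SQL substitution could work (Python used purely for illustration; the token table below, including the Access entry, is my assumption, not PeopleSoft's actual implementation):

```python
# Meta-SQL translation sketch: simple token substitution per target
# database, driven by a small hand-maintained dictionary of equivalents.
META_SQL = {
    "oracle": {"%currentdate": "sysdate"},
    "db2":    {"%currentdate": "current date"},
    "access": {"%currentdate": "Date()"},  # assumed mapping for Jet SQL
}

def translate(meta_query: str, platform: str) -> str:
    """Expand meta-SQL tokens into the target platform's native syntax."""
    query = meta_query
    for token, native in META_SQL[platform].items():
        query = query.replace(token, native)
    return query

q = "select name from employees where hire_date <= %currentdate"
print(translate(q, "oracle"))  # ... where hire_date <= sysdate
print(translate(q, "db2"))     # ... where hire_date <= current date
```

The win is that application code only ever contains the meta form; adding a new engine means adding one dictionary entry per token rather than another copy of every query.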

Being able to do something like this of course assumes you have abstracted your database access layer and don't have VB code calling the DB directly all over the place.

For really complex stuff, DDL for example, they use a templating system: the application stores templates for the table-creation syntax of every database that is supported.


Re: Don't use non-standard SQL (none / 0) (#25)
by Anonymous Hero on Mon Jul 24, 2000 at 12:12:11 AM EST

oh lord.

i knew it wouldn't be long before someone spouted out a "but peoplesoft does it excellently!"

and it also happens to be the #1 worst-to-maintain product suite in existence.

because they chose to work with the least common denominator instead of using specific features of the RDBMS in question, they do NOTHING well.

i would prefer that they allow for pluggable modules that supply the actual SQL statement syntax rules for the RDBMS desired... then a generic standard SQL syntax can be converted into the specific one as needed (or saved independently, etc, etc.)

[ Parent ]
database independence (3.00 / 2) (#10)
by martin on Thu Jul 20, 2000 at 04:37:18 AM EST

There are many ways of doing it; Perl, Java, etc. all have ways of doing it. BUT remember, the guy's got to use VB. Perhaps this is a way to convince the PHB that VB sucks for this kind of thing and another development language should be used. Also, the Oracle select given completely sucks: using the 'choose' optimiser and analysing the tables produces much more readable/portable SQL. Then of course we could get into developing the app using PHP to provide the database independence!

Simplify your database design (4.00 / 1) (#11)
by chip on Thu Jul 20, 2000 at 05:09:43 AM EST

I agree that using simpler SQL is a good approach, but to make that practical you need to simplify your database design. On many projects I have found that a simple, well-optimized, and in some cases deliberately de-normalized (from third normal form) database, paired with application code that uses standard select, insert, update and delete and avoids built-in functions, works best.

Applications with complex joins that rely on the database for string processing, math functions, date functions, and so on are using a system that was optimized for data storage, searching, and retrieval to run part of the application. Avoiding correlated sub-queries, and breaking a complex query into several parts by storing intermediate results in temporary tables or in application memory, can also increase performance and allow a simpler database schema.

In general, design the database to minimize the number of tables needed in any join, trying not to go over three, and keep the joins standard. With this kind of approach you can send the same simple ANSI-standard SQL to almost any database, and usually increase database performance as well.
-- Like science fiction? Try Cordwainer Smith!
Write a data_module (5.00 / 1) (#13)
by jabber on Thu Jul 20, 2000 at 09:37:22 AM EST

We've got a similar issue where I work.
We've put together a dataModule object to handle all DB interactions.
Once the application code depends solely on the DM, the DM can then be multi-ported to connect to whichever database (of a known set, admittedly) it has available, and communicate with it accordingly.

As another poster has mentioned, avoid non-standard SQL. But sometimes you just can not do so, and sometimes there are DB specifics you have to cater to for one reason or another. By confining this sort of DB-specific stuff to a single object, you should only need to tweak that one thing to bring another DBMS into the fold.

My app is a pure-java effort, but in VB, it might even be best to stick the DM code into a DLL, and essentially forget about it from the point of view of the application. Hell, design it right and you can forget about the DBMS altogether, and go against a set of files. The app only needs the DLL at this point.
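A rough sketch of such a data module, in Python for brevity (class and method names are illustrative, not from any real library):

```python
# A "data module" that confines dialect-specific SQL to one place.
# Application code asks for a query by intent; the DM picks the syntax.
class DataModule:
    def __init__(self, dialect: str):
        # "ansi" covers Access/SQL Server style; "oracle" the old (+) style.
        self.dialect = dialect

    def left_join(self, left: str, right: str,
                  left_col: str, right_col: str) -> str:
        """Return a SELECT using this dialect's outer-join syntax."""
        if self.dialect == "oracle":
            # Pre-ANSI Oracle: (+) marks the optional side of the join.
            return (f"select * from {left}, {right} "
                    f"where {left}.{left_col} = {right}.{right_col} (+)")
        # ANSI / Access style.
        return (f"select * from {left} left join {right} "
                f"on {right}.{right_col} = {left}.{left_col}")

print(DataModule("oracle").left_join("afe", "users", "managerid", "userid"))
print(DataModule("ansi").left_join("afe", "users", "managerid", "userid"))
```

Swapping backends then means constructing the DM with a different dialect string; nothing above the DM changes.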

[TINK5C] |"Is K5 my kapusta intellectual teddy bear?"| "Yes"

ODBC (none / 0) (#14)
by opus on Thu Jul 20, 2000 at 11:27:09 AM EST

IANADBG (I am not a database guru), but isn't this the very problem that ODBC is meant to solve? (The Perl DBI is a similar solution to the same problem.)

Write your apps to the ODBC standard, then let the ODBC driver do the work of translating into database native SQL. You should be able to get ODBC drivers for just about any DB.

ADO & OLE-DB (none / 0) (#15)
by Anonymous Hero on Thu Jul 20, 2000 at 12:04:00 PM EST

Locked into VB the choice is simple.

Use ADO, a decent OLE-DB driver(s), and a lot of stored procedures.

There are no magic bullets.

Yes but... (none / 0) (#16)
by charsplat on Thu Jul 20, 2000 at 12:10:10 PM EST

I know that Perl and Java (and VB too) have various schemes such as DBI or ODBC or JDBC which provide a common way to initiate queries on a DBMS and return the information. None of these, as far as I know, handles the daunting task of converting the actual queries between Oracle's implementation of SQL and Access SQL.

I agree with just trying to keep the SQL very simple but there are places in our database where reports require hugely complex queries with 5-10+ table joins and it's annoying to maintain two copies of these things.

Re: Yes but... (none / 0) (#24)
by chip on Fri Jul 21, 2000 at 05:02:05 AM EST

In general, when you go over a five table join on any database, you are potentially creating a query that may run extremely slowly even for small amounts of data.

The solution to this is to do your joins two or three tables at a time into a temporary table. If your database doesn't support temporary tables you can create the table as a normal part of the schema, but use it only for staging the query. You may need several of these intermediate tables depending on the number of joins. The intermediate result table from the join is then joined with the next two or three tables and so on until the final result is obtained. (This assumes that you have good indexes for the primary and foreign keys of each table.)

Choose the order of the joins carefully to reduce the number of rows as quickly as possible. This can be done by analyzing the query in terms of the general comparative number of rows in the tables to be joined, the characteristics of the data and the goal of the query.

The reason this usually increases the speed of the query is that most databases, even the ones that can generate table statistics and make good choices in index use, are only really safely optimal in joining up to three tables at a time. Everything else could generate very large full and partial multiple Cartesian products that can take forever to process. Most database companies will deny this, but try it for yourself and see.

Also, if the query is too general purpose, it may produce this kind of need for many tables to be joined. Sometimes a re-thinking of the report or user screen can allow the presentation of the information in several distinct chunks which in some cases can be easier for the user to understand and also makes it possible to create several different smaller queries.
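A concrete sketch of the staging technique, using Python's built-in sqlite3 purely as an illustration (the tables and data here are made up):

```python
# Staging a multi-table join through a temporary table: join two tables,
# land the intermediate result, then join that against the next table.
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
    create table orders (id integer, cust_id integer);
    create table customers (id integer, region_id integer);
    create table regions (id integer, name text);
    insert into orders values (1, 10), (2, 11);
    insert into customers values (10, 100), (11, 101);
    insert into regions values (100, 'east'), (101, 'west');
""")

# Stage 1: join the first pair of tables, keeping only the columns
# the later stages need (this is what shrinks the working set).
con.execute("""
    create temporary table stage1 as
    select o.id as order_id, c.region_id
    from orders o join customers c on c.id = o.cust_id
""")

# Stage 2: join the intermediate result with the remaining table.
rows = con.execute("""
    select s.order_id, r.name
    from stage1 s join regions r on r.id = s.region_id
    order by s.order_id
""").fetchall()
print(rows)  # [(1, 'east'), (2, 'west')]
```

With more tables you would repeat the pattern, staging two or three tables at a time into stage2, stage3, and so on, ordering the stages to cut the row count early.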

Hope this helps.

-- Like science fiction? Try Cordwainer Smith!
[ Parent ]
how we do this... (none / 0) (#17)
by eries on Thu Jul 20, 2000 at 12:28:14 PM EST

We do our work in PHP, but we have approached this problem in two ways. First, we have a SQL Abstraction Layer that allows us to keep all of our SQL statements in one place, encapsulated as objects. These objects can perform various translations depending on the situation.

Second, we have a template library that handles most of our data loading for us. This library examines the database at run-time (via the cross-db PHPLIB) and generates SQL on the fly for the tables it finds there. It would be basically trivial to make this behaviour dependent on the database it is connected to.
Promoting open-source OO code reuse on the web: the Enzyme open-source project

look into EOF (none / 0) (#18)
by mbharrin on Thu Jul 20, 2000 at 12:49:51 PM EST

I think MS's ADO does what you want, but if not look into Enterprise Objects Framework (EOF) from NeXT (Apple).


As part of WebObjects, EOF runs on NT, OS X Server, Solaris, and HP-UX. It interfaces with ODBC on NT, and with Oracle, Informix, Sybase, FrontBase, OpenBase, and ERP applications; maybe others also.

Since OpenStep is no longer a NeXT/Apple product, I don't know about getting EOF as a standalone product.

Re: look into EOF (none / 0) (#19)
by mbharrin on Thu Jul 20, 2000 at 12:51:12 PM EST

it's also only $699 now. EOF is, of course, built into Mac OSX too.

[ Parent ]
Write your own abstraction layer (none / 0) (#20)
by Anonymous Hero on Thu Jul 20, 2000 at 02:08:05 PM EST

Why not write your own abstraction layer? Write a universal DB interface that your VB app talks to, then write specific DB implementations of your virtual DB interface below the virtual layer. That way, the VB code targets only the virtual DB language; you only need to make sure that the mapping from the virtual layer to each specific DB implementation works.
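The virtual-interface idea might look something like this (Python for illustration; all names here are hypothetical):

```python
# One abstract interface, one concrete implementation per engine,
# chosen once at startup. Application code sees only the interface.
from abc import ABC, abstractmethod

class Database(ABC):
    @abstractmethod
    def fetch_managers(self) -> str:
        """Return this engine's SQL for the managers report."""

class AccessDatabase(Database):
    def fetch_managers(self) -> str:
        return ("select * from afe left join users "
                "on users.userid = afe.managerid")

class OracleDatabase(Database):
    def fetch_managers(self) -> str:
        return ("select * from afe, users "
                "where afe.managerid = users.userid (+)")

def make_database(engine: str) -> Database:
    """Factory: pick the implementation for the client's engine."""
    return {"access": AccessDatabase, "oracle": OracleDatabase}[engine]()

db = make_database("oracle")
print(db.fetch_managers())
```

Each engine-specific query is written once, in its implementation class, and the rest of the app never mentions a dialect.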

Kill Access (none / 0) (#21)
by GreyLightning on Thu Jul 20, 2000 at 06:26:38 PM EST

Try to avoid Access DBs as much as possible, unless your data set is *really* small. We've had tables with just a few hundred text entries that grew to several hundred MBs in Access format; stored on the SQL backend, they were only a few megs. Regarding the format issue, I would second the notion of writing your own abstraction layer that transparently re-formats your query for the requested backend.

Modular code perhaps? (none / 0) (#22)
by pdcruze on Thu Jul 20, 2000 at 06:40:01 PM EST

I'm not aware of any middleware tools that will allow you to write, say, SQL-92 and then translate it into various SQL dialects. Sounds like maybe the ideal solution.

How often do the SQL statements in your code change? You could abstract them and the necessary support code into modules/libraries; e.g., have a FetchCustomersByDate subroutine for both Access and Oracle and just link in the different module at compile time. Don't know whether this is feasible.

You might also want to look at using stored procedures. I don't know bupkiss about Access but Oracle and other professional databases support these. Write the stored procedures once for each database and you shouldn't have to change any of your code. You simply call a stored procedure, pass it your arguments, and it returns the data.

Just a couple of quick ideas.

Multi-DB development (none / 0) (#23)
by AbMan on Fri Jul 21, 2000 at 12:31:36 AM EST

As many have pointed out, there are a number of offerings *designed* to allow access to multiple RDBMSes: Perl DBI, ODBC, JDBC, ADO, OLE DB, Borland Database Engine, etc. However, these only deal with your problem on a fairly simplistic level. It's been a while since I've done multi-RDBMS target development, but in my experience doing your joins, etc. through ADO rather than server-specific SQL leads to queries which run glacially slowly.

While it's all very well to say "use Perl" and "use PHP/WebObjects/...", such answers assume you have the time and inclination to convert your app from VB to the magic bullet development environment. And it also assumes you know or have the time to learn the new language. Few employers are keen to subsidise such efforts in my experience. Also, people seem keen to put forward PHP and other web development tools, but this is of course only pertinent if you are developing such applications (and not everybody does, of course).

It seems to me you want to have both the abstraction (and resultant ease and speed of application development) of an ADO or similar, but still gain the power and efficiency of having tweakable server-specific SQL. While such a product would be wonderful (and sell like crazy), no one's developed it yet to my knowledge. Maybe the experience (and the pain) you get from working with these apps could inspire you to write such a thing.

Access does not have stored procedures (someone didn't know). While it has its limitations, it can be successfully used for applications which have a limited number of users and do not have critical response-time requirements. You require regular compaction and good backups. The experience of the "Kill Access" poster does not match mine (though if I've got Oracle on hand I'd choose it in a New York second). Access is going to be everywhere because it's bundled with Office Pro, and you as a supplier are obliged to come up with sound reasons to convince a customer why he should spend more money on something else, particularly something which requires a full- or part-time DBA to administer it.

I think the suggestion of encapsulating your various server-specific queries inside an object, module, or DLL is going to be your best bet, as several others have suggested. A multi-tier application model (with a "database abstraction" tier) may be a good choice of architecture.
But if you want your app to be as efficient as possible, you've got to write your own SQL, IMHO.



All trademarks and copyrights on this page are owned by their respective companies. © 2000 - Present Kuro5hin.org Inc.
See our legalese page for copyright policies. Please also read our Privacy Policy.
Kuro5hin.org is powered by Free Software, including Apache, Perl, and Linux. The Scoop engine that runs this site is freely available under the terms of the GPL.
Need some help? Email help@kuro5hin.org.