Possum Manual
v0.9 8/5/2006

Table of contents
  1. What is it?
  2. How does it work?
  3. The data access object
    1. Hierarchy
      1. Multiple datasources
      2. Transactions
    2. Data mapping
    3. Enumerated types
  4. The annotation tags
    1. @TableDefinition
    2. @ColumnDefinition
    3. @IsPrimaryKey
    4. @IsEnumDataField
  5. Custom data types
  6. Beyond the obviously obvious
    1. Joins
    2. Many to many
    3. Auto increment / Identity columns
    4. Setting the datasource for a dao
    5. Beans without a primary key
  7. Logging
  8. Database support
    1. Database specific handlers
    2. What databases are supported?
    3. Writing your own database handler
  9. Miscellaneous
    1. Creating a datasource

What is it?

Possum is a simple tool that makes it easy to handle bean database persistence while still using straightforward JDBC. Possum has been designed to work with beans: it can write and read beans to and from the database. It takes away the nitty-gritty of dealing with JDBC while still allowing direct access to the actual sql statements.

How does it work?

You use annotations to inform possum about the object-to-relational mapping. In your bean java code, you define the table name on the class level and each column name on the field level. You use the @TableDefinition annotation tag for tables and the @ColumnDefinition tag for columns. Primary keys are marked with the @IsPrimaryKey tag. Let's look at an example. Suppose we have a database table that contains information about persons. The table has the following layout:


create table persons (
    person_id int primary key,
    first_name char(32),
    last_name char(32),
    age int
);

Now we can create a bean that represents a row in this table. The class level annotation tag TableDefinition defines the name of the table. The bean has a number of fields that should be mapped to database columns. These fields are all marked with the tag ColumnDefinition. Last but not least, we need a data access object (dao) to manage the bean. One dao will do for all beans of this type, so we can make the dao static. The dao uses generics; it needs the bean type so that you have access to typed dao methods. For the dao constructor, you can pass in a factory that produces your beans or (as in our case here) you pass in a class type. The dao will then build its own little factory that produces fresh beans using your bean's zero-argument constructor.


@TableDefinition(name = "persons")
public class Person {
    public final static DataSourceDao<Person> dao = new DataSourceDao<Person>(Person.class);

    @ColumnDefinition(name = "first_name")
    private String firstName;

    @ColumnDefinition(name = "last_name")
    private String lastName;

    @ColumnDefinition(name = "age")
    private int age;

    @IsPrimaryKey
    private PersonPK primaryKey;

    // Getters, setters...
}



The bean can have a primary key; this key is defined as a separate class. The primary key is specific to a particular bean class and has its own column definitions. It is usually better to define the primary key class as a static inner class of the bean, but for clarity, the two have been separated here.

Note that a bean is not required to have a primary key; you can also create beans without one. However, you should realize that you can never update such beans, since there would be no way to locate them in the database.


public class PersonPK {
    @ColumnDefinition(name = "person_id")
    private int id;

    public int getId() {
        return id;
    }

    public void setId(int id) {
       this.id = id;
    }
}

All that is left now is to tell the dao where to go for its database connections. You do this by setting the data source for the dao. If you are unfamiliar with data sources, check out creating a datasource.


    Person.dao.setDataSource( Resources.getInstance().getDataSource() );

Now you can use the class to read and write to and from the database.

Create a new person bean and write it to the database.


    // Create a new person class and populate it
    Person person = new Person();
    person.setFirstName( "AAAA");
    person.setLastName( "BBBB" );
    person.setAge( 34 );
    person.setPrimaryKey( new PersonPK() );
    person.getPrimaryKey().setId( 2 );

    // Save to database
    Person.dao.save( person );

    // Load a specific person from the database
    PersonPK pk = new PersonPK();
    pk.setId( 3 );
    person = Person.dao.load( pk );

Get all persons in the database.


    List<Person> persons = Person.dao.load();

Now what if we want to load all persons with an age greater than 40? In plain sql, this is a simple problem: you just add to the where clause. However, when dealing with beans, we want the sql to be hidden from us so we do not have to refactor all our sql statements when we add a field/column to a bean. Possum deals with this problem through the concept of sql templates. The data access object provides a method buildSql() that accepts sql templates and produces standard sql statements. A sql template is just a string with the potential occurrence of six keywords. These keywords are:

$t The name of the database table.
$c The list of column names ("C1,C2" as in "select C1,C2 from ...").
$uv The list of update values ("C1=123, C2='joe'" as in "update persons set C1=123, C2='joe' where ...").
$ic The list of insert column names ("C1, C2" as in "insert into persons(C1,C2) values ...").
$iv The list of insert values ("123, 4.321" as in "insert into persons(C1,C2) values(123, 4.321)").
$pk An expression listing the primary key in a where clause ("C1=2 and C2='today'").

Using sql templates, the statement that loaded all the beans above actually looks like this:

String sql = Person.dao.buildSql( "select * from $t" );

Translates to: "select * from persons"

Another example: the statement that saved the Person bean will actually first do an update. If this update did not touch any rows, it concludes that the bean is not in the database yet and does an insert. This is how it looks in sql templates (e is the bean being saved):

String sql = Person.dao.buildSql("update $t set $uv where $pk", e, getPrimaryKey(e));  // update

Translates to: "update persons set first_name='AAAA', last_name='BBBB', age=34 where person_id=2"


String sql = Person.dao.buildSql("buildSql("insert into $t($ic) values($iv)", e)); // insert

Translates to: "insert into persons(first_name, last_name, age) values( 'AAAA', 'BBBB', 34 )"
 
Sql templating is a very simple but powerful mechanism. It allows you to define plain sql at the lowest level while still maintaining a fair isolation from bean column changes, column additions, primary key changes and so on. Back to our problem: we wanted to load all persons with an age greater than 40. Here is how to do this:


    List<Person> persons =  Person.dao.load( "select * from $t where age > 40" );


But what if the cutoff age is not a fixed number but a variable? In this case, we simply build a regular prepared statement. You have to realize that the dao sql builder always builds strings; it does not use jdbc placeholders internally. This is great, since you are now free to use those placeholders in your application code.


    PreparedStatement ps = Person.dao.buildSql( "select * from $t where age > ?" );
    ps.setInt( 1, cutOffAge );
    List<Person> persons = Person.dao.parse( ps.executeQuery() );


You can follow the same strategy when joining against other tables:


    PreparedStatement ps = Person.dao.buildSql( "select p.* from $t p, sometable s where p.age > ? and p.person_id = s.person_id" );
    ps.setInt( 1, cutOffAge );
    List<Person> persons = Person.dao.parse( ps.executeQuery() );


The data access object

Hierarchy

The data access object is layered; it consists of a three-class hierarchy. You can use the dao on all three levels.


[Inheritance diagram]

  1. On the lowest level, there is the BaseDao object. The BaseDao does a discovery of the bean it supports. During this discovery phase, the dao will create the field to column mapping, will discover the primary key class etc. After this initial work, the main task of the BaseDao is to build sql statements from sql templates and to parse result sets into lists of beans.
  2. Next, there is the ConnectionDao. This data access object provides basic bean services (saving and loading beans) on a connection basis. Each call expects a java.sql.Connection object to be passed in.
  3. The third layer is the DataSourceDao. This dao hides the connections needed by the middle layer by retrieving them from a connection pool.
What layer do you choose? It all depends on what you are doing. Stick with the DataSourceDao layer if you can; it is the simplest interface. However, if you need to perform multiple operations within one single transaction, you will have to revert to the ConnectionDao layer. When you are down to straightforward JDBC, you will use the BaseDao layer: you build sql statements from sql templates and you have the BaseDao convert result sets into lists of beans. Time for an example. Here are three ways of loading a list of beans from the database, each operating at a different layer:


    // DataSource layer - layer provides connection itself
    List<Person> persons = Person.dao.load();

    // ConnectionDao layer - you provide the connection
    Connection c = ....
    List<Person> persons = Person.dao.load( c );

    // BaseDao layer - You provide the connection, you build and execute the query
    Connection c = ....
    PreparedStatement ps = c.prepareStatement( Person.dao.buildSql( "select * from $t" ) );
    List<Person> persons = Person.dao.parse( ps.executeQuery() );


To create a dao, you always create a DataSourceDao; you cannot create a ConnectionDao or a BaseDao directly. The reason for this is that the dao needs to do some discovery work at startup. It needs to talk to the database briefly to figure out what database it is connected to. In order to do that, the discovery process needs a connection, and currently it can only get one from the DataSourceDao. If you do not have a data source object, you should consider creating one, see creating a data source.

Multiple datasources

It is easy to build a possum based system that uses multiple data sources. Each dao has its own specific data source, so a mix of data sources can be created easily. It is even possible to create multiple daos per bean type, as shown in the sketch below.
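
For example, a bean class could expose two daos, each bound to its own data source. This is a minimal sketch; the data source names are hypothetical.


@TableDefinition(name = "persons")
public class Person {
    public final static DataSourceDao<Person> dao = new DataSourceDao<Person>(Person.class);
    public final static DataSourceDao<Person> archiveDao = new DataSourceDao<Person>(Person.class);

    ....
}

    // Each dao gets its own data source ("mainDataSource" and "archiveDataSource" are made up)
    Person.dao.setDataSource( mainDataSource );
    Person.archiveDao.setDataSource( archiveDataSource );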

Transactions

So how about transactions? Possum does not provide a transaction mechanism of its own; instead, it lets you hook into the existing JDBC transaction mechanism. This allows seamless integration with transactional applications. To use possum in a transactional way, simply use the ConnectionDao layer. Here is a simple example that saves multiple Person beans in one transaction.


    Connection c = ....

    try {
        c.setAutoCommit( false );

        // Save multiple persons in one transaction
        for( Person person : persons ) {
            Person.dao.save( c, person );
        }

        c.commit();
    } catch( SQLException se ) {
        c.rollback();    // note: rollback() itself may throw SQLException
    } finally {
        c.close();
    }

Data mapping

Possum maps database types to their direct java types. Simply said, whatever you would read from the database using jdbc, that is the type that possum expects in your bean class field. There are two exceptions to this rule:
  1. Jdbc does not support the java primitive type char directly. To store a char in a database using jdbc, you have to convert it to a String. Possum does this conversion for you automatically, allowing you to map a char(1) column in the database to a simple char field, as in the sketch after this list.
  2. Possum supports the use of enumerated types as fields. 
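
As a small sketch of the first exception, a char(1) column could be mapped like this (the column name is made up):


    // Maps a char(1) database column straight to a char field;
    // possum performs the char <-> String conversion automatically.
    @ColumnDefinition(name = "gender")
    private char gender;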

Enumerated types

You can use an enumerated type as a bean field, as long as you mark the field in your enumerated type that holds the value to store in the database for each value of the enumeration. You use the annotation tag @IsEnumDataField to do this. Here is an example:


public enum Color {
    RED( 'R' ),
    GREEN( 'G' ),
    BLUE( 'B' );

    @IsEnumDataField
    private char code;

    Color( char code ) {
        this.code = code;
    }

    public char getCode() {
       return( code );
    }
}

public class Cabinet {

    @ColumnDefinition( name = "color" )
    private Color color;

    ....

Note that in this example, the data field contains a char. That is similar to having a char field in your bean. You are free to choose the field type, as long as it maps correctly to the database column in your database, as in the sketch below.
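
For example, a hypothetical enumeration backed by an int column could look like this:


public enum Priority {
    LOW( 1 ),
    MEDIUM( 2 ),
    HIGH( 3 );

    // The value stored in the database for each enum constant
    @IsEnumDataField
    private int level;

    Priority( int level ) {
        this.level = level;
    }
}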

A two-way conversion is performed on the enumerated type using the field marked with @IsEnumDataField:
  1. When a bean is written to the database, the actual database value of the enumerated type is found by simply extracting the value of the field marked with @IsEnumDataField.
  2. When data is read from the database, possum needs to figure out which enumeration value fits a certain database value. At initialization, possum calls the values() method on the enumerated type, which provides all enum values. This information is used to build a hash map that translates from database value to enum value, giving a quick lookup (sketched below).
Note that when no enum value can be found for a certain database value, possum will set the enumeration field to null.
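
Conceptually, the lookup works like the following sketch. This illustrates the idea only, it is not possum's actual code; dbValue stands for the value read from the result set.


    // Build the reverse map once, at initialization
    Map<Character, Color> byCode = new HashMap<Character, Color>();
    for( Color color : Color.values() ) {
        byCode.put( color.getCode(), color );   // 'R' -> RED, 'G' -> GREEN, 'B' -> BLUE
    }

    // When reading a row, translate the database value back to an enum constant
    Color color = byCode.get( dbValue );        // null when no enum value matches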

Here is a further illustration of the use of enumerated types with possum. You could define an enhanced color class that adds more information per color type. Using this enum, you could directly retrieve the red, green and blue values from your bean, right after loading it from the database.


public enum Color {
    RED( 'R' , 255, 0, 0),
    GREEN( 'G' , 0, 255, 0),
    BLUE( 'B', 0, 0, 255 );

    @IsEnumDataField
    private char code;

    private int r;
    private int g;
    private int b;

    Color( char code, int r, int g, int b ) {
        this.code = code;
        this.r = r;
        this.g = g;
        this.b = b;
    }

    public char getCode() {
       return( code );
    }

    public int getRed() {
        return( r );
    }

    public int getGreen() {
        return( g );
    }

    public int getBlue() {
        return( b );
    }
}

The annotation tags

Currently, there are four annotation tags: the table tag, the column tag, a marker tag that marks a primary key field and a marker tag that marks the data field of an enumerated type. You will typically need the first three of them for a dao to work.

@TableDefinition

The table definition tag simply states the name of the table. There is no additional functionality for this tag at the moment. The tag should be positioned right in front of the bean class definition.


@TableDefinition(name = "persons")
public class Person {
    ....
}


@ColumnDefinition

The column definition tag defines the name of the column that is mapped to a particular field of the bean class. The tag should appear right in front of the field in the bean class. The field itself can be private; possum does not rely on setters and getters, it accesses the field directly. There is one additional property in the column tag that can be set: the isIdentity property. If a column is an identity column (a column whose value is determined by the database), this column should not be part of inserts and updates. Setting the isIdentity property to true informs possum that it should not include this column in inserts and updates.


@TableDefinition(name = "persons")
public class Person {
    @ColumnDefinition(name = "first_name")
    private String firstName;

    ....
}
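
For an identity column, you would add the isIdentity property to the tag, exactly as in the primary key example in the next section:


    @ColumnDefinition(name = "person_id", isIdentity = true)
    private int id;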


@IsPrimaryKey

Each bean should have a primary key class. The IsPrimaryKey tag tells possum which class is the primary key class. Possum will then inspect the primary key class and will discover the columns that have been specified in this class. It is fairly elegant to define the primary key as a static inner class inside the bean class. This clearly defines and exposes the relationship between the bean and its primary key class. Note that in the code below, we defined the primary key field "id" to be mapped to an identity column.


@TableDefinition(name = "persons")
public class Person {

    @ColumnDefinition(name = "first_name")
    private String firstName;

    @ColumnDefinition(name = "last_name")
    private String lastName;

    public static class PK implements PrimaryKey {
        @ColumnDefinition(name = "person_id", isIdentity = true)
        private int id;

        public int getId() {
            return id;
        }

        public void setId(int id) {
            this.id = id;
        }
    }

    @IsPrimaryKey
    private PK primaryKey;

    ....
}

@IsEnumDataField

If one of your bean fields is an enumeration, you will have to inform possum which field of the enumerated type should be stored in the database. You use the marker tag @IsEnumDataField to do this. See enumerated types.

Custom data types

Standard jdbc maps database data types to a fixed set of java data types. This is sufficient for most applications. However, sometimes it would be convenient if we could customize this mapping. For example, a very popular open source library for time keeping is Joda Time. Joda supports a DateTime class that provides more functionality than the standard java.util.Date and java.util.Calendar classes. If we want to persist a joda DateTime, we have to convert it to a java.sql.Timestamp. Reading from the database would require some plumbing code that converts from java.sql.Timestamp back to DateTime.

Possum can deal with all this plumbing for you, and it does so in a transparent way. Let's first take a look at how possum converts between jdbc data types and bean data types. In most cases, the jdbc data type matches the bean data type and no conversion is needed. In some cases, a conversion is needed from one data type to another. Possum has no fixed converters set up for any of these conversions. Instead, it has a mechanism that builds converter chains from converter building blocks. Possum has a collection of two-way converters; each of these converters can convert data from some class A to some class B and back. If possum needs to convert a data type from X to Y, it will try to combine one or more converters into a converter chain that accomplishes the conversion. Since all converters are two-way converters, a converter chain can convert from X to Y and from Y to X.

During the initial discovery phase, for each bean, possum will collect meta data on its database table and on its fields. For every single field, possum builds a converter chain that converts from the field type to the database type and back. A good example is reading a one-character field from the database into a char bean field. When reading the field from the database, a String is returned, while the field that needs to be populated has the type char. Possum will therefore create a converter chain that converts from Character to String and back.

Back to our joda DateTime example. So what do you need to do to use a joda DateTime field directly in your bean, and have it persisted to the database without any plumbing? All you need to do is write a little converter that converts a joda DateTime to a java.sql.Timestamp. Choose to convert to java.sql.Timestamp and not java.sql.Date, so you can apply the converter to DATE and to DATETIME fields. Here is a sample converter:

import org.joda.time.DateTime;

public class JodaDateTimeConverter implements Converter<DateTime, java.sql.Timestamp> {
    /**
     * Get the encoded type for this converter
     *
     * @return The encoded type
     */
    public Class getEncodedType() {
        return (java.sql.Timestamp.class);
    }

    /**
     * Get the decoded type for this converter
     *
     * @return The decoded type
     */
    public Class getDecodedType() {
        return (DateTime.class);
    }

    /**
     * Encode an object.
     *
     * @param decoded
     * @return The encoded value.
     */
    public java.sql.Timestamp encode(DateTime decoded) {
        return (new java.sql.Timestamp(decoded.getMillis()));
    }

    /**
     * Decode an object.
     *
     * @param encoded
     * @return The decoded value.
     */
    public DateTime decode(java.sql.Timestamp encoded) {
        return (new DateTime(encoded.getTime()));
    }
}

As you can see, the Converter interface expects you to implement two methods that tell possum about the data types that you can convert between (getDecodedType() and getEncodedType()). Then there are two methods for the conversion, one to encode, one to decode. Once you have written your converter, all that is left is to tell possum about the converter. You will have to register the converter with the data access object of your bean.


        Person.dao.registerConverter( new JodaDateTimeConverter() );
 
Once you have registered the converter, you can use the joda DateTime type in your bean fields. Note that in the example below, we override the customize() method to add our custom initialization to the data access object. The customize() method is called during the initialization of the data access object, which makes it the perfect place to add any customization.

@TableDefinition(name = "persons")
public class Person {

    public final static DataSourceDao<Person> dao = new DataSourceDao<Person>(Person.class) {
        public void customize() {
            registerConverter(new JodaDateTimeConverter());
        }
    };

    @ColumnDefinition(name = "birth_date")
    private DateTime birthDate;

    ....
}

Beyond the obviously obvious

Joins

In possum, joins are simply specified at the sql level. There is no hidden logic behind the scenes that builds join statements; it is all up to you. Here is a simple example:


    PreparedStatement ps = Person.dao.buildSql( "select p.* from $t p, othertable t where p.age > ? and p.person_id = t.person_id" );
    ps.setInt( 1, cutOffAge );
    List<Person> persons = Person.dao.parse( ps.executeQuery() );

Many to many

Quite often you will encounter the situation in which a table is linked to another table through a link table (many-to-many). Let's look at the case where the people in our persons table can be members of one or more groups. To define the person-group relationship, a third table is created that holds the person id and the group id.

create table persons (
    person_id int primary key,
    first_name char(32),
    last_name char(32),
    age int
);
create table linktable (
    person_id int,
    group_id int
);
create table groups (
    group_id int primary key,
    group_name char(32)
);

This structure allows a person to be a member of multiple groups and a group to have multiple persons. We just need to keep the link table up to date. When we add a person to a group, we insert a row into the link table. When we remove a group, all link table rows that point to that group need to be removed. When we remove a person, all rows in the link table that point to that person need to be removed.

To deal with these situations, you need the following:
  1. You want to know when a row is deleted from persons or from groups so you can update the link table.
  2. You want your link table modifications to happen in the same transaction as the delete.
On your dao, you can install a DaoListener. The DaoListener has methods that get invoked whenever possum deletes a row, loads a bean and so on. The listener also receives the connection object that is currently in use, which allows you to add to the ongoing transaction.

public interface DaoListener<E> {
    /**
     * Called whenever a new bean has been loaded with data. This is an ideal
     * hookup for post-load operations, such as loading dependent data etc.
     *
     * @param c The connection that was used to load the bean
     * @param e The loaded bean
     * @throws SQLException
     */
    public void afterLoad(Connection c, E e) throws SQLException;

    /**
     * Called whenever a bean has been deleted. This is an ideal
     * hookup for post-delete operations, such as removing entries
     * from link tables etc.
     *
     * @param c          The connection that was used to delete the bean
     * @param primaryKey The primary key of the bean
     * @throws SQLException
     */
    public void afterDelete(Connection c, Object primaryKey) throws SQLException;
}

To delete all rows from the link table when a person is removed:


    Person.dao.addDaoListener( new DefaultDaoListener<Person>() {
        public void afterDelete(Connection c, Object primaryKey) throws SQLException {
            PreparedStatement ps = null;

            ps = c.prepareStatement( LinkTable.dao.buildSql( "delete from $t where person_id=?" ) );
            ps.setInt( 1, ((PersonPK) primaryKey).getId() );
            ps.executeUpdate();
            ps.close();
        }
    } );

Note that we are not implementing the DaoListener interface directly. Instead, we extend a default DaoListener. The default listener offers dummy implementations of all methods in the DaoListener interface. By extending this class, we only have to implement the method we are interested in. Also, possible extensions of the DaoListener interface in future possum versions will not break our code.

Auto increment / Identity columns

If you are using an identity column, you can tell possum by setting isIdentity to true on that particular ColumnDefinition tag. To possum, this means that:
  1. This column should never be inserted into the database.
  2. This column should never be updated.
  3. If the identity column is part of a primary key and it is the only column in the primary key, you do not need to set the primary key when inserting a new bean; you can just leave it null (see the sketch after this list).
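
For example, with the identity based Person.PK mapping shown in the @IsPrimaryKey section, inserting a new Person could look like this:


    // The primary key is left null; the database assigns the identity value
    Person person = new Person();
    person.setFirstName( "AAAA" );
    person.setLastName( "BBBB" );
    person.setAge( 34 );

    Person.dao.save( person );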

Setting the datasource for a dao

You cannot use a dao until you have assigned a datasource to it. This is preferably an action that you perform only once. Where you perform this initialization depends heavily on your environment. If your application is standalone, the initialization should be no problem: you have a clear initialization path from your main method. It gets a little trickier when you are dealing with web applications. A simple solution for web applications is to implement a context listener:


package org.yourname;

import javax.servlet.ServletContextEvent;
import javax.servlet.ServletContextListener;
import javax.sql.DataSource;

public class MyContextListener implements ServletContextListener {

  public void contextInitialized(ServletContextEvent servletContextEvent) {
    DataSource dataSource = Resources.getInstance().getDataSource();

    Person.dao.setDataSource(dataSource);
  }

  public void contextDestroyed(ServletContextEvent servletContextEvent) {
  }
}

The contextInitialized method will be called by the container whenever the context is created. To register the listener, add the listener to your web.xml:

<!DOCTYPE web-app PUBLIC "-//Sun Microsystems, Inc.//DTD Web Application 2.3//EN" "http://java.sun.com/dtd/web-app_2_3.dtd">
<web-app>
  <!--
    Define application event listeners
  -->
  <listener>
    <listener-class>
      org.yourname.MyContextListener
    </listener-class>
  </listener>

   ....

Beans without a primary key

It is very well possible to create beans that do not have a primary key. Note, however, that there is a big difference between beans with and beans without a primary key. If you do not have a primary key, you can never locate a bean in the database, which means you can never update it. For beans without a primary key, a save() always translates to an insert. This can still be a useful mechanism, for instance when you want to store and load non-unique data.
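
A minimal sketch of such a bean (the table and column names here are made up):


@TableDefinition(name = "audit_log")
public class AuditEntry {
    public final static DataSourceDao<AuditEntry> dao = new DataSourceDao<AuditEntry>(AuditEntry.class);

    // No @IsPrimaryKey field: every save() translates to an insert
    @ColumnDefinition(name = "message")
    private String message;

    ....
}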

Logging

Possum uses the apache commons-logging library. Commons-logging hooks up to any installed logging system. In addition to commons-logging, possum supports specific loggers for sql statements and result sets. Every time a piece of sql is built or a result set is parsed, possum informs a list of SqlLogger listeners. You can add a SqlLogger to a dao using its addSqlLogger method. The SqlLogger interface looks like this:


    public interface SqlLogger {
        /**
         * Log a sql statement that just has been built by buildSql.
         *
         * @param sql The sql statement
         */
        public void logSql(String sql);

        /**
         * Log a result set that is about to be parsed.
         *
         * @param rs The result set
         * @throws SQLException
         */
        public void logResultSet(ResultSet rs) throws SQLException;
    }

There are two parts to a SqlLogger:
  1. The logSql method allows you to log the full sql statements right after they have been built.
  2. The logResultSet method allows you to inspect a ResultSet just before it gets parsed and converted into beans.
Possum comes with two implementations of the SqlLogger interface. The first is StdoutSqlLogger, which logs sql statements and result set meta data to standard out. The second is CommonsSqlLogger, which does the same thing but logs the results using commons-logging.
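
Hooking up one of these loggers is a one-liner:


    // Log every generated sql statement and parsed result set to standard out
    Person.dao.addSqlLogger( new StdoutSqlLogger() );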

Database support

Database specific handlers

Possum relies on database specific handlers for proper functioning. These handlers perform the following actions:
  1. Possum's sql templating mechanism builds sql statement strings. Possum does not use jdbc placeholders internally, so all values in the sql statement need to be written out in full in the string. For integers or strings this is not a problem, but for more complex data structures it quickly becomes database dependent: string formatting differs from one database to another. The database specific handler knows how to convert a java object of a particular type to a database specific string.
  2. When dealing with tables that have identity columns, inserting a new row into such a table sets the value of that identity column. To retrieve the new identity column value, a database specific statement needs to be submitted. For HSQLDB it is "call identity()", for SqlServer it is "select ident_current('persons')" and so on.
A possum dao detects the database type at startup. If the database type cannot be determined, a generic handler is installed and a message in the logs notifies you of this event.

What databases are supported?

Currently, only a few databases are supported:
  1. MySql
  2. SqlServer
  3. HSQLDB
Not exactly an impressive list, but do not let this slow you down: it is very easy to add your own database handler. It will probably take you just ten minutes.

Writing your own database handler

All you need to do is:
  1. Write your own implementation of the DatabaseHandler interface. 
  2. Before setting the data source on any of your daos, add your DatabaseHandler implementation to the DatabaseRegistry. The registry is a singleton that keeps a list of all available DatabaseHandler implementations. It has a register method that allows you to add your own implementation.

    DatabaseRegistry.getInstance().register( new MyDatabaseHandlerImpl() );

Once you have registered your DatabaseHandler implementation, possum should recognize the database and should install the correct handler. You will see something like this in the logs:


07/08/2006 11:12: DatabaseRegistry:92: Installed database handler for 'HSQL Database Engine'


Here is the interface you will need to implement:


public interface DatabaseHandler {

    /**
     * Get the product name for this database. This is the
     * string that needs to match the return value of the
     * standard JDBC call  DatabaseMetaData.getDatabaseProductName().
     *
     * @return A string indicating the type of database
     */
    String getProductName();

    /**
     * Convert an object to a string in order to include the object in a sql statement.
     *
     * @param object The object to be converted
     * @return A string representation of this object, usable in a sql statement
     */
    String toString(Object object);

    /**
     * Get the value of a just inserted identity column
     *
     * @param tableName The name of the table
     * @param c         The connection
     * @return The new value of the identity column
     * @throws SQLException
     */
    int getIdentityValue(String tableName, Connection c) throws SQLException;
}


For examples, see the possum database handlers; take a look at the GenericDatabaseHandler.
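
As a rough sketch, based on the HSQLDB identity statement quoted earlier (the class name is made up and the details may differ from possum's own handlers), an implementation could look like this:


public class MyHsqldbHandler implements DatabaseHandler {

    public String getProductName() {
        // Must match DatabaseMetaData.getDatabaseProductName()
        return( "HSQL Database Engine" );
    }

    public String toString(Object object) {
        // Quote and escape strings; write other values out as-is
        if( object instanceof String ) {
            return( "'" + ((String) object).replace( "'", "''" ) + "'" );
        }
        return( String.valueOf( object ) );
    }

    public int getIdentityValue(String tableName, Connection c) throws SQLException {
        Statement s = c.createStatement();
        try {
            ResultSet rs = s.executeQuery( "call identity()" );
            rs.next();
            return( rs.getInt( 1 ) );
        } finally {
            s.close();
        }
    }
}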

Miscellaneous

Creating a datasource

Here is a little sample singleton that creates a datasource once and then hands it out to its customers. You will need the apache commons libraries commons-dbcp and commons-pool. This particular example is for the HSQLDB database.


import javax.sql.DataSource;

import org.apache.commons.dbcp.BasicDataSource;

public class Resources {
    private final static Resources theInstance = new Resources();
    private DataSource dataSource;

    private Resources() {
        this.dataSource = createDataSource();
    }

    public static Resources getInstance() {
        return( theInstance );
    }

    public DataSource getDataSource() {
        return( dataSource );
    }

    public DataSource createDataSource() {
        BasicDataSource ds = new BasicDataSource();

        ds.setUrl("jdbc:hsqldb:hsql://localhost/test");
        ds.setDriverClassName("org.hsqldb.jdbcDriver");
        ds.setUsername("sa");
        ds.setPassword("");

        return (ds);
    }
}