Friday, December 2, 2011

SmartFox Server + Unity3d + Hibernate + JPA

This post is intended for anyone who's already familiar with Hibernate & JPA. It's not very obvious where everything goes; all the information is out there, but it's spread across numerous posts, none of which seem to include all of the relevant details.

So.. here's the quick and dirty of how to set this up to get it working in SmartFoxServer 2X.

1. Track down the following jars and place them in your SFS2X/lib directory.

antlr-2.7.6.jar
dom4j-1.6.1.jar
hibernate-jpa-2.0-api-1.0.0.Final.jar
hibernate3.jar
javassist-3.12.0.GA.jar
jta-1.1.jar

2. Add your JDBC driver to the SFS2X/lib directory.

3. Create a new project to hold your entities and add your entities to it.

4. Export the Entity project as a jar to the SFS2X/lib directory.


5. Create a META-INF/persistence.xml file under the SFS2X directory (note the lowercase file name; JPA expects it to be persistence.xml exactly).

6. Ensure that you add the jar-file element to the persistence.xml. It is resolved relative to the SFS2X root, not the META-INF directory, so you simply reference lib/foo.jar in the persistence.xml in order for Hibernate to scan your classes.

<jar-file>lib/gameEntities.jar</jar-file>
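
For context, a complete persistence.xml for this setup might look something like the following. This is just a sketch: the unit name, dialect, and connection settings are placeholders you'd swap for your own.

<?xml version="1.0" encoding="UTF-8"?>
<persistence xmlns="http://java.sun.com/xml/ns/persistence" version="2.0">
  <persistence-unit name="gameEntities" transaction-type="RESOURCE_LOCAL">
    <!-- Hibernate 3.x JPA provider class -->
    <provider>org.hibernate.ejb.HibernatePersistence</provider>
    <!-- Relative to the SFS2X root, as noted above -->
    <jar-file>lib/gameEntities.jar</jar-file>
    <properties>
      <property name="hibernate.dialect" value="org.hibernate.dialect.MySQL5Dialect"/>
      <property name="hibernate.connection.driver_class" value="com.mysql.jdbc.Driver"/>
      <property name="hibernate.connection.url" value="jdbc:mysql://localhost:3306/mygame"/>
      <property name="hibernate.connection.username" value="user"/>
      <property name="hibernate.connection.password" value="pass"/>
    </properties>
  </persistence-unit>
</persistence>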

7. Add your game project jar to the SFS2X/extensions/{project} directory.

8. Add an appender for Hibernate to the log4j properties file under SFS2X/config/

log4j.category.org.hibernate=INFO,consoleAppender
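
Once the jars and persistence.xml are in place, bootstrapping JPA from your extension looks roughly like this. A minimal sketch, assuming a persistence unit named gameEntities and the standard SFS2X SFSExtension base class; the extension class name is made up.

import javax.persistence.EntityManagerFactory;
import javax.persistence.Persistence;
import com.smartfoxserver.v2.extensions.SFSExtension;

public class GameExtension extends SFSExtension
{
    private EntityManagerFactory emf;

    @Override
    public void init()
    {
        // Name must match the persistence-unit in META-INF/persistence.xml
        emf = Persistence.createEntityManagerFactory("gameEntities");
    }

    @Override
    public void destroy()
    {
        // Release Hibernate's resources when the extension is unloaded
        if (emf != null)
            emf.close();
        super.destroy();
    }
}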

It's really that simple... Hopefully, this will save a few others some time when working with this configuration.

If you hit a magic number exception, you are probably running RC3, which seems to have a class file conflict. Use RC1b and that should solve the problem.

Friday, August 12, 2011

JDBC - Default ResultSet Fetching Strategy

While working on a section of our company's application that handles migration between various database types, I ran into an interesting issue that I had not really noticed before: there is a very wide disparity in how the various JDBC driver implementations handle the fetch size of a result set.

For example..

MySQL will always return all rows for the query, while DB2 defaults to returning exactly 32 rows. I found this odd, and as I worked on the application I noticed that the defaults were all over the map. Another major consideration when working with the fetch size was the type of data being migrated. Migrating smaller varchar-based tables wasn't really an issue; however, I found that I had to check each table to ensure that there were no BLOB types present, as the size of those columns could vary greatly (between 2-3 MB upwards of 1 GB). Thus, the data type of each column greatly affects the fetching strategy for each table.
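
As an illustration, one crude way to account for this is to sniff a table's column types before picking a fetch size. A sketch only; the thresholds and the helper method itself are mine, not a standard API:

import java.sql.Connection;
import java.sql.DatabaseMetaData;
import java.sql.ResultSet;
import java.sql.Types;

// Pick a conservative fetch size when LOB columns are present
static int chooseFetchSize(Connection conn, String table) throws Exception
{
    DatabaseMetaData meta = conn.getMetaData();
    boolean hasLob = false;
    ResultSet cols = meta.getColumns(null, null, table, null);
    while (cols.next())
    {
        int type = cols.getInt("DATA_TYPE");
        if (type == Types.BLOB || type == Types.CLOB || type == Types.LONGVARBINARY)
            hasLob = true;
    }
    cols.close();
    return hasLob ? 10 : 500; // illustrative values only
}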

Then I hit one of the big kickers.. since I had to support older drivers, I found myself fighting the older MySQL driver. In order to do any type of streaming with this driver, you must set the fetch size to Integer.MIN_VALUE. This seemed an odd choice and strays very far from the JDBC specification. Later builds did improve on this by allowing you to pass a useCursorFetch hint in the connection string.
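
For reference, streaming with the older Connector/J looks something like this (the query and table name are made up). The driver only streams when the statement is forward-only and read-only and the fetch size is exactly Integer.MIN_VALUE:

import java.sql.Connection;
import java.sql.ResultSet;
import java.sql.Statement;

static void streamRows(Connection conn) throws Exception
{
    Statement stmt = conn.createStatement(ResultSet.TYPE_FORWARD_ONLY,
                                          ResultSet.CONCUR_READ_ONLY);
    stmt.setFetchSize(Integer.MIN_VALUE); // the magic value that enables row-by-row streaming
    ResultSet rs = stmt.executeQuery("SELECT id, payload FROM big_table");
    while (rs.next())
    {
        // process one row at a time without buffering the whole result set
    }
    rs.close();
    stmt.close();
}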

Ultimately, it was a bit of a struggle to make five very different database types migrate between one another in a seamless manner. However, it was a rewarding and challenging experience. In a world where I usually work through abstractions such as Hibernate, understanding what it really takes to make a database seamless/brandless/neutral is a very painful task.

Helpful Links:

Good advice for large result sets: http://nullpointer.ph/questions/1103/iterating-over-a-large-db-resultset
Lists a few brands defaults: http://webmoli.com/2009/02/01/jdbc-performance-tuning-with-optimal-fetch-size/

Thursday, July 7, 2011

Relational Database Quirks with Auto Increment and Identity Columns

Recently, I was given the task of implementing a software solution to allow various databases to be migrated between one another. Under most circumstances, I would have opted for a third party solution to handle this issue, as numerous vendors exist. However, the project I was on did not allow for this option and I was forced to roll my own.

At first this seemed rather simple.. read the data, insert the data, add the constraints. One of the first issues I ran into was how auto incrementing is handled between systems.

  • Oracle: Key generation is handled via a sequence and trigger method (others exist.. I stuck with the before-insert trigger option; see the sketch after this list)

  • MySQL: Supports AUTO_INCREMENT as a column property

  • SQL Server: Uses the Identity property

  • Postgres: Similar to Oracle.. uses a sequence.

  • Derby: Supports auto increment better than most others. (Shocking.. didn't see that one coming)
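
For the Oracle route above, the sequence-and-trigger pattern looks roughly like this when issued over JDBC. A sketch: the table, sequence, and trigger names are made up.

import java.sql.Connection;
import java.sql.Statement;

static void addOracleAutoIncrement(Connection conn) throws Exception
{
    Statement stmt = conn.createStatement();
    // The sequence supplies the key values
    stmt.executeUpdate("CREATE SEQUENCE my_table_seq START WITH 1");
    // The before-insert trigger assigns the next value when no id is supplied
    stmt.executeUpdate(
        "CREATE OR REPLACE TRIGGER my_table_bi " +
        "BEFORE INSERT ON my_table FOR EACH ROW " +
        "WHEN (new.id IS NULL) " +
        "BEGIN SELECT my_table_seq.NEXTVAL INTO :new.id FROM DUAL; END;");
    stmt.close();
}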

While moving between these systems, I noticed a few quirks in how things are handled.. Most can be treated as post-insert operations; that is, the auto increment constraint is added after the data has been migrated into the new database. Microsoft, however, dropped the ball in this category and requires a feature called IDENTITY_INSERT: you define your column as an identity, and before you begin your inserts you call SET IDENTITY_INSERT ON for that table, then turn it off once your inserts have completed. This allows you to force the value of an identity column, which preserves key integrity when migrating between database types.
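
In JDBC terms, that dance looks something like this (a sketch; the table and column names are made up):

import java.sql.Connection;
import java.sql.Statement;

static void migrateWithIdentityInsert(Connection conn) throws Exception
{
    Statement stmt = conn.createStatement();
    // Allow explicit values in the identity column for this table
    stmt.execute("SET IDENTITY_INSERT dbo.players ON");
    // Explicit column lists are required while IDENTITY_INSERT is on
    stmt.executeUpdate("INSERT INTO dbo.players (id, name) VALUES (7, 'alice')");
    stmt.executeUpdate("INSERT INTO dbo.players (id, name) VALUES (9, 'bob')");
    // Turn it back off once the migrated rows are in
    stmt.execute("SET IDENTITY_INSERT dbo.players OFF");
    stmt.close();
}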

Another quirk of note was in MySQL. MySQL seems to have an issue with an auto_incrementing value of zero. If you have a column that is not defined as auto incrementing, insert a key with the value of zero, and then modify the column to make it auto incrementing, MySQL will try to reorder the zero value into a one. If you already have a key value of one, this creates a duplicate key exception.

I honestly think this is a bug in MySQL's handling of keys and auto_increment, as adding a constraint should never modify an existing value. I tried several options to resolve the issue so that any clients who had primary key values of zero would not have their systems hosed, including modifying the auto_increment value on the table before adding it to the column. But the one solution that finally worked was to not key the column until ready to add the auto_increment. So, when the table is created, you omit the primary key flag, insert the data, then alter the column, adding both the primary key and auto increment flags. This caused MySQL to behave correctly: it did not modify the existing values (preserving the zero pk value) and allowed future inserts to correctly increment off of the highest column value.
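
Spelled out over JDBC, the working order of operations is roughly as follows (a sketch; the table and column names are made up, and the behavior is per my testing above):

import java.sql.Connection;
import java.sql.Statement;

static void migrateMySqlTable(Connection conn) throws Exception
{
    Statement stmt = conn.createStatement();
    // 1. Create the table without the primary key flag
    stmt.executeUpdate("CREATE TABLE players (id INT NOT NULL, name VARCHAR(64))");
    // 2. Migrate the rows, zero pk value included
    stmt.executeUpdate("INSERT INTO players (id, name) VALUES (0, 'zero'), (1, 'one')");
    // 3. Add primary key and auto_increment in a single ALTER;
    //    existing values (including the zero) are left untouched
    stmt.executeUpdate("ALTER TABLE players " +
        "MODIFY id INT NOT NULL AUTO_INCREMENT, ADD PRIMARY KEY (id)");
    stmt.close();
}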