10/6/10

SciSpike Launched on Google App Engine

Today we launched the site of SciSpike, my new company. The site uses some cutting-edge technology, and we went from a wireframe prototype to the first version of the functional site in a very short time.

Creating a web site for our company posed some interesting architectural and technological challenges. For the implementation, we considered the usual suspects, such as the various popular Content Management Systems. They all provide a lot of useful functionality out of the box, but customizing and extending these solutions would be difficult and is likely to become an obstacle in the long term.

We decided to go with our own design. Architecting and designing software is something we know how to do and enjoy immensely. We debated the technologies we could use: Scala with its exciting Lift framework, the Play Framework, Ruby on Rails, and more conservative solutions based on JSPs with Spring or plain JEE. For the interactive parts of the web pages, the choices were equally exciting. There were the Ajax frameworks: the comprehensive Dojo (of which we are fond), the widely used jQuery, and a special contender: GWT.

The winner on the front end for dynamic and interactive content was GWT, plus two lines of jQuery for some neat visual elements on the home page. We had used GWT before and, in general, had very good experiences with it. For the non-interactive parts of the site, we generate HTML using our own generator. Creating our own generator was an easy choice, as we have been doing model-driven and generative development since the '90s, building DSLs for ourselves and our clients. For an example of a dynamic GWT control, check out the "Course Search" widget, which suggests courses as the user is typing. I guess you could call this a domain-specific search engine.

Here is a fragment from the site showing the control for searching courses:

[Screenshot: the course search control]
As the user starts typing, the control suggests titles of courses on that topic:

[Screenshot: course title suggestions appearing as the user types]
Play with it live on the SciSpike site. This is just the first version; we are planning to add a couple of improvements soon.
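For a flavor of how such a control is wired up in GWT, here is a minimal sketch of a suggest-style search box (the element id "courseSearch" and the hard-coded titles are illustrative; the real widget pulls its suggestions from the server):

    import com.google.gwt.core.client.EntryPoint;
    import com.google.gwt.user.client.ui.MultiWordSuggestOracle;
    import com.google.gwt.user.client.ui.RootPanel;
    import com.google.gwt.user.client.ui.SuggestBox;

    // A suggest box that offers course titles as the user types.
    public class CourseSearch implements EntryPoint {
        public void onModuleLoad() {
            MultiWordSuggestOracle oracle = new MultiWordSuggestOracle();
            // In the real widget, these suggestions come from the server.
            oracle.add("Mastering Data Modeling with IBM InfoSphere Data Architect");
            oracle.add("Developing Database Applications with Optim Development Studio and pureQuery");

            // Attach the suggest box to a placeholder element on the page.
            RootPanel.get("courseSearch").add(new SuggestBox(oracle));
        }
    }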

The choice on the back end was equally exciting, as we were inspired to move our IT into the cloud. In my old company, we ran a set of machines in a noisy server room full of wires and equipment accumulated over the years. Every now and then something would fail. A disk crashes. A CPU dies. A cooling fan stops spinning. A router quits. It is annoying, it costs time, and it is distracting. Today, SciSpike is running on Google App Engine, and so far we have been pleased with its deployment and administration features.

Persistence is handled by JPA. Our data volume is low, and JPA performance has been adequate for our current needs. JPA is also a relatively easy technology to port, in case we experience difficulties with GAE. So far, everything has worked out fine.
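As an illustration, here is a minimal sketch of what an entity looks like in this setup (the "Course" entity is hypothetical; "transactions-optional" is the persistence unit name used in the standard App Engine JPA configuration):

    import javax.persistence.Entity;
    import javax.persistence.GeneratedValue;
    import javax.persistence.GenerationType;
    import javax.persistence.Id;

    // A plain JPA entity; on App Engine it is stored in the datastore,
    // and the same class can be ported to a relational database later.
    @Entity
    public class Course {
        @Id
        @GeneratedValue(strategy = GenerationType.IDENTITY)
        private Long id;

        private String title;

        public Long getId() { return id; }
        public String getTitle() { return title; }
        public void setTitle(String title) { this.title = title; }
    }

The application then obtains an EntityManager in the usual way, for example via Persistence.createEntityManagerFactory("transactions-optional").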

I am happy that we don't need the server room anymore!

8/31/10

Eclipse Day 2010 at Google

This year's Eclipse Day attracted about 150 people, who gathered at the Google headquarters. As you can imagine, the day was full of interesting presentations. The Eclipse ecosystem is growing in size (33 million lines of code across all its projects) and complexity, but also in its capabilities.

For someone who both uses and teaches the Eclipse Modeling Framework (EMF) and the Google Web Toolkit (GWT), the recent inclusion of GWT as a target for code generation in EMF is a welcome addition. You can find more about this in the presentation by Ed Merks. It is a great start, and I can imagine its potential once the integration becomes more mature!

For all of you who could not attend, presentation slides have been made available here.

8/27/10

Why attend the IOD Conference?

Do you often ask yourself, "Should I attend conference X?" The major question is, "Is it worth it?" There are two aspects to this decision: what you spend (time and money) and what you gain. In these days of reduced budgets, the spending part is often pushed into the foreground, but you should view a conference or any other event as an investment. At the last IBM Information on Demand conference, a camera crew captured some of my reasons for attending IOD.

5/16/10

Best Practices of Data Modeling with InfoSphere Data Architect

How do you become a more effective modeler? It is not enough to be proficient with theory and principles; you also need to know your tools. In a web technical briefing, I shared my experiences with InfoSphere Data Architect, focusing on tool-specific best practices. We had over a hundred participants in the live meeting, and here is what the IBM newsletter said about it:

"And a second huge thank you to Dr. Vladimir Bacvanski for his fantastic job on the Best Practices in Data Modeling using InfoSphere Data Architect tech briefing."

And here is what the IBM Optim Twitter account said:

"@IBM_Optim: Awesome tech briefing on best practices in data modeling with #infosphere data architect. Thanks to Vladimir Bacvanski!"

Wow! It was a pleasure to share; we also had some great questions from the audience.

If you missed the talk, here are the slides:

5/8/10

Tech Briefing Preview: Data Modeling Best Practices with InfoSphere Data Architect

We are approaching the tech briefing on data modeling best practices on May 13! We will do a number of demos showing best practices that enable you to get the most out of InfoSphere Data Architect. IBM developerWorks did a short podcast interview on the subject in preparation for the event:

Here is the link for the podcast.

The best practices for InfoSphere Data Architect are all covered in detail in the course Mastering Data Modeling with IBM InfoSphere Data Architect.

4/27/10

Data Modeling: Interview at the IBM Silicon Valley Lab

Here is a short interview on data modeling recorded during my visit to the IBM Silicon Valley Lab. We talk about data modeling and how it fits in with current software development processes and approaches, such as agile development. I also comment on InfoSphere Data Architect, which integrates data modeling with other software development and design tools in the IBM Rational, InfoSphere, and Optim product families.

3/15/10

pureQuery Training: A Success Story from the Trenches

Recently I trained another group of developers and DBAs on pureQuery, IBM’s new data access platform for Java. Here I want to share my experiences from teaching the course Developing Database Applications with Optim Development Studio and pureQuery and reflect on the reception of pureQuery among experienced developers whose daily job is developing mission-critical applications. In their environment, data amounts to petabytes, and performance and throughput are essential. The database is DB2 and the computer is an IBM System z. Sometimes, a big machine is what you really need. An army of scooters is not always a replacement for a forty-ton truck when you need to haul a lot.

JDBC and ORM Experiences

Everyone in the student group had their share of Java enterprise applications under their belt, and they had had their ups and downs with Java persistence technologies. They have experienced the tediousness of writing JDBC code. They have seen the initial productivity gain provided by object-relational mapping (ORM) frameworks, only to be disillusioned by poor performance, difficulties in debugging and SQL tuning, and poor traceability. While ORM solutions provide a productive way of navigating persistent object associations, to the dismay of DBAs they typically leave the sophisticated data access and optimization features of modern databases untouched. Buried under too many layers of abstraction, the database languishes, its capabilities underutilized. In the end, performance and control suffer.
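To recall just how repetitive plain JDBC is, here is a minimal sketch of the boilerplate a single query requires (the Employee bean and EMPLOYEE table are illustrative):

    import java.sql.Connection;
    import java.sql.PreparedStatement;
    import java.sql.ResultSet;
    import java.sql.SQLException;
    import java.util.ArrayList;
    import java.util.List;

    // A minimal Employee bean (illustrative).
    class Employee {
        private long id;
        private String name;
        public long getId() { return id; }
        public void setId(long id) { this.id = id; }
        public String getName() { return name; }
        public void setName(String name) { this.name = name; }
    }

    public class EmployeeDao {
        // Every JDBC query repeats the same ritual: prepare, execute,
        // iterate, map each column by hand, and clean up.
        public List<Employee> findAll(Connection connection) throws SQLException {
            List<Employee> employees = new ArrayList<Employee>();
            PreparedStatement stmt =
                connection.prepareStatement("SELECT ID, NAME FROM EMPLOYEE");
            try {
                ResultSet rs = stmt.executeQuery();
                while (rs.next()) {
                    Employee e = new Employee();
                    e.setId(rs.getLong("ID"));
                    e.setName(rs.getString("NAME"));
                    employees.add(e);
                }
            } finally {
                stmt.close();
            }
            return employees;
        }
    }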


Enter pureQuery: The new Data Access Platform

The pureQuery data access platform is different from ORM solutions. It does not try to hide the database. Instead, pureQuery gives developers full access to the SQL, and it achieves high productivity not only through a developer-friendly API design, but also through the tooling provided by Optim Development Studio, an Eclipse-based IDE for the development of data-centric applications.

It was interesting to observe the students as they learned about the different capabilities of pureQuery and its tooling. We prepared a set of labs and many hands-on exercises where students could learn about pureQuery through realistic scenarios. While there are many interesting and productive features in pureQuery, here are some that participants in my training classes liked the most.

The Code and the API

The real winner is the pureQuery API, which enables application code that is significantly shorter and simpler than the JDBC equivalent. A big advantage is that pureQuery maps Java bean properties to database columns, while developers can write arbitrarily complex SQL. Annotated interfaces offer another, more disciplined option for defining the queries of a data interface. Just look at how little code you need to get all employees of a company.
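Here is a minimal sketch in the inline style, reusing the Employee bean from the JDBC sketch above (the exact API usage is illustrative):

    import java.sql.Connection;
    import java.util.List;

    import com.ibm.pdq.runtime.Data;
    import com.ibm.pdq.runtime.factory.DataFactory;

    public class EmployeeQueries {
        // One call runs the SQL and maps each row to an Employee bean
        // by matching column names to bean properties.
        public List<Employee> findAll(Connection connection) {
            Data data = DataFactory.getData(connection);
            return data.queryList("SELECT * FROM EMPLOYEE", Employee.class);
        }
    }

The annotated-interface style expresses the same query declaratively, and the tooling generates the implementation:

    import java.util.List;
    import com.ibm.pdq.annotation.Select;

    public interface EmployeeData {
        @Select(sql = "SELECT * FROM EMPLOYEE")
        List<Employee> getAllEmployees();
    }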


The Editor and the Tooling

Tedious database development tasks are greatly simplified. The way this is done resonated well with participants. While the tools provide automation and guidance, what is generated is simple, easy to understand, and easy to modify. The editors provide auto-completion for SQL statements inside Java strings.



A particularly interesting feature for many participants is that you can start with an arbitrary SQL query, and the tool will create everything needed to call that query and transport the values in and out of the query or stored procedure. In many organizations where data is a critical asset, this fits the normal workflow, where the data developers create the SQL. In addition, you can even create a web service out of a query – a welcome feature for data-centric services in an SOA.

SQL Outline


This is a view in the IDE that provides insight into the relationship between Java and SQL. While developers liked this feature, it is the DBAs who are thrilled by it – you can trace the SQL to the Java code, and vice versa. Tracing from Java to the database:

[Screenshot: tracing from the Java code to the SQL]
And here is the tracing from database to Java:

[Screenshot: tracing from the database to the Java code]
This feature works when the SQL is visible in the application, as a string or in an annotation. But it also works when optimizing JPA or Hibernate applications. Even though their SQL is generated on the fly, at runtime, the pureQuery runtime can capture the SQL sent to the database and trace it to the line of the user's program that emitted it.


This traceability makes the work of developers and DBAs much easier. It eliminates guesswork. Developers and DBAs can now sit together and troubleshoot, optimize the code, or tune the queries.

When you run an application using pureQuery, one option is to collect performance metrics. This immediately enables developers to spot inefficient queries. Here are the metrics from an application run:

[Screenshot: performance metrics collected from the application run]
Static SQL

If DB2 DBAs have been missing one thing from Java persistence applications, this is it. With static SQL, you send your SQL to DB2 during development. It gets prepared at a high optimization level and runs faster than dynamic SQL that the database first sees at runtime. Security is also enhanced, as it is set up for a package – a group of SQL statements – and not just at the table level. Here is the representation of the DB2 packages created from our Java application:

[Screenshot: DB2 packages created from the Java application]
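Switching between dynamic and static execution is a matter of pureQuery runtime configuration rather than code changes. Here is a rough sketch of the client optimization settings, assuming the property names pdq.captureMode, pdq.executionMode, and pdq.pureQueryXml, with an illustrative file name:

    # pdq.properties (illustrative sketch)
    # Step 1: run the application in capture mode to record the SQL it issues
    pdq.captureMode=ON
    pdq.pureQueryXml=employeeApp.pdqxml

    # Step 2: after binding the captured SQL into DB2 packages,
    # switch to static execution so the bound packages are used
    # pdq.executionMode=STATIC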

Conclusion

These are just some highlights – the features that immediately caught the attention of my students, since they directly translate into savings of time and effort.

Who is pureQuery for? One thing during the training that was always well received is that pureQuery does not hide the database, yet it enables high productivity. Is pureQuery an ideal solution for everyone? Certainly not. We know that there are no silver bullets. If you do not care much about your database, if you have only a handful of users, or if performance is not an issue, then ORM frameworks and JPA may work out just fine. On the other hand, if your database is an essential part of your business and you have strict and demanding performance requirements, you need to look into pureQuery. And if you are running on System z, the performance gain translates directly into money saved through lower CPU utilization. You can find some telling benchmarks in a blog post by Simon Harris on performance results.


While developers and DBAs obviously liked the technical advantages, it was the managers who smiled when they heard about the savings.

If you want to learn more about how to productively apply pureQuery in your team, check out the Developing Database Applications with Optim Development Studio and pureQuery course description. For many clients, we customize the course to include the client’s coding standards, frameworks, and other specifics.

Happy persistence!

1/1/10

2010, Scala, and the Quiet Joy of Programming

It is the morning of the first day of 2010. The smell of freshly baked croissants and coffee spreads through the house. A view through the window: hummingbirds are flying in the garden. The Scala nightly build has just finished downloading. With a quick git init, the repository is created. The cursor blinks, expectant. 2010: it looks like a good year!