Most of the advice is tool-agnostic. However, some of it is tool-specific, reflecting the features of the tools I used. The tools I was using were:
- IBM InfoSphere Data Architect: An Eclipse-based data modeling and transformation tool.
- IBM Optim Development Studio: A development environment for data-centric Java applications. Its notable feature is its support for pureQuery, a new high-performance Java data access platform.
- IBM Optim Data Administrator: A database administration and refactoring tool.
These tools recently changed names. A handy reference for old and new names is here.
What follows are the top issues, with advice on how to apply these tools to resolve them. Why demonstrate the tools? Because experiencing the automation and enhanced productivity firsthand turns out to be decisive in winning acceptance for modeling; merely preaching about the benefits does not help. People need to see the tools in action in order to believe.
Misconception: Models as “Pretty Pictures”
- Models are viewed as pictures that do not relate to real systems.
- Tradition of modeling only for documentation.
- Often done with Visio or other drawing tools.
- Show Data Architect features for generating physical models and then DDL.
- Reverse engineer a database.
- Transform models.
- Show integration with Optim Development Studio and Optim Data Administrator: exercise continuity of development from the data models to the Java application, refactor the database, and analyze the impact of the database changes on the SQL in the application.
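The model-to-DDL step above can be shown in miniature. The following Python sketch (a hypothetical stand-in, not Data Architect's actual transformation or output) turns a simple logical-entity description into a CREATE TABLE statement and validates it against an in-memory database; the entity and attribute names are invented for illustration:

```python
import sqlite3

# A toy "logical model": entity name plus attributes with SQL types.
# Hypothetical stand-in for what a modeling tool holds internally.
customer_entity = {
    "name": "CUSTOMER",
    "attributes": [
        ("CUSTOMER_ID", "INTEGER PRIMARY KEY"),
        ("FULL_NAME", "VARCHAR(100) NOT NULL"),
        ("EMAIL", "VARCHAR(255)"),
    ],
}

def entity_to_ddl(entity):
    """Transform the logical entity into physical DDL --
    the repetitive step a modeling tool automates."""
    cols = ",\n  ".join(f"{name} {sql_type}"
                        for name, sql_type in entity["attributes"])
    return f"CREATE TABLE {entity['name']} (\n  {cols}\n);"

ddl = entity_to_ddl(customer_entity)
print(ddl)

# Check that the generated DDL actually runs.
conn = sqlite3.connect(":memory:")
conn.execute(ddl)
```

The point of the demo is exactly this continuity: the hand-written DDL step disappears, so the model and the schema cannot drift apart.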
Misconception: Modeling is Hard
- Modeling is avoided; instead, DDL is written by hand or generated by object-relational mapping tools.
- Lack of training in modeling + familiarity with existing tools.
- If the problem domain is hard, models are hard to make.
- Avoiding modeling creates a (temporary) feeling of progress.
- Demonstrate the benefits by showing the continuity from logical to physical models and DDL.
- Demonstrated benefits motivate learning and inclusion of modeling in the development process.
- Visualize the models created by hand or by an object-relational mapping tool, and point out the problems related to maintenance and scalability.
Misconception: “Big Upfront Design”
- In some agile projects, modeling is rejected on the grounds that complete data models cannot be created up front, because not all of the application's needs are known.
- Lack of knowledge on how to do agile data modeling.
- Lack of understanding of what modern tools can do.
- Demonstrate iterative modeling with Data Architect and refactoring using Optim Data Administrator.
- In code: show impact analysis with Optim Development Studio.
- Database can be refactored and data migrated.
- Educate how to do agile data development.
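The refactor-and-migrate point can be illustrated without any particular tooling. Below is a minimal Python/sqlite3 sketch of one classic database refactoring, performing a column rename as create-copy-swap so existing data survives the schema change (the table, column names, and sample rows are hypothetical):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE CUSTOMER (ID INTEGER PRIMARY KEY, NAME TEXT)")
conn.execute("INSERT INTO CUSTOMER VALUES (1, 'Ada'), (2, 'Grace')")

# Refactoring: rename NAME to FULL_NAME using the portable
# create-copy-swap pattern, migrating the existing rows along the way.
conn.executescript("""
CREATE TABLE CUSTOMER_NEW (ID INTEGER PRIMARY KEY, FULL_NAME TEXT);
INSERT INTO CUSTOMER_NEW (ID, FULL_NAME) SELECT ID, NAME FROM CUSTOMER;
DROP TABLE CUSTOMER;
ALTER TABLE CUSTOMER_NEW RENAME TO CUSTOMER;
""")

# The data survived the schema change.
rows = conn.execute("SELECT ID, FULL_NAME FROM CUSTOMER ORDER BY ID").fetchall()
print(rows)
```

A refactoring tool automates this pattern and, combined with impact analysis on the application's SQL, makes incremental schema evolution routine rather than frightening.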
Problem: Analysis Paralysis
- Modelers don’t know how to start; no models are created, and the project descends into coding chaos.
- Lack of knowledge of pragmatic modeling techniques.
- Start from concrete examples, then generalize into models.
- Work with a domain expert to find and discuss examples. Concrete examples will be needed anyway later for testing. This is a proven technique from domain (or business) modeling approaches. It is best done in a workshop with a small team.
- Consider prototyping. In our experience, the Eclipse Modeling Framework (EMF) is an excellent prototyping tool. We would create Ecore models in Rational Software Architect and then use the EMF generator to create a prototype application. The generated application is then used to validate our understanding of the data structure by interacting with the prototype.
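EMF itself is a Java and Eclipse technology; as a language-neutral sketch of the same generate-a-prototype-from-a-model idea, the following hypothetical Python snippet builds an interactive prototype class from a small model description (the `Order` entity and its fields are invented for illustration, not EMF API):

```python
from dataclasses import make_dataclass

# Toy model description -- a hypothetical stand-in for an Ecore model.
order_model = {"name": "Order", "fields": ["order_id", "customer", "total"]}

# "Code generation" step: build a prototype class from the model,
# loosely analogous to running the EMF generator on an Ecore model.
Order = make_dataclass(order_model["name"], order_model["fields"])

# Interacting with the prototype to validate the data structure
# against concrete examples from the domain expert.
sample = Order(order_id=1, customer="ACME", total=99.50)
print(sample)
```

The value is the feedback loop: when the domain expert tries to express a real example and the generated structure cannot hold it, the model is wrong and gets fixed before any application code exists.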
Many of our experiences in introducing modeling and making modelers more productive went into our new course: Mastering Data Modeling with IBM InfoSphere Data Architect. Check out the outline here.
I will elaborate on some of the data modeling techniques in future posts.