OAT: One Admirable Tool
A first warning: I'm still lacking the time to write the kind of articles I like, which are focused, technical and relatively deep.
Having said that, I'll proceed with yet another "chat-like" article, and in it I'd like to write about the Open Admin Tool, or simply OAT.
I attended two great sessions at the IIUG 2010 conference which, although not specifically about OAT, were closely related to it: John Miller's session about Auto Update Statistics, and Hyun-Ju Vega's session covering SQLTRACE. Unfortunately I had to miss the session about OAT itself from Erika Von Bargen (I was presenting at the same time). Now, why did I like these sessions so much? The speakers were great, but they were not the reason. The reason is fairly simple: they changed the way I thought about some things, and they made me look into OAT again. I don't recall having written much about OAT. To be honest, I thought it was nice, but perhaps more aimed at people just starting with Informix. Long-time Informix DBAs obviously have a pack of scripts for the trivial stuff: running update statistics (I wrote my own scripts based on previous work from Informix staff, and of course we all know dostats from Art Kagel), checking performance issues (sessions with the highest CPU usage, tables with full scans, tables with the most reads/writes, etc.), and collecting historical data from the instance (profile counters, table access stats, etc.).
So: a nice tool, much better than previous attempts like Informix Server Administrator (ISA), good looking, but maybe for "newbies". Honestly, that was more or less what I had in mind, even though I wasn't fully conscious of it.
Then I attended these two sessions, and in both we were guided through lots of "accessories". John showed several tasks, the task scheduler and the intelligence behind auto update statistics, and in doing so, perhaps without even noticing it, he effectively showed the audience how easy it is to work with OAT. Then Hyun-Ju guided us through a real-case scenario where SQLTRACE was key to solving a customer issue. Again, OAT was not the main focus, but the ease of use, and how much we can take advantage of SQLTRACE through OAT's SQL Explorer, surprised me once more.
Then a real-life scenario came along... Today, while working at a customer site, I was asked to help the DBA team understand the big memory consumption of some sessions in a development instance. The development team hadn't noticed anything strange, since the engine was doing what was expected, but the DBAs noticed that some sessions occasionally grabbed around 8-10 times the maximum memory usually consumed by most of their sessions. The somewhat strange thing was that a session would allocate the memory, but by the time we looked at it, the memory was mostly free (although still allocated to the session), so no strange pool usage was evident.
Since part of my job there is also to pass on some knowledge, I showed them how to set up SQLTRACE for one specific user, and we asked that user to run the job(s). After lunch we had everything explained in the SQLTRACE buffers. onstat -g his could have been used to gather the evidence, but I decided to give OAT a try. And there they were, all the session's steps, presented graphically. We noticed a huge number of recursive procedure calls (actually proc_a calls proc_b and proc_b calls proc_a) inside a FOREACH loop. A lot of stack was needed to hold the context during execution, so the memory was allocated and then became free.
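As a rough sketch of the setup we used: SQLTRACE can be switched on at runtime through the SQL Administration API in the sysadmin database. The exact argument values below (number of traces, buffer size, level) are illustrative, and the session id is hypothetical; check the SQLTRACE documentation for your engine version before relying on them.

```sql
-- Connect to the sysadmin database as informix (or another DBSA user)
DATABASE sysadmin;

-- Turn SQL tracing on in "user" mode, so only explicitly
-- selected sessions are traced (arguments: number of trace
-- buffers, buffer size, tracing level, tracing mode)
EXECUTE FUNCTION task("set sql tracing on", 1000, 2, "low", "user");

-- Enable tracing for the session we want to watch
-- (47 is a hypothetical session id taken from "onstat -g ses")
EXECUTE FUNCTION task("set sql user tracing on", 47);

-- The captured statements can then be inspected with
-- "onstat -g his" or, far more comfortably, in OAT's SQL Explorer
```

Nothing here requires an engine restart, which is precisely what made it practical to demonstrate on a live development instance.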
Could I have done this with other tools? Probably. Lots of onstats, scripts, etc. Would it have been as easy? Not even in my best dreams!
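For readers who want to picture the pattern OAT exposed, here is a hypothetical reconstruction in SPL (procedure and table names are invented, and the syntax is simplified): two procedures calling each other inside a FOREACH loop. Every nested call keeps its context on the session's stack, which explains why the memory was allocated during execution and later showed up as free, yet still owned by the session.

```sql
-- Hypothetical reconstruction, not the customer's actual code
CREATE PROCEDURE proc_b(level INT); END PROCEDURE;  -- stub so proc_a compiles

CREATE PROCEDURE proc_a(level INT)
    IF level > 0 THEN
        CALL proc_b(level - 1);   -- mutual recursion: a -> b -> a -> ...
    END IF;
END PROCEDURE;

DROP PROCEDURE proc_b;
CREATE PROCEDURE proc_b(level INT)
    IF level > 0 THEN
        CALL proc_a(level - 1);
    END IF;
END PROCEDURE;

CREATE PROCEDURE run_job()
    DEFINE v_id INT;
    -- some_big_table is a placeholder; each row triggers a deep call chain
    FOREACH SELECT id INTO v_id FROM some_big_table
        CALL proc_a(100);
    END FOREACH;
END PROCEDURE;
```

The depth of the call chain, multiplied by the number of rows in the loop, is what drove the session's memory far above its usual peak.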
The true magic of OAT is not OAT itself. It's how OAT glues several new features together and makes them usable. Think about some of the things we can do in OAT:
- Manage and use SQL history
- Manage all the engine programmable tasks
- Generate performance and usage reports
- Change the engine configuration
- Optimize storage
- Manage your cluster and replication
- SQLTRACE
- Database scheduler
- Automatic tasks and internal engine profiling
- SQL API
- Compression
- MACH 11
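Most of these features are surfaced through the SQL Administration API, which is also what OAT calls behind the scenes. A minimal sketch (the dbspace name, chunk path and sizes are hypothetical; verify the exact task() arguments in the sysadmin documentation for your version):

```sql
DATABASE sysadmin;

-- Force a checkpoint, roughly equivalent to "onmode -c"
EXECUTE FUNCTION task("checkpoint");

-- Create a dbspace without touching the command line
-- (arguments: dbspace name, chunk path, size, offset)
EXECUTE FUNCTION task("create dbspace", "dbs_data1",
                      "/ifxchunks/dbs_data1_c1", "2 GB", "0");

-- The scheduler's tasks and sensors live in plain tables,
-- so they can be queried and managed with ordinary SQL
SELECT tk_name, tk_type, tk_frequency
  FROM ph_task
 ORDER BY tk_name;
```

Because everything is SQL, anything OAT does through a browser can also be scripted, audited or replicated by a DBA's own tools.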
Is it perfect? Probably nothing is... I think one area could be improved, and here I'm just echoing a customer's thoughts: OAT should provide a view and a working mode for enterprise-wide environments, meaning some of its features should be aware of all the instances configured in a customer's environment. Some simple examples:
- A single alarm viewer page for all the instances (maybe "all instances belonging to one group")
- The monitors could collect/centralize their data into a common repository
- The monitors should be aware of the instances in a MACH-11 cluster
- Replicate each instance's captured data to a common repository (either by ER or by simple tasks)
- Create a plugin for showing the data in that centralized repository
- Change some of the monitors to also INSERT the server name, and direct the data to the primary server (for read-only secondaries) using, for example, the Connection Manager
I'd like to end this article by sending my compliments and appreciation to the OAT development team, and by leaving a few links to help you understand what's behind OAT:
- Open Admin Tool home: http://www.openadmintool.com/
- Manage your Informix database with the IDS OpenAdmin Tool: http://www.ibm.com/developerworks/data/library/techarticle/dm-0807kudgavkar/
- Build plug-ins for IBM's OpenAdmin Tool for Informix Dynamic Server: http://www.ibm.com/developerworks/data/library/techarticle/dm-0808vonbargen/
1 comment:
Thank you, Fernando, for this approach to OAT.
In fact, it would be great to see information gathered from all the instances in an OAT group in consolidated views.
We already have one view of this kind: the Google Maps view, where we can see the status of the instances in an OAT group.
Regarding MACH-11, I think the collection sensors and OAT should be reviewed: OAT's charts and other views are designed from a single-instance perspective.
How can we monitor the profiles of RSS, HDR and SDS (read-only) instances with OAT and the built-in sensors? The idea would be to continue using OAT's "Performance History".