
Managing Applications Is Like Managing In Baseball

We're three months into the Major League Baseball season and the Philadelphia Phillies have the best record in baseball. The Houston Astros have the worst. Even an average fan knows the home team's league standing. And any fan who owns a team-branded article of clothing can report the home run tally of the team's slugger and the ERA of the pitching staff.

The sport of baseball is replete with metrics on its KPIs (key performance indicators). Some stats date back a century, which is the context for one oft-quoted declaration by catching great and legend of the malapropism, Yogi Berra: "I knew the record would stand until it was broken."

IT organizations can learn a valuable lesson from Major League Baseball. IT should publish its "application management box score." For every critical transaction or process, IT should be able to report the AVR (average response time), the AAR (aggregate adoption rate) and the RTQA (run time quality average).
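As a rough illustration, here is a minimal Python sketch of how such a box score might be computed from per-transaction records. The field names, the two-second "slow" threshold, and the exact formulas are assumptions made for the example, not definitions from any standard.

    # Hypothetical sketch: compute an "application management box score"
    # from a list of transaction records. Field names and thresholds are
    # illustrative assumptions.
    from dataclasses import dataclass

    @dataclass
    class Transaction:
        user_id: str
        response_time_ms: float
        succeeded: bool

    def box_score(transactions, entitled_users, slow_threshold_ms=2000):
        """Return (AVR, AAR, RTQA) for one critical business transaction."""
        if not transactions:
            return 0.0, 0.0, 0.0
        # AVR: average response time across all real executions
        avr = sum(t.response_time_ms for t in transactions) / len(transactions)
        # AAR: share of entitled users who actually ran the transaction
        aar = len({t.user_id for t in transactions}) / entitled_users if entitled_users else 0.0
        # RTQA: share of executions that completed successfully and on time
        good = sum(1 for t in transactions
                   if t.succeeded and t.response_time_ms <= slow_threshold_ms)
        rtqa = good / len(transactions)
        return avr, aar, rtqa

    txns = [Transaction("u1", 850, True), Transaction("u2", 2400, True),
            Transaction("u1", 1200, False)]
    print(box_score(txns, entitled_users=10))  # e.g. (1483.33, 0.2, 0.33...)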

Such clear, undisputed metrics would provide a 'lingua franca' for the level of service being delivered for critical business applications -- which is fundamental for IT/Business alignment. Of course, that is the goal driving the estimated $26 billion invested in IT Management tools each year.

But if you ask the average business stakeholder whether IT service levels are improving steadily, year over year, you are not likely to see a lot of smiles and nodding heads. IT execs could benefit from a dose of Yogi Berra's wisdom when considering how that money is being spent.


Theory and Practice -- the Importance of Real Metrics

Yogi reportedly once said, "In theory there is no difference between theory and practice. In practice there is." There is a takeaway message for IT execs in this statement: far too many of the application management tools still broadly deployed in IT shops today are based on simulations of reality or rely on proxies for real metrics.

For example, consider the first generation of application performance monitoring tools -- still the predominant tools in use today. The seminal approach to measuring application performance is to measure the resources used by the application, and the processing times, at each tier of the back-end infrastructure. The theory is that if the application is not causing a resource constraint at the database server, the network server, or the application server (e.g., IBM CICS, J2EE, or .NET), then the application must be performing well.
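A minimal sketch of that theory in code, assuming per-tier resource samples and threshold values that are purely illustrative: if no tier breaches its thresholds, the application is declared healthy.

    # Hypothetical sketch of first-generation, tier-centric monitoring:
    # each back-end tier reports resource usage and processing time, and
    # the application is declared "green" when no (assumed) threshold is breached.
    TIERS = {
        "web_server": {"cpu_pct": 45, "mem_pct": 60, "avg_proc_ms": 30},
        "app_server": {"cpu_pct": 55, "mem_pct": 70, "avg_proc_ms": 120},
        "database":   {"cpu_pct": 40, "mem_pct": 65, "avg_proc_ms": 80},
    }
    THRESHOLDS = {"cpu_pct": 80, "mem_pct": 85, "avg_proc_ms": 500}

    def all_systems_green(tiers, thresholds):
        """True when no back-end tier breaches a resource threshold --
        the theory being that this implies good end-user performance."""
        return all(sample[metric] <= limit
                   for sample in tiers.values()
                   for metric, limit in thresholds.items())

    print(all_systems_green(TIERS, THRESHOLDS))  # True -- yet users may still see a slow app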

In practice, this monitoring approach often results in a situation where "all systems are green" on the back end while the business constituency is complaining that the application is slow or unresponsive. So how do you fix the end-user's perceived issue in a situation like this?

That lack of visibility into the real user experience of critical business applications led to the first generation of end-user experience monitoring tools -- synthetic transaction engines. These tools leverage transaction scripts -- which represent how end-users would, in theory, execute a transaction -- to execute key application functionality on a desktop in the end-user environment (e.g., a bank branch office, a remote manufacturing site).
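A minimal sketch of such a synthetic transaction script, using only the Python standard library; the URLs and the scripted steps are placeholders, not an actual deployment.

    # Hypothetical synthetic transaction script: replay a fixed sequence of
    # requests from a probe machine and record the scripted response time.
    # The URLs and steps below are placeholders.
    import time
    import urllib.request

    STEPS = [
        "https://intranet.example.com/login",
        "https://intranet.example.com/orders/search?q=test",
        "https://intranet.example.com/orders/12345",
    ]

    def run_synthetic_transaction(steps):
        """Execute the scripted steps and return the total elapsed seconds."""
        start = time.perf_counter()
        for url in steps:
            with urllib.request.urlopen(url, timeout=10) as resp:
                resp.read()  # fetch each page exactly as the script dictates
        return time.perf_counter() - start

    # elapsed = run_synthetic_transaction(STEPS)
    # print(f"Scripted response time: {elapsed:.2f}s")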

The response time measured in the simulated transaction is taken as a proxy for end-user experience. In theory, that might be a good idea. In practice, where end-users display any number of unscriptable behaviors, reality is often quite different from the simulated transaction.

Both of these problems highlight why measuring application performance as experienced by real end-users, using real applications, executing real transactions, is the better practice. That is part of the reason why, according to a major analyst firm, "end-user experience monitoring lies at the center of most Global 2000 enterprise buying decisions."
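For contrast, here is a minimal sketch of the real-measurement approach -- timing every actual request as real users execute it. Real end-user experience monitoring products typically instrument the client or capture traffic on the wire; this server-side WSGI wrapper is only an assumed illustration of measuring real transactions rather than simulated ones.

    # Hypothetical sketch: time every real request instead of a scripted probe.
    import time

    def timing_middleware(app, record):
        """Wrap a WSGI application so each real transaction is timed and recorded."""
        def wrapper(environ, start_response):
            start = time.perf_counter()
            try:
                return app(environ, start_response)
            finally:
                record(environ.get("PATH_INFO", ""), time.perf_counter() - start)
        return wrapper

    measurements = []  # (path, elapsed_seconds) for every real user request

    def record(path, elapsed_seconds):
        measurements.append((path, elapsed_seconds))

    # Usage, assuming an existing WSGI application object `application`:
    # application = timing_middleware(application, record)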

