Even experienced IT professionals can confuse traditional monitoring tools with the newer generation of transaction monitoring tools. In today's market the marketing messages all sound the same, so it is very hard to differentiate between the old and the new. The following provides insight into the difference, which needs to be understood before considering the purchase of any monitoring tool.
Disclaimer: There is nothing wrong with traditional tools; they do their job well, and all of the vendors listed below are competent. As with any technology, newer generations tend to improve on the old ones. Additionally, business transaction monitoring (BTM) has yet to fully mature, and vendors of traditional tools are bound to jump to the newer generation at some point.
Traditional Monitoring:
Traditional tools monitor the performance of each component individually and display all of these metrics on a "single pane of glass".
"End to end" performance monitoring here means that you can see the performance of every component in a single centralized console. For example, you see the resource consumption of your servers, the threads your application is running, the throughput of your network components, and the calls to your database, each displayed in its own section.
When traditional tools monitor transactions, they pick up various segments of transactions throughout the data center without stitching them together into one full transaction flow.
For example: the database monitor picks up all of the SQL statements it sees and displays them on the central dashboard along with their response times, while the real user monitor picks up all of the requests sent to the data center and displays them on the same dashboard along with their response times. Now say an application slowdown occurs and both monitors (as well as the application server monitor, which was not mentioned) are showing erratic response times for various "transactions". The real user measurements only show that the user is experiencing a problem; they cannot show where in the data center the problem lies. The silo-specific tools, in turn, have no transaction context for the CICS program names, SQL statements, and web service calls that are showing erratic performance. The IT professional is left with a glut of confusing, unconnected information on the dashboard.
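To make the missing stitching concrete, here is a hypothetical sketch (Python, with invented record formats) of what two silo monitors emit: neither record carries a shared transaction identifier, so the best a dashboard can do is guess based on time windows, which becomes ambiguous as soon as requests overlap.

```python
# Hypothetical, simplified records from two independent silo monitors.
# Field names, timestamps, and values are illustrative only.

real_user_monitor = [
    # What the real user monitor sees: each request and its total response time.
    {"ts_ms": 1000, "url": "/checkout", "response_ms": 4800},
    {"ts_ms": 1050, "url": "/checkout", "response_ms": 300},
]

database_monitor = [
    # What the database monitor sees: SQL statements and their elapsed times.
    {"ts_ms": 1200, "sql": "SELECT * FROM orders ...", "elapsed_ms": 4100},
    {"ts_ms": 1210, "sql": "UPDATE inventory ...", "elapsed_ms": 15},
]

def guess_related_sql(request, db_records):
    """Naive time-window join: the only 'correlation' possible when records
    share no transaction identifier. With overlapping requests (the two
    /checkout calls above), the result is ambiguous."""
    start = request["ts_ms"]
    end = start + request["response_ms"]
    return [r for r in db_records if start <= r["ts_ms"] <= end]

# Both /checkout requests "match" the slow SELECT, so the dashboard cannot
# tell which user actually waited on it.
for req in real_user_monitor:
    print(req["url"], req["response_ms"], "ms ->", guess_related_sql(req, database_monitor))
```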
How to identify traditional monitoring tools:
Vendors that provide these tools typically sell them as suites. What they do is develop or acquire separate server monitoring, network monitoring, application performance management, and real user measurement tools, and then offer them bundled as an end to end package. These tools tend to be pricey and hard to implement, not to mention limited in visibility due to the lack of correlation between tiers. On the upside, they can provide more thorough metrics within each tier, which is why the new generation seeks to complement the traditional tools rather than replace them.
Some examples of vendors: Opnet, Compuware, Quest, NetScout, CA, IBM
The New Generation of Monitoring Tools:
The new tools connect every single process within the data center back to the click of the user within the application that initiated all of that activity.
"End to End" means that the user request and the related activity within the proxy, web server, app server, database server, MQ and mainframe are all connected as a single transaction instance.
The resource consumption at each component can still be seen – but at the granularity of a single transaction segment.
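As an illustration of the stitching mechanism (a simplified sketch, not any particular vendor's implementation), the core idea most transaction tracing tools share is that a transaction ID is minted at the user-facing edge and propagated with every downstream call, so every tier's timing record carries the same key and can be assembled into one transaction instance:

```python
import time
import uuid

# Minimal sketch of transaction stitching via a propagated transaction ID.
# Tier names and the trace store are illustrative, not any vendor's API.

TRACE_STORE = []  # in a real tool this would be a collector service

def record_segment(txn_id, tier, operation, start, end):
    """Each tier reports its own timing, tagged with the shared transaction ID."""
    TRACE_STORE.append({
        "txn_id": txn_id,
        "tier": tier,
        "operation": operation,
        "elapsed_ms": round((end - start) * 1000, 1),
    })

def handle_user_click(url):
    # The edge (web tier) mints the ID once...
    txn_id = str(uuid.uuid4())
    start = time.time()
    call_app_server(txn_id)            # ...and passes it to every downstream call,
    record_segment(txn_id, "web", url, start, time.time())
    return txn_id

def call_app_server(txn_id):
    start = time.time()
    call_database(txn_id)              # e.g. as an HTTP header or MQ message property.
    record_segment(txn_id, "app", "checkout_service", start, time.time())

def call_database(txn_id):
    start = time.time()
    time.sleep(0.05)                   # stand-in for a slow SQL statement
    record_segment(txn_id, "db", "SELECT * FROM orders ...", start, time.time())

# One user click produces one stitched transaction instance across all tiers.
txn = handle_user_click("/checkout")
print([s for s in TRACE_STORE if s["txn_id"] == txn])
```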
For example: if service levels start to degrade, this new generation of tools not only picks up the performance degradation that the user is experiencing but also immediately knows what is causing that specific degradation further down the line.
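Continuing the sketch above under the same assumptions, localizing a degradation becomes a simple comparison: because every segment of a slow transaction carries the same transaction ID, each tier's elapsed time can be compared with its baseline, and the offending tier falls out immediately.

```python
# Hypothetical per-tier breakdown of one stitched transaction versus a baseline.
# Numbers and tier names are illustrative.

baseline_ms = {"web": 30, "app": 80, "db": 40, "mq": 10, "mainframe": 60}

slow_transaction = [
    {"tier": "web",       "elapsed_ms": 35},
    {"tier": "app",       "elapsed_ms": 90},
    {"tier": "db",        "elapsed_ms": 2400},   # the culprit
    {"tier": "mq",        "elapsed_ms": 12},
    {"tier": "mainframe", "elapsed_ms": 70},
]

def worst_offender(segments, baseline):
    """Return the segment whose elapsed time deviates most from its tier's baseline."""
    return max(segments, key=lambda s: s["elapsed_ms"] - baseline[s["tier"]])

culprit = worst_offender(slow_transaction, baseline_ms)
print(f"Degradation localized to the '{culprit['tier']}' tier "
      f"({culprit['elapsed_ms']} ms vs. baseline {baseline_ms[culprit['tier']]} ms)")
```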
Some examples of vendors: Optier, Correlsense, HP Transaction Vision, SeaNet