“Synthetic monitoring (also known as active monitoring) is website monitoring that is done using a web browser emulation or scripted recordings of web transactions. Behavioral scripts (or paths) are created to simulate an action or path that a customer or end-user would take on a site. Those paths are then continuously monitored at specified intervals for performance, such as: functionality, availability, and response time measures.”
In a recent blog post, we discussed how customers' IT Service Management (ITSM) needs represent a great opportunity for carriers, and how cross-over functionality between OSS and ITSM tools is an important ingredient.
One of the other differences between traditional CSP monitoring strategies and the ITSM needs of enterprise customers is the objects being monitored. Traditional OSS tools monitor devices, ports and sometimes circuits to ensure the health of customers' services. But IT-reliant organisations are more interested in whether their applications are accessible than in whether certain nodes are available. In the ITSM scenario, the nodes are simply a means to an end; it's the applications that need to be monitored and assured, because they tend to be what generates revenue for the customers (directly or indirectly).
End-to-end application performance may have many contributing factors though, such as servers, web services, databases, Active Directory, load balancers, network links, etc. Theoretically, all of these contributors could show as active when monitored in isolation, yet the customer still can't reach their login page because the components / services are failing collectively.
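To make that point concrete, here's a toy sketch (the component names and statuses are purely illustrative assumptions, not any vendor's data model): every layer can report healthy in isolation, yet only a separate end-to-end probe of the user path reveals that the service as a whole is down.

```python
# Hypothetical per-layer statuses, each collected by monitoring that
# component in isolation (pings, port checks, process checks, etc.).
component_status = {
    "web_server": "up",
    "database": "up",
    "load_balancer": "up",
    "network_link": "up",
}

def all_components_up(statuses):
    """Per-layer view: does every monitored component report healthy?"""
    return all(s == "up" for s in statuses.values())

def service_healthy(statuses, end_to_end_ok):
    """Only the combination of per-layer AND end-to-end signals is conclusive."""
    return all_components_up(statuses) and end_to_end_ok

# An end-to-end check is an independent signal: the login page may still
# time out (e.g. a misconfigured load-balancer pool) even though every
# node answers its individual health check.
end_to_end_login_ok = False  # result of a user-path probe, not derived above

print(all_components_up(component_status))                       # True
print(service_healthy(component_status, end_to_end_login_ok))    # False
```

The design point is that the end-to-end result is not derivable from the per-component statuses; it has to be measured separately, which is exactly the gap synthetic transactions fill.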
Luckily, synthetic transactions offer a means of simulating the overall user experience and application performance. When combined with monitoring of the performance / health of each of the layers that contribute to the service, they give you a powerful fault-identification tool.
Synthetic transactions are a slightly different concept for traditional OSS tools, but those tools generally have the fundamentals onto which this type of simulation could be retrofitted. Virtualised networks represent a further progression on this evolutionary path, as they are built on IT-style building blocks (e.g. VMs) but also provide the ability for applications to manipulate the network.
Multi-layer application monitoring, end-to-end application checks and the associated multi-dimensional fault identification capabilities are clearly going to become increasingly important for OSS tools to provide. Does your OSS stack up? (Sorry, really bad pun there!)