When there’s an urgent need to fix a problem, all efforts are directed at an immediate solution. You get the casualty to hospital first, and then start investigating how the accident happened afterwards.
The diagnostic information from a monitoring system is extremely valuable in identifying and isolating the problem so that technicians can fix it, but in the pressure of the moment the fix is the only priority. With an advanced monitoring system, engineers have a wealth of information to help them get the service running again. But what happens once the crisis is over and there is a need to understand the root causes, learn lessons and put preventive measures in place for the future?
Without the same depth and breadth of information that the system provided at the time of the error, gaining full insight into causes that may be deeply buried becomes much harder after the event. But to borrow a well-worn maxim: without understanding history, we are doomed to repeat it. And when technical staff are overstretched anyway, allowing the errors that led to outages to be repeated would be a highly unproductive scenario, with damaging effects on the quality of service delivered to customers.
Ideally, a high-quality monitoring system would provide exactly the support needed to pinpoint and resolve errors quickly when they happen, while also giving fully detailed information at any time after the event to support a more considered and fundamental investigation into what happened.
To do this, it would be necessary to record the data output from the monitoring system and be able to access it at leisure, in full detail. But with the capability of looking at historical data, some new possibilities arise: principally the ability to observe a larger window of the data and to search for patterns and correlations based on that expanded view. If, for example, there is a view of the data over six months, a year or more, it may be possible to detect patterns that are invisible in 48 hours' worth of data.
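As a rough illustration of the recording side (the storage layout, metric names and helper functions here are purely hypothetical, not drawn from any particular product), the underlying idea is simply to time-stamp every measurement as it arrives and keep it queryable over any window, at full detail:

```python
# Minimal sketch of recording time-stamped monitoring samples for later analysis.
# Schema, metric names and function names are illustrative assumptions only.
import sqlite3
import time

db = sqlite3.connect("monitoring_history.db")
db.execute(
    """CREATE TABLE IF NOT EXISTS samples (
           ts      REAL,   -- UNIX timestamp of the measurement
           metric  TEXT,   -- e.g. 'packet_loss', 'loudness_lufs', 'rf_snr_db'
           value   REAL
       )"""
)
db.execute("CREATE INDEX IF NOT EXISTS idx_metric_ts ON samples (metric, ts)")

def record(metric: str, value: float) -> None:
    """Append one measurement as it arrives from the monitoring probe."""
    db.execute("INSERT INTO samples VALUES (?, ?, ?)", (time.time(), metric, value))
    db.commit()

def window(metric: str, start: float, end: float):
    """Return every stored sample for a metric between two timestamps."""
    return db.execute(
        "SELECT ts, value FROM samples WHERE metric = ? AND ts BETWEEN ? AND ? ORDER BY ts",
        (metric, start, end),
    ).fetchall()
```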
Implicit in this idea is that technical staff should be able to zoom in and out of the view, so that if a pattern is visible over a longer period, detail can be expanded for any moment within that period to observe the minutiae of interactions in the data. The ability to easily relate the overview to the microscopic view is a key requirement.
But how is the mass of data accumulated from months of output to be presented in a way that doesn’t overwhelm the ability to comprehend it? Part of the answer comes from the existing visual metaphor used in non-linear editing programs for video and sound. This is the innovation introduced by Bridge Technologies at IBC, as an analysis tool for recorded historical monitoring data.
By being able to play through the data on a timeline, with all the data types displayed as separate tracks, as if they were separate instruments in a music production and editing application, technical staff have the means to replay what happened at any point in the recorded data archive. By scaling the timeline up and down, it is possible to see patterns across up to two years of data, or to fill the timeline view with a brief moment in time, expanded to display every detail.
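How a single track can serve both the long overview and the close-up is easier to see in a sketch. The function below is a generic illustration of time-bucket downsampling (certainly not the vendor's actual implementation): the same archive of samples can be drawn as one point per day across two years, or one point per second around an incident, simply by changing the bucket width.

```python
# Sketch of scaling a timeline track: group recorded samples into fixed-width
# time buckets and summarise each bucket. Names and parameters are assumptions.
from collections import defaultdict

def downsample(samples, bucket_seconds):
    """samples: iterable of (timestamp, value) pairs for one track.
    Returns one (bucket_start, min, max, mean) tuple per bucket."""
    buckets = defaultdict(list)
    for ts, value in samples:
        buckets[int(ts // bucket_seconds) * bucket_seconds].append(value)
    return [
        (start, min(vals), max(vals), sum(vals) / len(vals))
        for start, vals in sorted(buckets.items())
    ]

# Using the hypothetical window() helper from the earlier sketch:
# overview = downsample(window("rf_snr_db", t0, t0 + 2 * 365 * 86400), 86400)  # one bucket per day
# detail   = downsample(window("rf_snr_db", incident - 30, incident + 30), 1)  # one bucket per second
```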
In the increasingly onerous regulatory climate for broadcasters and media service providers, the ability to play through up to two years of recorded data and generate reports from it is valuable for verifying loudness compliance, closed caption conformance, SCTE-35 signalling, RF trending and other key parameters.
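A compliance report over such an archive can be as simple as scanning the recorded measurements against a target. The sketch below assumes a loudness track stored as integrated loudness in LUFS and uses a -23 LUFS target in the spirit of EBU R 128; the names and tolerance are illustrative assumptions, not a description of any specific product's reporting.

```python
# Sketch of a loudness-compliance check over the recorded archive.
def loudness_report(samples, target_lufs=-23.0, tolerance_lu=1.0):
    """samples: (timestamp, integrated_loudness_lufs) pairs from the archive.
    Returns the samples falling outside target +/- tolerance."""
    return [
        (ts, value)
        for ts, value in samples
        if abs(value - target_lufs) > tolerance_lu
    ]

# Using the hypothetical window() helper from the first sketch:
# violations = loudness_report(window("loudness_lufs", month_start, month_end))
```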
Best of all, this is an instantly familiar metaphor and a completely intuitive visual way of navigating through a complex volume of data. The visual metaphor means that operators who may not have a high degree of technical knowledge can go back and explore, understand, verify and document in complete detail what led up to errors that caused impairment or outages to the service.