Year in Review by Simen Frostad, Chairman at Bridge Technologies
2015 has been a year in which Bridge Technologies launched more products than ever before, as the culmination of research and development programs in many areas. At the same time the pace of our R&D has increased too – as it must, because the media industry is undergoing such profound change.
It’s difficult to pin down a single theme for a year in the broadcast industry, in the way a fashion commentator can do (“this year, be seen in lime green…”). At every major trade show, manufacturers vie with each other to carve out the headlines with some line that is meant to define the moment, but in reality life in this industry is a lot more complex than that. However, some underlying themes persist while the fads come and go. And usually the underlying themes are the more important ones, the more problematic issues, less easily encapsulated in a snappy tagline.
So it is that in 2015, chickens have come home to roost for many broadcasters and media operators as they have begun to realise just how complicated it is to grasp the full benefits of IP in a media delivery context. Partly this is because the interaction between broadcast and IP technologies throws up a lot of phenomena that are unfamiliar to specialists in one field or the other; those complexities are inherent. But it’s also the expertise gap that media operators are struggling with, because it’s difficult to hire and retain staff with deep and thorough knowledge of both domains, and the reality of media delivery now is that engineering staff have to become generalists. Pity the old school broadcast engineer thrown into the deep end with OTT, IP distribution and the rest. Likewise the IP specialist suddenly confronted with the peculiarities of broadcast technology. The more broadcasters use IP and the more they launch OTT services, the sharper the focus will become on this expertise gap.
We work hard at Bridge to develop products that don’t assume massive expertise on the part of the user, and many of the developments we launched in 2015 make it far easier for generalists to understand the data and act on it with confidence. Our Remote Data Wall is an enormously configurable tool that allows control room staff and field engineers to bring together exactly the grouping of data types that they need in any situation, and see the data presented in a friendly form that is easy to understand at a glance. This lowered threshold for understanding is vitally important because the technology is so fluid now that a single engineer can’t hope to comprehend it all, or keep up with the rapid pace of evolution.
At least some things remain relatively simple. The old computer science maxim Garbage In, Garbage Out can certainly be applied at the beginning of the distribution chain, although many media providers don’t know how to exclude the garbage effectively. So one of our launches in 2015 focused on improved satellite ingress monitoring. A lot of development work went into our ingress cards and the list of metrics now monitored by the solution is huge. But once more, the key is to provide a tool that can be easily understood by the operator monitoring the incoming feeds from the satellite – without possessing arcane expertise.
The same principle underpins Timeline, a new graphical data analysis technology we launched in September that allows users to play through recorded monitoring data in an NLE-style timeline display to observe correlations and patterns of errors occurring over any time period.
Users can scrub through the data at any point in the recorded archive, opening and collapsing data tracks, and zooming in to observe fine detail on all the visible tracks. The Timeline shows content thumbnails, alarm markers and all the metrics familiar from the MediaWindow displays, making visual navigation through the data simple and quick. Engineers can search through the chain of events that led up to service failures, and generate reports for remedial action or fulfilment of regulatory SLA obligations.
During service operations, when an error or failure occurs, engineers prioritise fixing the fault, and often in the heat of the moment there is no time for them to understand the root causes. The Timeline functionality enables operators without a high degree of technical knowledge to go back and explore, understand, verify and document in complete detail what happened at any given time, or look for patterns over longer periods, and take action to forestall any recurrence of a similar problem.
With so many distribution networks becoming regional or global, the ability to implement infrastructure changes remotely is increasingly important, and for customers with large server-based infrastructures we introduced a range of virtualised monitoring probes to complement our hardware line. These virtualised probes have exactly the same APIs, functionality and behaviour, but instead of requiring a slot in a rack, they can be deployed anywhere on the planet in a couple of minutes. The demand for this type of virtualised device is another example of the way broadcasting and media distribution is changing radically, and will continue to change through 2016 and beyond.
The glittering promise of file-based workflows and packet-based transport rather belies the critical importance of timing, and this, if anything, is likely to become a theme in the future. That's ironic, because timing has always been critical to broadcast; without it there would be no coherent distribution, yet OTT distribution of on-demand content seems simple because it's packet based. However, a typical content distribution chain might send from a central store to a first cache in England, then a second cache in Germany, a third in Denmark, a fourth in Norway, and then a fifth at the individual ISP in Oslo. If your segments are, say, ten seconds long, and three packets are missed, they can always be retransmitted within those ten seconds, although as soon as you have to retransmit you eat bandwidth. But in real-time streams with two-second segments, you have only two seconds of content that has to be cached in five places before it reaches the player, which will itself cache probably three segments in order to have some playback buffer. If you ask for the next segment just a little off-beat you will miss it, and then a segment will be dropped. The timing issues become very complex if you are dealing with global distribution from one place – which is often the case. But don't expect 'It's All About Timing' to be the next big tradeshow theme, even though it really should be. It's too complex an issue to become a fad.
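The arithmetic behind that contrast can be sketched with a rough back-of-the-envelope model. The sketch below is illustrative only – the hop latencies, round-trip times and retransmission counts are hypothetical assumptions, not measurements from any real distribution chain – but it shows why a ten-second segment absorbs losses that a two-second live segment cannot.

```python
# Illustrative timing-budget sketch for chunked OTT delivery through a chain
# of caches. All latency figures below are hypothetical assumptions chosen
# for illustration; real values depend entirely on the network in question.

def delivery_slack(segment_s, hops, hop_latency_s, retransmits, rtt_s):
    """Return the slack (in seconds) left after a segment has propagated
    through every cache hop and any lost packets have been re-fetched.
    Negative slack means the player's request arrives 'off-beat' and
    the segment is at risk of being dropped."""
    transit = hops * hop_latency_s      # propagation through the cache chain
    recovery = retransmits * rtt_s      # extra time spent retransmitting
    return segment_s - (transit + recovery)

# Ten-second on-demand segments: three retransmits still leave ample slack.
vod = delivery_slack(segment_s=10.0, hops=5, hop_latency_s=0.3,
                     retransmits=3, rtt_s=0.5)

# Two-second live segments: the same five hops and three retransmits
# consume more time than the segment duration allows.
live = delivery_slack(segment_s=2.0, hops=5, hop_latency_s=0.3,
                      retransmits=3, rtt_s=0.5)

print(f"VoD slack:  {vod:+.1f}s")    # positive: recovery fits in the window
print(f"Live slack: {live:+.1f}s")   # negative: the segment will be missed
```

Under these assumed numbers the ten-second segment retains several seconds of headroom, while the two-second segment goes negative – which is the off-beat request and dropped segment described above.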