by Simen Frostad, Chairman of Bridge Technologies
Being a hardware manufacturer is a great thing. It allows you to design the totality of the product and the user’s experience of it, to create a plug-and-play appliance that is uniquely fitted to its task. And there are plenty of tasks that are best done with the help of a piece of dedicated hardware, for many reasons. An ‘appliance’ can save the user the trouble of installing and setting up software to run on a general-purpose platform; it can provide all the required interfaces, ready for easy integration; it can be far more robust physically, and able to tolerate extreme conditions; it can be almost maintenance-free; and it can be much more efficient in its energy consumption.
With all these advantages, it’s not surprising that hardware manufacturers can sometimes appear over-zealous in their advocacy of dedicated equipment. In some circumstances, another approach might be more appropriate. Sometimes, it’s necessary to think outside the box.
For many years, the trend in our industry has been towards using a general-purpose computing platform as the engine for much of the workload in producing and distributing media content. From the early development of broadcast graphics and editing solutions using personal computers, to today’s massive data-centre hosted regional and global media operations, the computer has carved out more and more territory for itself as a tool for broadcasters.
That’s not to say that everything can or should be done by servers. For the reasons mentioned above, dedicated hardware can get the job done more effectively in many parts of the production and delivery chain. And parts of that chain are still very much in the broadcast domain, staffed by engineers who are in their comfort zone cabling racks of discrete, dedicated pieces of broadcast technology.
But with the widespread and growing use of IP in the media industry, other parts of the chain are no longer a natural environment for broadcast kit – nor for dedicated hardware appliances. This poses a potential monitoring problem for media organisations: ‘horses for courses’ may be a good idea in principle, but so is ‘the end-to-end solution’, and at first the two might seem incompatible. It’s not ideal, in other words, to have dedicated monitoring products in a broadcast rack and specialised IP monitoring products elsewhere in the chain – because they don’t talk to each other, and the end-to-end collection and correlation of data therefore becomes impossible.
The logical way to resolve this problem is to have fundamentally the same technology available in both hardware and software form, so that a coherent monitoring solution can encompass the broadcast production centre, the (possibly outsourced) data centre, the headends, the transmitter sites, the viewer’s home network and the individual mobile devices used to consume OTT streams. Obviously some of this monitoring is best done with dedicated hardware probes – nobody really wants to maintain servers at a sub-zero transmitter site on top of a mountain. Conversely, a hardware probe is pretty much an alien presence in a room full of 400 servers.
The essence of this integrated approach is that all the parts – hardware probes or virtualised software probes – communicate with each other as equals and, where appropriate, offer exactly the same functionality and performance. The overall monitoring environment can then provide a completely coherent picture from end to end, with no blank spots or ‘language barriers’ caused by incompatible monitoring equipment. Engineering staff can track status and data from the satellite ingest, the IP transport streams at the headend, the RF performance at the transmitter, and the OTT service quality – all within a consistent graphical display.
By virtualising the functionality of the VB330 core network probe, Bridge Technologies has made it possible for media organisations to implement this kind of monitoring environment, and to make large-scale installations of virtualised probes almost instantly when scaling up server-based capacity to launch new services or extend existing ones.
One of the key attractions of data centre computing is the ability to scale up and reconfigure capacity at very short notice, so for media organisations competing for territory with new audiences and new markets, virtualised functionality is a vital aid. The ability to roll out services rapidly in response to a new market opportunity is increasingly important. But in this hotly contested field, where much of the content may be similar from one service to another, the point of differentiation could well be the service quality available to the subscriber. The virtualisation of functionality in production and in monitoring is an important development for the continued competitiveness of media businesses, now and into the future.