Remote production: back where we started
“I think there is a world market for maybe five computers” is a remark famously attributed to Thomas Watson, president of IBM. In fairness, that was back in 1943, when computing was very much in its infancy – and computers were the size of a house.
For a long time, however, the only kind of computing was mainframe computing. Input to these behemoths was mostly via a deck of laboriously punched cards and output was, at best, reams of fanfold printout. But, slowly, things changed. Gradually, mainframe power spread its tentacles. We had printer-based terminals, and then printer-based terminals with keyboards. Then we had display-based terminals. And after that, we had intelligent terminals – display and printing devices capable of a very limited amount of local processing.
And thus, the era of distributed computing began. No longer was all the computing horsepower located in one ivory tower: computing was becoming available where it was needed. Increasingly capable network technology, and the infrastructure that it enabled, were at its heart.
The late 1970s was the age of the so-called minicomputer, heralded by DEC’s launch of the VAX-11/780. Data General was another major player in the computer market (and if you’re interested in finding out more about what it took to bring a new computer to market in the early 1980s, I can thoroughly recommend the Pulitzer Prize-winning The Soul of a New Machine by Tracy Kidder, which chronicles how the Data General Eclipse MV/8000 came to be).
The era of distributed computing
With the advent of the minicomputer, computers could truly be wherever you wanted them to be. We had entered the era of distributed computing. Back then, however, who could have imagined that huge amounts of computing power – and access to almost all the knowledge in the world – would one day sit in the palm of your hand? The mobile phone is perhaps the ultimate expression of what it means to distribute computing. The typical smartphone features far more processing power than Thomas Watson could ever have dreamed of – and there are now some five billion of them in use around the world. So: about a billion times more than five… And there are only around 7.5 billion of us on the planet.
The ability to put as much computing capacity as necessary exactly where it’s needed – or wanted, at any rate – has been perhaps the defining characteristic of the development of technology over the past 80 years. That’s certainly the case for networking technology, and what IP is allowing us to achieve. If it makes sense to centralise it: we can do that. If it makes sense to distribute it: we can do that too.
Today, the broadcast industry is a key beneficiary of those developments. No longer are program makers and content producers chained to a central location. And: that’s just as well. In the face of the growing threat from SVOD, and in response to declining advertising revenues, broadcasters are increasingly looking to live events to bolster viewing numbers and income streams.
A big deal
Coverage of live events is, of course, nothing new. The world’s first outside broadcast took place as far back as 1937, when the BBC sallied forth from Alexandra Palace to cover the coronation of King George VI. Such were the inherent limitations of broadcast technology back then that it’s hard to overestimate how big a deal that was.
And, for many decades, it remained a big deal. Enabling technologies made outside broadcasting easier – but the need to send out one or more fully equipped OB vans didn’t make it much more affordable. That meant that broadcasters were constrained in terms of what they could afford to do – and they wanted to do much more. Going somewhat counter to the concept of distributed computing, they ideally wanted to leverage the central resource, deploying as little compute power remotely as possible.
From that point of view, IP networking was truly a game-changer. Almost overnight, remote production became possible – leaving all the expensive resources back home, and deploying minimal numbers of people and equipment to the location. It’s been claimed that this approach can cut costs by as much as 40% compared with the traditional way of doing outside broadcasting. No wonder broadcasters are rubbing their hands.
Inevitably, doing things that way creates its own challenges – not least in the key area of ensuring quality. That’s where we at Bridge Technologies come in. A VB440 IP Probe – or multiple VB440s – can be deployed anywhere in the network, allowing the quality of what’s being transmitted to be monitored, verified and, if necessary, corrected. But: here’s the key thing. What the VB440 is discovering on the network can be viewed and analysed from anywhere in the network – back at base, for example, which is where the experts reside.
In effect, IP technology has enabled broadcasters to revert to the old, mainframe-based computing strategy of centralising their technology capability – and deploying as little of it as possible. When it comes to remote production, that’s clearly the optimum strategy. There is, as they say, nothing new under the sun. Or, as the French say: “Plus ça change, plus c’est la même chose.”