Data-driven decision making – what does it really mean?
What is a decision?
It seems like a simple question, but it’s one worth considering. What separates a ‘decision’ from a ‘choice’? The general consensus is: judgement and deliberation. Whilst a choice can be entirely arbitrary in nature, decision making is ‘process-oriented’, linking behaviour with performance and consequence. In essence, decisions act on inputs to drive outcomes. So it follows that all decisions are ‘data driven’ – in the sense that ‘data’, most widely defined, constitutes information: the inputs upon which the decision making process is exercised.
The key ingredients of a good decision
Thus, the quality of a decision is determined by three central factors: the extent of the information available, i.e. the number of variables for which information exists; the quality of that data, with quantitative generally preferable to qualitative; and the extent to which the decision maker is able to make use of that data – both by virtue of how comprehensible it is, and how effective and knowledgeable the decision maker is themselves.
Once upon a time, these three elements were entirely lacking in a broadcast environment. Your network either worked or it didn’t, and often the first you’d know about it was Mr. Jones on the phone complaining he couldn’t watch the snooker. Troubleshooting was long, tedious and laborious. In essence, the data needed was lacking in quantity, quality, responsiveness and usability.
Perhaps this was less of a problem when infrastructure costs meant that only a few big players dominated the market, and Mr. Jones had nowhere else to watch his snooker. But now, in an environment characterised by endless choice and next to no switching costs for audiences? A very different story.
Decision after decision
Troubleshooting decisions constitute just one type of decision you might make in a broadcast environment – and, moreover, one you’d ideally like to avoid; if everything has gone well in your organisational decision making chain, you shouldn’t be getting to the point where troubleshooting decisions are even required.
In reality, decisions in the broadcast industry exist on a vast spectrum, from the high-level boardroom decisions made regarding organisational strategy, to the millisecond decisions made regarding data paths through complex networks. At every point – regardless of scale – these decisions need effective metrics to support them.
With this sheer scope and scale of decision making going on at both the micro and macro level, it can be easy to become ‘decision blind’ – entrenched in a process that uncritically relies on the same outdated data points and unconscious biases.
But critically revisiting these questions is key. How much do you actually know about your network, and how much are you assuming? About your audience and their experience? What data points are you currently getting to inform your decisions, and how relevant, timely and usable are they? Are you sure you’re making decisions, rather than arbitrary choices?
Fundamental questions, perhaps – but it’s surprising how many people aren’t asking them, instead labouring on with outdated information and unchallenged assumptions. And that’s the main thrust of what we’re driving at here: when was the last time you considered something as fundamental as ‘what constitutes a decision’?
Probing the depths
It will come as no surprise to hear that technology is offering at least partial answers to many of the challenges we’ve outlined above. Broadcast network monitoring has never been more sophisticated or comprehensive than it is now. When it comes to the quantity and quality of data points, it’s now possible to use probes to gain real-time, continuous insight across the full broadcast chain – from ingest to contribution to playout – giving metrics on every component of technical network performance. From the camera painter in a remote OB van, to a network technician working on a headend, to a boardroom executive ruling on next year’s budget, probes have the potential to provide the data needed to make effective decisions. They can generate the kind of data that facilitates millisecond operations on audio and video transfer across networks, accommodating the widest range of operational protocols, standards and configurations – compressed and uncompressed – whilst also generating historical reports that give a strategic, bird’s-eye overview of operational performance for management and decision makers.
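To make the idea concrete, here is a minimal sketch of how continuous probe samples might be rolled up into the kind of historical, per-stage report a decision maker could act on. The field names and stages are illustrative assumptions, not any real probe’s API.

```python
from dataclasses import dataclass

# Hypothetical probe sample - field names are illustrative assumptions.
@dataclass
class ProbeSample:
    stage: str              # e.g. "ingest", "contribution", "playout"
    packet_loss_pct: float  # packet loss over the sample window, in percent
    jitter_ms: float        # inter-arrival jitter, in milliseconds

def summarise(samples):
    """Aggregate raw samples into per-stage averages - the bird's-eye
    historical view, as opposed to the raw millisecond-level data."""
    by_stage = {}
    for s in samples:
        by_stage.setdefault(s.stage, []).append(s)
    return {
        stage: {
            "avg_loss_pct": sum(x.packet_loss_pct for x in group) / len(group),
            "avg_jitter_ms": sum(x.jitter_ms for x in group) / len(group),
        }
        for stage, group in by_stage.items()
    }

samples = [
    ProbeSample("ingest", 0.1, 2.0),
    ProbeSample("ingest", 0.3, 4.0),
    ProbeSample("playout", 0.0, 1.0),
]
report = summarise(samples)
```

The same raw samples serve both audiences described above: the technician reads individual samples, the executive reads the aggregate.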
But does this increase in the quantity and quality of data points result in better decision making?
Not necessarily. In the wrong hands, an overload of information can be as limiting as it is enabling. It’s only if this data is packaged and presented in a meaningful way that it can form the basis of effective decision making – even for those not necessarily versed in the intricacies and technicalities of network performance and IP packet transfer. It’s here that an effective monitoring system will really set itself apart: putting eyes at every stage of the broadcast chain to produce not just data, but information which is meaningful, usable and presented in a way that is intuitive to understand.
Joined up thinking
All of these concepts become even more complex in complex environments (hardly a shock, right?). It’s all very well gaining insight along your beautiful, shiny, newly installed IP network. But who has the budget for such a revolutionary upgrade? The truth is, the majority of broadcasters are working with piecemeal, hybrid and legacy systems, where the data available to harvest is wildly different in type, meaning and significance.
It’s this which has informed Bridge’s evolving approach to the world of monitoring. We’ve backed the IP revolution for years now, and have always focused on making data intuitive and usable (or ‘making the complex simple’, as we like to say). But increasingly, our focus has been on ‘Integrated Services Monitoring’ (ISM): an approach that harmonises the data gathering process – across both IP and ‘traditional’ networks – to give both broad and deep understanding across all components of the media chain, from production to signal acquisition, contribution streams, OTT/streaming media, and traditional broadcast distribution across DTT or satellite. Furthermore, by evolving our three types of probe – embedded, appliance and software-based – to all run from the same v6 code, we’ve dramatically simplified the data analytics process across the chain, making it more reliable, efficient and usable than ever before.
Because ultimately, in decision making, whilst you want as much data as possible, you want as little variation as possible when it comes to source, type and nature. It needs to be harmonised and consistent in its base if it is to form part of an effective decision making process. ISM achieves this.
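The harmonisation idea can be sketched as a normalisation step: records from different probe generations arrive in different shapes, and a small adapter maps them all onto one common schema before analysis. The record formats and field names below are hypothetical, invented purely for illustration.

```python
# Illustrative sketch of harmonising data from mixed sources.
# Both input formats and the unified schema are hypothetical,
# not Bridge's actual data model.

def normalise(record):
    """Map records from different probe generations onto one common shape."""
    if "pkt_loss" in record:                # hypothetical IP-probe format
        return {"source": record["probe_id"],
                "metric": "packet_loss",
                "value": record["pkt_loss"]}
    if "errored_seconds" in record:         # hypothetical legacy/RF format
        return {"source": record["device"],
                "metric": "errored_seconds",
                "value": record["errored_seconds"]}
    raise ValueError("unrecognised record format")

mixed = [
    {"probe_id": "ip-probe-1", "pkt_loss": 0.02},   # IP network
    {"device": "sat-rx-7", "errored_seconds": 3},   # satellite receiver
]
unified = [normalise(r) for r in mixed]
```

Once everything shares one shape, the same reporting and alerting logic can run over IP and traditional sources alike – which is the point of a consistent base.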
The future of AI needs solid foundations
We won’t get into a debate here about what constitutes ML, what constitutes AI, and what’s simply good old-fashioned if/then coding. Regardless, as networks become increasingly sophisticated, they also become increasingly automated – take, for instance, the automatic configuration of IP addresses, or the triggering of events when threshold alarms are reached. But as we stressed at the beginning, if you haven’t critically considered how you’re making decisions and what data you’re using to make them, you’re already backing a losing horse. AI and automation can only be as strong as the processes they’re built on. And those processes are only as strong as the data they make use of.
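Threshold-alarm automation of the kind mentioned above can be sketched in a few lines: compare each monitored reading against a configured limit and emit an event when it’s crossed. The metric names and limits here are illustrative assumptions.

```python
# Minimal sketch of threshold-alarm automation. Metric names and
# threshold values are illustrative, not real product configuration.

THRESHOLDS = {"packet_loss_pct": 1.0, "jitter_ms": 30.0}

def check_alarms(readings, thresholds=THRESHOLDS):
    """Return an alarm event for each reading that exceeds its limit."""
    return [
        {"metric": name, "value": value, "limit": thresholds[name]}
        for name, value in readings.items()
        if name in thresholds and value > thresholds[name]
    ]

# packet loss exceeds its limit; jitter does not
alarms = check_alarms({"packet_loss_pct": 2.5, "jitter_ms": 12.0})
```

Note that the quality of this automation rests entirely on the thresholds chosen and the data fed in – exactly the point about processes being only as strong as their data.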
So whilst data-driven, AI decision making may feel like a progressive, new-fangled solution in the world of broadcast, the truth is that it’s built on the concepts of good old-fashioned decision making – a potentially useful add-on that depends on the building blocks of understanding already being in place. Effective monitoring is the cornerstone of those building blocks.