Streaming giants like Netflix, Amazon Prime and Disney+ are pouring billions of dollars into new content and racking up millions of new subscribers during the pandemic. How can traditional broadcast and media companies keep up?

TV isn’t what it used to be. Source: National Archives and Records Administration

Is the curtain drawing to a close on traditional television? Recent headlines certainly suggest so. It’s no secret that cable television was in trouble before 2020 rolled around, beset by fierce competition from streaming platforms and the millions of subscriber defections that followed. The current pandemic has only made things worse.

About three million people cut the cord in 2018, but given the latest numbers, industry executives are no doubt looking back on that year wistfully. Variety noted recently that the industry lost a record number of subscribers in the first quarter of 2020. There are now as many non-paying US households (46 million) as there were cable subscribers in 1988.

Streaming platforms have taken notice and are spending billions of dollars to court disaffected media consumers. Amazon Prime Video announced it was setting aside $7 billion for new content and Netflix countered with $16 billion. Hulu, Disney+ and Apple TV have earmarked more modest amounts, but it all adds up and it’s going to cost cable television dearly.

Quarantine woes
The current pandemic has forced a number of the old guard to confront devastating faultlines: Big Meat, beset with production problems because slaughterhouses are every germ’s dream petri dish, is rapidly losing market share to plant-based companies; Big Oil has run out of gas; and Big TV is surveying production sets that have gone eerily silent in the span of a few short weeks, realizing that ‘the show must go on’ is an inappropriate rallying cry during a catastrophic global pandemic.

Meanwhile, people in quarantine are quickly running out of ways to spend time productively and wondering what’s on television. The answer, according to Time Magazine, is reality TV that “traffics in public humiliation or manages to offend through sheer dullness alone”. And there’s only so much of that people are willing to watch, even when there’s little else to do. All told, the ones suffering the most are smaller broadcast and media companies without the resources to pump billions of dollars into new content or enter the increasingly competitive OTT space.

Time’s TV critic was not impressed. Source: Fox Flash – Celebrity Watch Party


But all is not lost yet. As it turns out, these companies own something streaming platforms would pay top dollar for: a huge library of archival footage. So why aren’t they capitalizing on it? The main reason: archives are notoriously hard to maintain and expensive to update.

The trouble with archives
Every film and television buff should read about the history of film archival, if only to get a sense of how much of our shared cultural history we have lost — and stand to lose — because archiving footage is such a complicated and difficult process.

Film studios began to take it seriously in the 1950s. Before that, all footage was shot on nitrate film stock, which was notoriously flammable. So much of it went up in flames — either accidentally or because studios didn’t want to pay for storage — that experts estimate we have lost 50 percent of all films shot before 1950 and a whopping 90 percent of those shot in the silent era.

In the 1950s, studios switched to cellulose acetate and polyester film, which were far more stable and made it practical to start storing archives. Additionally, the advent of television around that time created a whole new market for films. Studios quickly realized that there was money to be made by saving old films and re-airing them on TV. However, storing thousands of reels of delicate filmstock in temperature-controlled environments wasn’t easy or cheap.

Sadly, things haven’t gotten much better since. The current gold standard in film and TV preservation, LTO (Linear Tape-Open) magnetic tape, has made it easy to store large quantities of footage, but new generations of the format are released every few years and old ones quickly become obsolete. Migrating footage from one generation to the next is time-consuming and expensive. All these issues illustrate how badly the industry needs a long-term digital storage solution; so far, no one has cracked the code.

And that’s not all — even when films are successfully archived, there are significant hurdles to overcome before they can be put back on air. Better broadcasting standards and higher video quality expectations mean that most archival footage is not fit for today’s screens.

The upscale(d) viewing experience

Putting old footage back on air without updating it is a bad idea. Verizon’s 2016 viewer expectation survey found that doing so can hurt viewer engagement. On average, viewing time fell 77 percent when video quality dropped. That’s a risk that broadcast and media companies struggling to keep up simply cannot take.

But as things stand today, they don’t have much of a choice. Enhancing older footage typically requires hours of expensive manual labor and a lot of processing power. The task generally falls to post-production teams, who use expensive equipment and spend hours tweaking software to get things just right; video upscaling and enhancement is a process of trial and error that requires an expert eye. So far, most broadcasters simply haven’t considered it worth the effort.

That may soon change with the advent of AI- and ML-powered video enhancement.

AI and ML-powered video restoration

Using artificial intelligence and machine learning to enhance and upscale older footage is a relatively recent advance. In the 2000s, researchers began applying super-resolution, the process of reconstructing a high-resolution image from one or more low-resolution observations, to improve the quality of footage captured by long-distance observation systems. Research in the field has taken off since, with more than 600 papers published.
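To make the problem concrete, here is a toy sketch of the classical baseline that learned super-resolution methods improve on. Simple interpolation (nearest-neighbour below) only re-samples the pixels it already has; neural super-resolution instead predicts plausible missing detail from training data. This example is purely illustrative and uses no real enhancement library:

```python
def upscale_nearest(image, factor):
    """Upscale a 2D grayscale image (list of rows) by an integer factor.

    Each source pixel simply becomes a factor-by-factor block: no new
    detail is created. Learned super-resolution models replace this step
    with a prediction of the missing high-frequency content.
    """
    out = []
    for row in image:
        wide = [px for px in row for _ in range(factor)]  # widen the row
        out.extend(list(wide) for _ in range(factor))     # repeat it down
    return out

low_res = [[10, 20],
           [30, 40]]
high_res = upscale_nearest(low_res, 2)  # 2x2 input -> 4x4 output
```

The gap between this block-replication output and a sharp, natural-looking frame is exactly what super-resolution research tries to close.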

2016 was a watershed year for super-resolution: Google began developing its RAISR (Rapid and Accurate Image Super-Resolution) algorithm on still images to save transmission bandwidth on Google+ for mobile devices. RAISR’s trade-off between speed, image quality and versatility made it an interesting candidate for video super-resolution. Meanwhile, Twitter bought Magic Pony Technology, whose founders were among the first to apply neural networks and super-resolution to enhancing and upscaling images and video.

Today, AI- and ML-powered video enhancement has a wide range of applications, including dejittering, denoising, deinterlacing and upscaling. The problem is that very few companies offer the service commercially. Only two spring to mind: Recuro in Sweden and Pixop in Denmark (full disclosure: I am CEO and co-founder of Pixop).
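One of the operations listed above, deinterlacing, is easy to illustrate. Interlaced video stores alternating scan lines in two fields; a naive deinterlacer fills each missing line by averaging its neighbours. ML-based deinterlacers replace that average with a learned prediction, but the data flow is the same. This is a minimal sketch, not any vendor's actual algorithm:

```python
def deinterlace_field(field, height):
    """Expand one field (every other scan line) into a full-height frame.

    field  -- list of rows: the even-numbered lines of the frame
    height -- total line count of the output frame (2 * len(field))

    Missing lines are filled by averaging the lines above and below
    ("line-average" deinterlacing), the simplest classical approach.
    """
    frame = []
    for i, row in enumerate(field):
        frame.append(list(row))  # keep the real scan line
        nxt = field[i + 1] if i + 1 < len(field) else row
        frame.append([(a + b) / 2 for a, b in zip(row, nxt)])  # interpolate
    return frame[:height]

# Two real lines become a four-line frame with interpolated lines between.
frame = deinterlace_field([[0, 0], [100, 100]], 4)
```

Denoising and dejittering follow the same pattern: a classical filter with known failure modes, swapped for a learned model that handles the hard cases better.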

Comparing Different Video Upscaling Methods on Stock Footage. Source: Pixop

Both these companies have automated the process and made it cloud-based to minimize the complexity involved in enhancing and upscaling sub-standard footage on native software. Pixop just launched a REST API, which is designed to make it seamless for broadcast and media companies to make full use of its services on their systems.
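For readers unfamiliar with the pattern, a REST workflow for a service like this typically means: authenticate, submit a clip with the desired enhancement steps, then poll for the result. The sketch below only illustrates that shape; the base URL, endpoint, field names and auth scheme are all invented for illustration and are not Pixop's actual API, whose documentation should be consulted for the real contract:

```python
import json
import urllib.request

# Hypothetical base URL -- a stand-in, not a real service endpoint.
API_BASE = "https://api.example-enhancer.com/v1"

def build_enhance_request(api_key, video_url, operations):
    """Assemble (but do not send) a hypothetical enhancement-job request.

    All field names ("source", "operations") and the bearer-token auth
    scheme are assumptions for illustration only.
    """
    payload = {"source": video_url, "operations": operations}
    return urllib.request.Request(
        f"{API_BASE}/jobs",
        data=json.dumps(payload).encode("utf-8"),
        headers={
            "Authorization": f"Bearer {api_key}",
            "Content-Type": "application/json",
        },
        method="POST",
    )

req = build_enhance_request(
    "demo-key",
    "https://example.com/archive/clip.mp4",
    ["denoise", "deinterlace", "upscale-4k"],
)
```

The appeal of this model for broadcasters is that the heavy lifting happens in the cloud: their systems only need to submit jobs and fetch results, with no local GPU farm or specialist software.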

And broadcasters are finally coming around to the idea: Pixop recently worked with TV 2 Denmark to enhance clips from the Danish Monarch’s old birthday celebrations. It’s also working with Screenbound Pictures to enhance and upscale CI5: The Professionals. Both of these projects provided valuable monetization opportunities and allowed the companies involved to solve some of the production challenges the current pandemic has caused.

As more and more broadcast and media companies without the resources to start their own streaming services see the value in licensing their content, my hope is that they’ll soon see the value in enhancing and upscaling it too. Today’s viewers expect nothing less than pixel-perfection when it comes to video quality. Given that AI- and ML-powered video enhancement has made the process faster and cheaper than ever before, there’s no excuse not to give them exactly what they want.

Morten Kolle is CEO and co-founder of Pixop, a cloud-based AI/ML-powered video enhancement solution that is designed to help production companies, TV stations and independent creators update and monetize their digital archives. If you’re interested in learning more, write to help@pixop.com. This article was adapted from one originally published on LinkedIn.

By Morten Kolle