Published on 28 Nov 2018

By Shakunt Malhotra, VP Operations at Globecast in Asia

Industry talk about artificial intelligence – more often referred to as AI – has been growing in volume for some time now. But what's the reality and what are the potential industry benefits?

Artificial intelligence is about systems performing complex tasks intelligently and independently. So what AI capabilities do I keep in mind when considering its applications?

There’s symbolic learning: speech recognition, image recognition, object recognition and NLP (Natural Language Processing).

Artificial general intelligence builds on machine learning – supervised, unsupervised and reinforcement learning – applied through deep learning, neural networks and recurrent neural networks.

As AI use grows across multiple points in modern society, it’s worth taking a step back and listing the main applications for the broadcast industry (in the widest sense). Here are 10 potential AI applications.

1. QC

Quality checking is a key task before content enters the transmission or distribution stage. Video must go through a series of essential checks: technical checks for compatibility with various devices and visual checks for anomalies that could lower the quality of the viewing experience.

Traditionally, automated systems find faults against a file’s technical standards, while humans check for flaws in the viewing experience, working through a five-point check. However, with the increase in the number of devices on which people view video, checking only five points is no longer a fool-proof way of ensuring 100 per cent QC. Human viewing of content, and checking compatibility across devices, also takes a huge amount of time and is a tiring, repetitive job.

AI’s ability to handle symbolic learning and machine learning comes in handy in performing QC tasks. The system can be loaded with the technical video standards required by various devices, while image recognition can help find flaws in the actual viewing experience.
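As a rough illustration only, not a production workflow, the sketch below pairs a container-level check using FFmpeg’s ffprobe with a simple frame-level scan for black or frozen frames in OpenCV. The expected codec and resolution are placeholder values standing in for a real device profile.

```python
"""A minimal QC sketch: technical checks via ffprobe, visual checks via OpenCV.
The expected codec/height values are illustrative assumptions."""
import json
import subprocess

import cv2


def check_technical_specs(path, expected_codec="h264", expected_height=1080):
    """Compare a file's stream metadata against a simple device profile."""
    probe = subprocess.run(
        ["ffprobe", "-v", "quiet", "-print_format", "json", "-show_streams", path],
        capture_output=True, text=True, check=True,
    )
    video = next(s for s in json.loads(probe.stdout)["streams"]
                 if s["codec_type"] == "video")
    issues = []
    if video["codec_name"] != expected_codec:
        issues.append(f"codec {video['codec_name']} != {expected_codec}")
    if int(video["height"]) != expected_height:
        issues.append(f"height {video['height']} != {expected_height}")
    return issues


def find_black_or_frozen_frames(path, black_thresh=10, freeze_thresh=0.5):
    """Flag visually suspect frames: near-black, or identical to the previous one."""
    cap = cv2.VideoCapture(path)
    flagged, prev, idx = [], None, 0
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
        if gray.mean() < black_thresh:
            flagged.append((idx, "black frame"))
        elif prev is not None and cv2.absdiff(gray, prev).mean() < freeze_thresh:
            flagged.append((idx, "frozen frame"))
        prev, idx = gray, idx + 1
    cap.release()
    return flagged
```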

2. Search

Searching content across large libraries can be a laborious task. Content in an archive has to be marked properly to make it searchable, a task traditionally achieved by entering metadata manually. But even then, if the search criteria change, it’s easy to miss the most relevant content.

AI takes metadata marking to the next level. Based on image recognition and symbolic learning, a large inventory of metadata can be created. AI can help in the classification of content; whether a moment is happy or sad in a video, for example. Another example is being able to identify brand logos in sports events, which will help in successful promotion of those events. Processing with AI systems increases both the speed of searches and the accuracy of the results.
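A minimal sketch of the idea, assuming a hypothetical `tag_frame` helper that wraps whatever image-recognition model is in use: AI-generated tags are collected into an inverted index so that searches match on what actually appears in the footage.

```python
"""Sketch: build a searchable index from AI-generated tags.
`tag_frame` is a hypothetical placeholder for an image-recognition model."""
from collections import defaultdict


def tag_frame(frame_image):
    # Hypothetical: run an image-recognition model and return labels
    # such as ["goal celebration", "sponsor logo", "crowd"].
    raise NotImplementedError


def build_index(clips):
    """Map each AI-generated tag to the clips (and timecodes) it appears in."""
    index = defaultdict(list)
    for clip_id, frames in clips.items():
        for timecode, frame in frames:
            for label in tag_frame(frame):
                index[label].append((clip_id, timecode))
    return index


def search(index, query_terms):
    """Return clips whose tags match every query term."""
    hits = [set(clip for clip, _ in index.get(term, [])) for term in query_terms]
    return set.intersection(*hits) if hits else set()
```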

3. Metadata

Metadata is known to enhance the value of content: in terms of monetizing and re-monetizing, metadata is crucial.

Symbolic AI, through speech and image recognition, can create metadata associated with any content. But AI takes metadata to the next level through machine learning, providing classification or groupings of content. This can be further improved by identifying trends using neural networks; for example, associating content with its popularity among different age groups.
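To illustrate the grouping side, here is a small sketch using scikit-learn to cluster assets by their AI-generated tags; the tag strings and cluster count are invented for the example.

```python
"""Sketch: machine learning groups content by its metadata tags.
The asset tags below are invented examples."""
from sklearn.cluster import KMeans
from sklearn.feature_extraction.text import TfidfVectorizer

# Each asset is described by the metadata an AI tagger produced for it.
asset_tags = {
    "clip_001": "wedding crowd celebrity smiling daytime",
    "clip_002": "football goal stadium night sponsor logo",
    "clip_003": "interview studio two people talking",
    "clip_004": "tennis serve stadium daytime sponsor logo",
}

vectorizer = TfidfVectorizer()
features = vectorizer.fit_transform(asset_tags.values())

# Group assets into a small number of clusters; an editor can then review
# and name the clusters ("sports", "studio", ...).
model = KMeans(n_clusters=2, n_init=10, random_state=0)
labels = model.fit_predict(features)

for asset_id, cluster in zip(asset_tags, labels):
    print(asset_id, "-> cluster", cluster)
```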

[If you doubt the ability of AI in this field, remember the UK Royal Wedding, where each celebrity was tagged in real time by AI]

4. Compliance

Compliance is the process of identifying events/scenes in a video that may restrict transmission or distribution in specific territories due to regulatory requirements.

AI, through supervised learning, can be used to identify such scenes within a given piece of video and present “time-in” and “time-out” points to an editing system for further edits.
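A minimal sketch of that hand-off, assuming a hypothetical `classify_frame` model trained on examples of restricted material: per-frame predictions are merged into the time-in/time-out pairs an editing system would consume.

```python
"""Sketch: turn per-frame compliance predictions into time-in/time-out points.
`classify_frame` is a hypothetical supervised model, not a specific product."""

def classify_frame(frame):
    # Hypothetical: return True if the trained model flags this frame
    # as restricted for the target territory.
    raise NotImplementedError


def flagged_ranges(frames, fps=25):
    """Merge consecutive flagged frames into (time_in, time_out) pairs in seconds."""
    ranges, start = [], None
    for idx, frame in enumerate(frames):
        if classify_frame(frame):
            if start is None:
                start = idx
        elif start is not None:
            ranges.append((start / fps, idx / fps))
            start = None
    if start is not None:
        ranges.append((start / fps, len(frames) / fps))
    return ranges
```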

Through neural networks and deep learning, AI can help rating agencies to quickly suggest ratings for a particular program or movie. The whole process can be stored in the memory of a system using a Recurrent Neural Network.

5. Editing

Editing is very much a human skill and requires complex decision making based on the creativity of the editor and what they consider to be the best viewing experience.

But there are editorial decisions that are routine, as with the compliance example above. For example, AI can help in identifying coarse language that needs to be beeped out, or in blurring certain frames or regions within a video, using advanced transcoding or editing system functions.
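As a simple illustration, the sketch below scans a time-aligned transcript (as produced by a speech-to-text step) for words on a placeholder coarse-language list and returns the time ranges to beep out; the word list and transcript format are assumptions made for the example.

```python
"""Sketch: find coarse language in a time-aligned transcript so an editing
system can beep it out. Word list and transcript format are illustrative."""

COARSE_WORDS = {"dang", "heck"}  # placeholder list, not a real compliance set


def beep_segments(transcript):
    """transcript: list of (word, start_sec, end_sec) from a speech-to-text step.

    Returns the time ranges an editing/transcoding system should mute.
    """
    return [(start, end) for word, start, end in transcript
            if word.lower().strip(".,!?") in COARSE_WORDS]


# Example: the returned ranges would be handed to the editing system
# as mute/beep instructions.
example = [("what", 1.0, 1.2), ("the", 1.2, 1.3), ("heck", 1.3, 1.6)]
print(beep_segments(example))  # [(1.3, 1.6)]
```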

6. Highlights

Sports event highlights are most sought after once a competition finishes, with audiences very much interested in the key moments. Currently, highlights are created through manual editing.

AI symbolic learning can more quickly identify the key moments of sports events and, using the aforementioned advanced transcoding and editing systems, help create the highlights.
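One common heuristic, illustrative only and not the specific method described here, is that crowd noise spikes around key moments. The sketch below flags unusually loud audio windows as candidate highlights.

```python
"""Sketch: flag candidate highlight windows from crowd-noise (audio energy)
spikes. An illustrative heuristic, not the article's specific method."""
import numpy as np


def highlight_windows(audio, sample_rate, window_sec=5.0, z_thresh=2.0):
    """Return (start_sec, end_sec) windows whose loudness is well above average.

    `audio` is a mono PCM signal as a NumPy array (e.g. decoded with FFmpeg).
    """
    win = int(window_sec * sample_rate)
    n_windows = len(audio) // win
    energy = np.array([
        np.mean(np.abs(audio[i * win:(i + 1) * win].astype(float)))
        for i in range(n_windows)
    ])
    z = (energy - energy.mean()) / (energy.std() + 1e-9)
    return [(i * window_sec, (i + 1) * window_sec)
            for i, score in enumerate(z) if score > z_thresh]
```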

I’m not talking about the distant future: Ferrari announced that in partnership with Intel, they are working on a way to create personal feeds when watching a race. A drone follows a particular car and AI will be able to mix and cut to deliver the relevant feed. It will be interesting to see how this develops.

AI can then improve the highlights through search functionality, quickly providing details of similar events in the past and creating or embedding links to make the highlights even more interesting. Such statistical analysis alongside video archives will add value to content.

7. Break Structure or Advertising

Identifying truly relevant advert placement alongside content can be tricky. If an advert appears at an oddly timed moment in a program, it might irritate the viewer.  But if it appears during a scene switch, it may well engage the consumer and encourage them to continue watching the next part of the program.

AI, through image recognition, can identify such scene changes and provide sweet spots to place advertisements. It can take this to the next level by providing relevant adverts based on metadata associated with a given scene or scenes. Deep learning and neural networks can help in identifying the mood of a scene and provide an opportunity to insert relevant adverts.
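As an illustration, the sketch below finds candidate “sweet spots” by detecting sharp changes in a frame-by-frame colour histogram with OpenCV; the similarity threshold is an assumption to be tuned per channel.

```python
"""Sketch: find ad-break 'sweet spots' via scene-change detection using a
colour-histogram difference. The threshold is an illustrative assumption."""
import cv2


def scene_changes(path, threshold=0.5):
    """Return timestamps (seconds) where the frame histogram changes sharply."""
    cap = cv2.VideoCapture(path)
    fps = cap.get(cv2.CAP_PROP_FPS) or 25.0
    cuts, prev_hist, idx = [], None, 0
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        hist = cv2.calcHist([frame], [0, 1, 2], None, [8, 8, 8],
                            [0, 256, 0, 256, 0, 256])
        hist = cv2.normalize(hist, hist).flatten()
        if prev_hist is not None:
            similarity = cv2.compareHist(prev_hist, hist, cv2.HISTCMP_CORREL)
            if similarity < threshold:
                cuts.append(idx / fps)
        prev_hist, idx = hist, idx + 1
    cap.release()
    return cuts
```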

[The recent trial by Channel 4 using AI to identify TV scenes and place contextual ads led to a 13% increase in purchasing intent. But this is just the beginning.]

8. Subtitling and Closed Captioning

Subtitling systems for video are clearly not a new thing. However, subtitling is complex and there are too often flaws in sentence construction or punctuation. Regional/local accents add further complexity.

Through NLP (Natural Language Processing) and RNNs (Recurrent Neural Networks), it will be possible to generate subtitles with the correct punctuation and sentence construction.
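A minimal sketch of such a pipeline, with both model calls left as hypothetical placeholders for whatever speech-recognition and punctuation models a broadcaster deploys: transcription, punctuation restoration, then grouping into subtitle cues.

```python
"""Sketch: an AI subtitling pipeline. Both model calls are hypothetical
placeholders, not a specific vendor's API."""

def transcribe(audio_path):
    # Hypothetical: run a speech-recognition model and return lower-case
    # words with start/end times, e.g. [("welcome", 0.0, 0.4), ...].
    raise NotImplementedError


def restore_punctuation(words):
    # Hypothetical: a recurrent (or transformer) sequence model that inserts
    # punctuation and capitalisation, returning the corrected word list.
    raise NotImplementedError


def to_subtitle_cues(words, max_words=7):
    """Group punctuated, time-aligned words into subtitle cues."""
    cues = []
    for i in range(0, len(words), max_words):
        chunk = words[i:i + max_words]
        text = " ".join(w for w, _, _ in chunk)
        cues.append((chunk[0][1], chunk[-1][2], text))
    return cues
```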

9. Supervision

While the distribution or transmission of content to a wide audience is straightforward, maintaining the quality of experience for the viewer and quickly identifying any issues can be tricky.

Using reinforcement learning can help improve our supervision practices. While of course able to detect faults and identify issues, AI takes it to the next level by predicting faults through deep learning.
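As a simple illustration of the predictive side, the sketch below flags anomalous monitoring metrics (a bitrate feed, say) with a rolling z-score, a stand-in for the deep-learning models mentioned above.

```python
"""Sketch: flag anomalous monitoring metrics before they become visible faults.
A rolling z-score stands in for a learned model; data below is invented."""
import numpy as np


def anomalies(metric, window=60, z_thresh=3.0):
    """Return sample indices where the metric deviates sharply from recent history."""
    metric = np.asarray(metric, dtype=float)
    flagged = []
    for i in range(window, len(metric)):
        history = metric[i - window:i]
        z = abs(metric[i] - history.mean()) / (history.std() + 1e-9)
        if z > z_thresh:
            flagged.append(i)
    return flagged


# Example: a sudden bitrate drop at the end of an otherwise steady stream.
bitrate = [5000] * 120 + [1200]
print(anomalies(bitrate))  # [120]
```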

10. Presenting the News

Robotics is the part of AI concerned with the physical movements of a system and its use in everyday life. Driverless cars are one such application. In the broadcasting world, we could use robots to present the news of the day. A humanoid AI can present the news from a script and introduce visuals from remote locations. It can also react to breaking news.

And again, this is in the very near future. Or even the present!

I’ve described 10 AI applications specific to video and broadcast. There are certainly more, and every month brings new potential uses.

Are you using AI in your day-to-day business? If not, do you plan to?