Content is king, but data is the key to success for media & entertainment companies.

In-Video Intelligence
unlocks the data inside the video box

Learn how AI/ML and data are reshuffling the cards for the Media & Entertainment industry.


TRANSCRIPT

 

Introduction

 

[Ophélie Boucaud – Dataxis] – Thank you for joining us today. We will be talking about in-video intelligence for the next 50 minutes or so. We have four speakers with us today to discuss this new concept: what lies behind it, and how it will be, and in several cases already is, changing the OTT landscape. So I’m really happy to welcome among us Christopher Kuthan, who’s the head of worldwide business development for direct-to-consumer and streaming in the media and entertainment industry at Amazon Web Services. He’s working with media companies and advertisers to build and deploy their D2C offerings on the Amazon platform. Thanks for joining us, Chris.

[Christopher Kuthan – AWS] –  Thanks. Great to be here.

[Ophélie Boucaud – Dataxis] – Hi. Then we also have with us Paul Moore, who has been in the media industry for over 15 years now and also has long experience in the IT field, notably in video streaming, 3D, digital preservation, social media, video analytics and recommender systems. He’s a member of the scientific community and is also responsible for the media industry in the Atos expert community. Thanks, Paul, for joining us today.

[Paul Moore – Atos] – Thanks for having me.

[Ophélie Boucaud – Dataxis] – We also have Thomas Bidet, who’s the chief product officer for e-TF1. He’s developing the biggest private French broadcaster’s digital product strategy and roadmap on the B2C segment and across all screens, and he will be giving us insights on how broadcasters are being challenged as they move into the OTT world. Hi, Thomas.

[Thomas Bidet – TF1] – Nice to meet you. Thanks.

[Ophélie Boucaud – Dataxis] – Thank you. And finally we have Steve Sklepowich, who’s the senior strategy advisor for Synchronized. He’s based in the US and is working on new partnerships and growth. He’s been advising technology companies on product categories and AI strategies for the last 10 years, and he will be talking today about Synchronized’s media intelligence solution: how they transform linear video into smart video, and how they collect, process and centralize metadata, objects and assets in the OTT environment. Hi, Steve.

[Steve Sklepowich – Synchronized] – Hi, nice to meet you. Nice to be here.

 

What is in-video intelligence (IVI)?

[Ophélie Boucaud – Dataxis] – Thanks. So Steve, I would like to start with you, if you could give us a few words introducing the concept of in-video intelligence, which is going to be at the core of our discussion today.

[Steve Sklepowich – Synchronized] – Sure. So in-video intelligence, or IVI, is actually a new market category. Today’s MAM systems, which have been around for a while, offer all kinds of rich data about what’s around the video, and video analytics systems track how consumers consume video. Unfortunately, the data inside the video is not visible: the essence and contents of the video are essentially hidden, sealed inside the video box. In-video intelligence is a new category of solutions that is here to help solve that problem.

 

What is the data challenge for OTT & broadcasters?

[Christopher Kuthan – AWS] – If you understand what the consumers are clicking on and what they’re watching, what they are abandoning, what they are interacting with, what they are sharing and so forth, that’s great. But if you don’t know what specifically is in the content, it’s, as I like to say, one hand clapping, right? You need to understand what’s actually in the content that triggers these interactions.

[Thomas Bidet – TF1] – The goal we are aiming at currently at TF1 revolves around this concept of the paradox of choice. This idea of data in videos, and the way we can have more content and a richer description around video content, is a key issue. For example, we recently added a catalog of nine thousand hours of AVOD content on MYTF1, and the way we promote and index this content on MYTF1 is a real pain point for us. That’s why we have been working with Synchronized for two years now. It’s a good way for us to get this data and put it to work for the user, and the user experience is key in our strategy. That’s why Synchronized helps us to do so.

[Paul Moore – Atos] – For media companies, especially the national media companies that are competing against big tech, it’s personalization, it’s engagement, it’s taking advantage of their archive. So if we can get better information about, on the one hand, the audience, and on the other, the content itself, then we can help the media companies do a much better job of increasing personalization, creating more engagement, creating more intimacy with the viewers. We can help them better understand their audience, but also, and this is what’s specific to in-video intelligence, better understand the content.

How does IVI optimize CX and automate workflows?

[Steve Sklepowich – Synchronized] – So I think consumers aren’t getting the most optimized experience, and this is where IVI helps the monetization model: offering more personalized content, the segments people want to watch, when they want to watch them. And on the productivity of the workflow, it’s a huge drain on a team to manually tag thousands of hours of video, and they could do that in much more productive ways. So there’s a way to automate and assist the editorial process, so those creative people can be more creative and not spend their time on mundane and repetitive tasks. There’s a lot that can be done right in the editorial workflow by using in-video intelligence.
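To make the automated-tagging idea concrete, here is a minimal sketch of time-coded label extraction with Amazon Rekognition Video via boto3. The bucket and file names are hypothetical, and a production pipeline would typically use SNS job notifications instead of polling and would write the labels into a catalog or CMS rather than printing them.

```python
import time
import boto3

rekognition = boto3.client("rekognition")

# Kick off asynchronous label detection on a video stored in S3
# (bucket and key are placeholders for illustration).
job = rekognition.start_label_detection(
    Video={"S3Object": {"Bucket": "my-media-bucket", "Name": "episode.mp4"}},
    MinConfidence=80,
)

# Poll until the job leaves the IN_PROGRESS state.
while True:
    result = rekognition.get_label_detection(JobId=job["JobId"], SortBy="TIMESTAMP")
    if result["JobStatus"] != "IN_PROGRESS":
        break
    time.sleep(10)

# Each detection carries a timestamp in milliseconds, which is what makes
# the labels usable as time-coded, in-video metadata.
for item in result.get("Labels", []):
    print(item["Timestamp"], item["Label"]["Name"], round(item["Label"]["Confidence"], 1))
```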

[Thomas Bidet – TF1] – Yes, the productivity of our content team is a key issue too. I spoke about front-end and UX assets because they’re more valuable for us in terms of business. And regarding the cost of content editing, of course there is a problem, because the ROI is not easy: when you add metadata you have a lot of work throughout the whole workflow, the whole process of ingesting videos, then the CMS, and then the exploitation on the front side. But what’s interesting is that the metadata creates value at each step of this workflow, with automation in the CMS but also on the front. It’s interesting for Google, because in terms of search engine optimization this metadata can be really useful for being relevant in Google’s algorithm. It’s interesting, for example, on the show page: when you want to watch the show “The Voice”, you want the information around “The Voice”, with the host, the candidates, etc. All of this is a lot of work for our team to put into the CMS, and Synchronized helps us create this metadata and extract it from the video content, and that’s really useful. Another example that could be relevant for you is what we did with smart thumbnails, which extract the best and most relevant picture from the content; we use it on the front side, and it’s really interesting to have this quality of thumbnails and images on the website or the application. So these are different ways to create value around metadata and machine learning, and it’s complementary to the team. And that’s important for me to explain, because you could see this kind of project as only a cost project, but that’s not the case for us. It’s a way to gain time and bandwidth in the team to do other things, to write articles for example, or to choose new content. This is really helpful, really.
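As a rough illustration of what a smart-thumbnail step can look like, the sketch below scores sampled frames by sharpness and face presence with OpenCV. The weighting and file names are hypothetical, and a production system like the one described would use learned aesthetic and relevance models rather than these simple heuristics.

```python
import cv2

def best_thumbnail(video_path, step=30):
    """Pick a candidate thumbnail by scoring every `step`-th frame on
    sharpness (Laplacian variance) plus a bonus for detected faces."""
    face_cascade = cv2.CascadeClassifier(
        cv2.data.haarcascades + "haarcascade_frontalface_default.xml")
    cap = cv2.VideoCapture(video_path)
    best_score, best_frame, idx = -1.0, None, 0
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        if idx % step == 0:
            gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
            sharpness = cv2.Laplacian(gray, cv2.CV_64F).var()
            faces = face_cascade.detectMultiScale(gray, 1.1, 5)
            score = sharpness + 500.0 * len(faces)  # arbitrary illustrative weighting
            if score > best_score:
                best_score, best_frame = score, frame.copy()
        idx += 1
    cap.release()
    return best_frame

frame = best_thumbnail("the_voice_ep1.mp4")  # hypothetical file name
if frame is not None:
    cv2.imwrite("thumbnail.jpg", frame)
```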

[Christopher Kuthan – AWS] – Yeah, it’s really great to hear Thomas say that, because that’s what we see as key across the globe: customers use exactly that intelligence you can gain out of the content with machine learning services. On the back end, you can use AI/ML to get more of this data out of the content and accelerate processes; we absolutely see that it helps free employees from mundane tasks and lets them focus on high-value tasks. And on the front end, this data informs, for example, recommenders and personalization in a much deeper way. Maybe one word on that: for video recommenders, you need to know what kind of videos you have that you can recommend, and this metadata gives better information about what’s in the archive or what’s available. It can also help, for example, to personalize and tailor the ad experience, to serve really personalized, more tailored ads. And I really like what Thomas said, and the projects they did there, when it comes for example to using that information together with AI/ML to improve or even personalize the artwork. They did it with thumbnails, right; you can go even further and not only select the right thumbnail but personalize it for the viewer of the platform. So those are maybe three categories of personalization that are informed by this data, and to generate that data you need AI/ML and things like in-video intelligence to feed that data pipeline.

[Steve Sklepowich – Synchronized] – I have one other thing to add there, which is the ability, if you have an hour-long broadcast but only five minutes to watch it, to identify the segments you’re most interested in. That’s the concept of a smart segment: the system can present to you, say, six minutes of content, because that’s all the time you have, and it’s the most relevant material, identified for you. So it’s the ability to personalize based on looking inside the video and surfacing the segments that are relevant at that moment for that user.
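One way to think about assembling such a personalized cut is as a selection problem over tagged segments. The sketch below treats it as a 0/1 knapsack over hypothetical (duration, relevance) pairs; a real system would get the segment boundaries from IVI tagging and the relevance scores from the user profile.

```python
def pick_segments(segments, budget_s):
    """Choose the subset of segments maximizing total relevance without
    exceeding the viewer's time budget (0/1 knapsack, 1-second granularity)."""
    n, W = len(segments), int(budget_s)
    best = [[0.0] * (W + 1) for _ in range(n + 1)]
    for i, (dur, rel) in enumerate(segments, 1):
        d = int(dur)
        for w in range(W + 1):
            best[i][w] = best[i - 1][w]
            if d <= w:
                best[i][w] = max(best[i][w], best[i - 1][w - d] + rel)
    # Backtrack to recover which segments were chosen.
    chosen, w = [], W
    for i in range(n, 0, -1):
        if best[i][w] != best[i - 1][w]:
            chosen.append(segments[i - 1])
            w -= int(segments[i - 1][0])
    return list(reversed(chosen))

# Hypothetical (duration_seconds, relevance) pairs for an hour-long broadcast,
# trimmed to a six-minute personalized cut.
segments = [(90, 0.9), (120, 0.4), (60, 0.8), (150, 0.7), (45, 0.6)]
print(pick_segments(segments, budget_s=360))
```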

[Paul Moore – Atos] – We see with our customers, with our clients, that they’re being required to produce more and more content every year: more channels, more content. And at the same time they’re, at best, maintaining the same number of employees, so they need to find ways to automate these tasks. Anything that can be done to increase the automation level, maybe not to 100% but at least making it easier for the media company employees to organize that information, is a huge help, and it probably also increases their job satisfaction by getting rid of the mundane, boring tasks: instead of choosing between 100 thumbnails, choosing between two or three is, you know, more satisfying. “Here are the three best, which one is the best?” rather than going through a whole pile of candidates.

How does IVI maximize monetization?

[Steve Sklepowich – Synchronized] – Auto-detecting ad breaks is going to be relatively easy to do; the interesting part, I think, is being contextual: looking at what comes before the ad spot and trying to make the ad relevant to the content. Can you get the ad into the right place, one that matches, that has some relevance to the content? An example being a car ad that follows a car chase. Is there some meaning, some context, that you can pull out of the stream that ultimately gets you the right ad at the right time? So as opposed to being based on user data, can it be based on contextual information in the stream as the user is experiencing it? I think that will be really interesting. And that’s really the future of advertising: more contextual advertising potentially drives more engagement and revenue.

[Christopher Kuthan – AWS] – If you don’t have that deep understanding of the videos and of the ads, you cannot match these things. By the way, you can obviously also go the other way. One last comment: since you now have this information, you can also go and create these kinds of ad moments. You can say: “Okay, in the video before the break you have palm trees and music and cocktails, so maybe I sell that ad moment to a brand for an increased CPM.” So you need to be at the operational level with a lot of data in order to create these kinds of experiences, and for that AI/ML is being used a lot.
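A toy sketch of the contextual matching both speakers describe: rank an ad inventory by tag overlap with the scene just before the break. The tags and inventory here are hypothetical stand-ins for what IVI extraction and advertisers would supply; real systems would more likely compare embeddings than raw tag sets.

```python
def jaccard(a, b):
    """Overlap between two tag sets, 0.0 (disjoint) to 1.0 (identical)."""
    a, b = set(a), set(b)
    return len(a & b) / len(a | b) if a | b else 0.0

def best_ad(scene_tags, ad_inventory):
    # Pick the ad whose tags best match the scene preceding the break.
    return max(ad_inventory, key=lambda ad: jaccard(scene_tags, ad["tags"]))

# Hypothetical IVI tags for the scene before an ad break, and a toy inventory.
scene = ["car", "chase", "city", "night"]
ads = [
    {"name": "sports-car-spot", "tags": ["car", "speed", "road"]},
    {"name": "soda-spot", "tags": ["beach", "summer", "friends"]},
]
print(best_ad(scene, ads)["name"])  # -> sports-car-spot
```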

 

What opportunities does IVI offer for the future?

[Paul Moore – Atos] – Bringing together deep audience insights with deep insights about the video, moment by moment, I think there are a lot of things that are not yet being considered in terms of understanding video that we will see in the future, when you bring those two together.

[Christopher Kuthan – AWS] – On the interactivity side, I like implicit interactivity, which is like modes: what kind of content do I serve up at what time of day, or depending on how I’m feeling; with podcasts that’s already been done to a degree. So that’s definitely something very interesting. Maybe one more sentence on the monetization side: on the one hand, raising the return on investment on that ad spot, or on ads in general, and on the other side reducing the ad load. There’s that spectrum, and there’s a lot we can do. You have personalized product placement: you have a video, you understand who is watching it and what their preferences are, and you overlay a certain product somewhere. But also just making the more traditional ad spots much more personalized and tailored is very interesting.

[Thomas Bidet – TF1] – You are in an optimistic mood, or you want something darker, and machine learning can sense that and generate a recommendation based not only on your age, your profile or your previous videos, but on your mood, on your emotion of the moment. That’s really interesting.

[Steve Sklepowich – Synchronized] – I also think that merchandising linked deep into the video is interesting. We’ve talked about that for a long time in IPTV, and we’re just seeing experiences that could be more directly in your face, around hot-spot generation or identification of products for sale. We’re also looking at something called natural cinema language technology to pull a trailer out of a movie, being able to extract a summary of it. You have natural language processing on text and voice, so can you pull a preview out of a video? There are all kinds of interesting use cases you can think about in terms of repurposing, aggregating and segmenting for whatever purpose, whether it’s localization or other options. Those are two of the interesting things I think about.

Conclusion

[Christopher Kuthan – AWS] – There’s a lot of innovation in that field, quite frankly not only out of the media space, but from companies that had to build engaging, interactive experiences because otherwise they would be out of business. So there are a lot of things there, and that accelerated a lot of the thinking in this area as well. I just wanted to add that.

[Paul Moore – Atos] – Multi-channel experiences obviously exist now, but the media companies are not really taking advantage of them. The Twitters and the Facebooks and so on are getting all of that, whereas the media companies, when people are watching TV and things are happening in parallel, aren’t taking advantage of it. If they understood their video a little bit more, and it takes more than just this, they could take advantage of that much better with in-video intelligence, because for that you need to know exactly what’s happening right now: when somebody reacts on Twitter, it’s mostly because of what’s happening right now.

[Christopher Kuthan – AWS] – There’s definitely a strong trend to move archives to the cloud now, again accelerated a bit by what happened over the last 18 months: with the data centers and so forth, people couldn’t really access a lot of the assets if they weren’t tagged.

 
[Paul Moore – Atos] – I’m sure the globals have more data, but I would argue that the local players probably have more need for it. Somebody like Amazon or Netflix can throw 10 million euros at a show, they can spend billions on content, whereas the local players need to lean on long-tail content, you know, the niche content. They also need to create, or hopefully are able to create, a level of intimacy and engagement with their audience that the globals can’t, just because they know them better: they’re closer to them. And data is absolutely vital for that. So I would say it’s more important for the local guys; it can be their superpower.