Content is king, but data is the key to success for media & entertainment companies.

In-Video Intelligence unlocks the data inside the video box

Learn how AI/ML and data are reshuffling the cards of the Media & Entertainment industry.


TRANSCRIPT

 

Introduction

 

[Ophélie Boucaud – Dataxis] – Thank you for joining us today. We will be talking about in-video intelligence for the next 50 minutes or so. We have four speakers with us today to discuss this new concept: what lies behind it, and how it will be, and in several cases already is, changing the OTT landscape. I'm going to start by introducing the speakers. I'm really happy to welcome Christopher Kuthan, who is the head of worldwide business development for direct-to-consumer and streaming in the media and entertainment industry at Amazon Web Services. He works with media companies and advertisers to build and deploy their D2C offerings on the Amazon platform. Thanks for joining us, Chris.

[Christopher Kuthan – AWS] – Thanks. Great to be here.

[Ophélie Boucaud – Dataxis] – Hi. Then we also have with us Paul Moore, who has been in the media industry for over 15 years now and also has long experience in the IT field, notably in video streaming, 3D, digital preservation, social media, video analytics and recommender systems. He is a member of the scientific community and is also responsible for the media industry in the Atos expert community. Thanks, Paul, for joining us today.

[Paul Moore – Atos] – Thanks for having me.

[Ophélie Boucaud – Dataxis] – We also have Thomas Bidet, who is the chief product officer for e-TF1. He develops the digital product strategy and roadmap of the biggest private French broadcaster on the B2C segment and across all screens, and he will be giving us insights on how broadcasters are being challenged as they move towards the OTT world. Hi, Thomas.

[Thomas Bidet – TF1] – Nice to meet you. Thanks.

[Ophélie Boucaud – Dataxis] – Thank you. And finally we have Steve Sklepowich, who is the senior strategy advisor for Synchronized. He is based in the US and works on new partnerships and growth. He has been advising technology companies on product categories and AI strategies for the last 10 years, and he will be talking today about Synchronized's media intelligence solution: how they transform linear video into smart video, and how they collect, process and centralize metadata, objects and assets in the OTT environment. Hi, Steve.

[Steve Sklepowich – Synchronized] – Hi nice to meet you. Nice to be here.

What is in-video intelligence (IVI)?

[Ophélie Boucaud – Dataxis] – Thanks. So Steve, I would like to start with you. Could you introduce in a few words the concept of in-video intelligence, which is going to be the core of our discussion today, before we kick off the panel session with the rest of the speakers?

[Steve Sklepowich – Synchronized] – Sure. So in-video intelligence, or IVI, is actually a new market category. Today's MAM systems offer all kinds of rich data about what's around the video, and video data analytics systems track how consumers consume video. Unfortunately, the data inside the video is not visible: the essence and contents of the video are essentially hidden and sealed inside the video box. In-video intelligence, or IVI, is here to solve that problem as a new category of solutions. It's essentially enabling access to that data, making it more transparent, interactive and intelligent. So it's a new category of product, and we're happy to be talking about it today.

[Ophélie Boucaud – Dataxis] – Thank you very much. So before we start the discussion, I would like to remind our audience that you can drop your questions for all of our speakers in the Q&A section, and we will take time at the end of this panel to answer them. Also, this webinar is being recorded, so you can watch it again very soon and share it with colleagues if you'd like.

 

The big challenge for M&E

 

[Ophélie Boucaud – Dataxis] – So to start the panel discussion, I would like each of you to give me some insights about the industry and the big challenges that you see in the OTT ecosystem. We've seen in the last years that the market is very fragmented; we have a lot of OTT players, which creates challenges in terms of differentiation. But there's also another big challenge, which is the abundance of content available on those digital platforms, with very large libraries being offered to viewers and a lot of content being aggregated, which also creates challenges in terms of viewer experience and content discovery. I would like to start with Thomas, as you're working for a traditional TV broadcaster moving towards OTT. How can content publishers compete with global giants in the OTT environment? What are the main challenges that you've seen?

[Thomas Bidet – TF1] – Good question. Do you have two hours for me? It would be long. The goal we're aiming at currently at TF1 revolves around this concept of the paradox of choice. I don't know the exact expression in English, but in French we say "le paradoxe du choix". I read and wrote some articles very recently about Netflix's issues and pain points around this paradox of choice, and I think this idea of data in videos, and the way we can have more content and more description around video content, is key for us to address this problem. You spoke about the abundance of content, and that's exactly the case for us. For example, we recently added a catalog of nine thousand hours of AVOD content on myTF1, and the way we promote and index this content on myTF1 is a real pain point for us. That's why we have been working with Synchronized for two years now. It's a good way for us to get this data and put it to work for user value, and the user experience is key in our strategy. That's why Synchronized helps us to do so.

[Ophélie Boucaud – Dataxis] – Thank you Thomas. Chris, I would like to ask you a similar question: what are the key differentiators in the OTT market nowadays?

[Christopher Kuthan – AWS] – Yeah, it's very similar to what Thomas said, and I would also want to start with the data: the consumer data, the interaction data, building the right data structure to capture what the consumers are doing, but also really understanding how they're interacting with the content in every moment. If you understand what the consumers are clicking on, what they're watching, what they are abandoning, what they are interacting with, what they are sharing and so forth, that's great. But if you don't know what specifically is in the content, it's, as I like to say, one hand clapping, right? You need to understand what's actually in the content that triggers these interactions. So on one side the user data, but on the other side the content data is also very important. And out of that deeper understanding come differentiation opportunities. This understanding can feed recommenders, it can inform content decisions, even greenlighting decisions, and it can go into innovation, like A/B testing some new interactive formats. Then you're able to measure the interaction of the viewers and see how your innovation, your offering and so forth resonates. So again, it's really mostly around data and what you can do with it.

[Ophélie Boucaud – Dataxis] – Paul, now I'm turning to you. How can tech partners enable OTT players to maximize the value of their content and digital assets?

[Paul Moore – Atos] – Well, we've known for years the issues for media companies, especially the national media companies that are competing against big tech: it's personalization, it's engagement, it's taking advantage of their archive. So if we can get better information about, on the one hand, the audience, and on the other, the content itself, then we can help the media companies do a much better job of increasing personalization, creating more engagement, creating more intimacy with the viewers. So how can we help? We can help them better understand their audience, but also, and this is what's specific about in-video intelligence, better understand the content.

[Ophélie Boucaud – Dataxis] – Thanks. And Steve, now also your take on this: what's the role of in-video intelligence in the OTT landscape?

[Steve Sklepowich – Synchronized] – Yeah. I think consumers aren't getting the most optimized experience, so this is helping the monetization model by offering more personalized content: the segments that people want to watch, when they want to watch them. And on the productivity of the workflow, it's a huge drain on a team to manually tag thousands of hours of video, and they could do that in much more productive ways. So there's a way to automate and assist the editorial process, so those creative people can be more creative and not spend their time on mundane and repetitive tasks. There's a lot that can be done right in the editorial workflow by using in-video intelligence.

 

How IVI optimizes CX and automates workflows

 

[Ophélie Boucaud – Dataxis] – Thank you Steve. So you mentioned several big challenges that OTT players are facing now, and the ways IVI can help them optimize their assets. I would like to start with the first one you mentioned: optimizing the consumer experience. How can we improve UX and viewer satisfaction, especially when it comes to access to content and content discovery? We've seen that OTT players have more and more content available on their platforms. What can IVI do to facilitate a viewer's journey through this content? Maybe Thomas, if you want to start with that, as you mentioned you integrated a very large library on myTF1. What were the challenges you faced and what were the solutions you implemented?

[Thomas Bidet – TF1] – Yes, the productivity of our content team is a key issue too. I spoke about the front end and UX assets because they're more valuable for us in terms of business. Regarding the cost of content editing, of course there is a problem because the ROI is not easy: when you add metadata, you have a lot of work across the whole workflow, the whole process of ingesting videos, then the CMS, and then the exploitation on the front side. But what's interesting is that the metadata we add creates value at each step of this workflow, with automation in the CMS but also on the front end. It's interesting for Google, because in terms of search engine optimization this metadata can be really useful for us to be relevant in Google's algorithm. It's interesting, for example, on the show page: when you want to see the show "The Voice" and you want the information around "The Voice", with the hosts, the candidates, etc. All this is a lot of work for our team to put into the CMS, and Synchronized helps us create this metadata and extract it from the video content, and that's really useful.
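To make the SEO point concrete, here is a minimal sketch, in Python, of how metadata extracted from a video might be turned into schema.org structured data for a show page. The input dictionary, its field names and the sample values are hypothetical, not TF1's or Synchronized's actual data model:

```python
# Minimal sketch: turn metadata extracted from inside a video into schema.org
# JSON-LD for a show page, so search engines can index what the video contains.
# All field names and sample values below are hypothetical.
import json

def show_page_jsonld(meta: dict) -> str:
    """Build a schema.org TVSeries JSON-LD block from extracted metadata."""
    doc = {
        "@context": "https://schema.org",
        "@type": "TVSeries",
        "name": meta["title"],
        "description": meta["summary"],
        "actor": [{"@type": "Person", "name": n} for n in meta.get("people", [])],
        "keywords": ", ".join(meta.get("topics", [])),
    }
    return json.dumps(doc, ensure_ascii=False, indent=2)

# Example input, shaped as an IVI pipeline might emit it for a show page.
meta = {
    "title": "The Voice",
    "summary": "Singing competition in which coaches pick candidates blind.",
    "people": ["Host Name", "Candidate A"],  # e.g. detected on screen / in credits
    "topics": ["music", "competition", "prime time"],
}
print(show_page_jsonld(meta))  # embed the output in the show page's <head>
```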

And another example that could be relevant for you is what we did with smart thumbnails. Synchronized built a machine learning model that extracts the best and most relevant picture from the content, and we use it on the front side. Sometimes humans are better, so you have the possibility to change this machine proposition, but generally we keep it, and it's really interesting to have this quality of thumbnails and images on the website and the application. So these are different ways to create value around metadata and machine learning, and it's complementary to the team. And it's important for me to explain this, because we could see this kind of project as only a cost project, but that's not the case for us: it's a way to gain time and bandwidth in the team to do other things, to create articles for example, or to choose new content. This is really helpful, really.
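As an illustration of the smart-thumbnail idea (a heuristic sketch, not Synchronized's actual model, which is a trained one), the following samples frames, scores them on sharpness and exposure, and surfaces a few candidates for the editor to approve or override:

```python
# Minimal sketch: score sampled frames with simple visual heuristics and
# surface the top few for human review, instead of scrubbing the whole video.
import cv2  # pip install opencv-python

def top_thumbnails(video_path: str, k: int = 3, step: int = 30) -> list[int]:
    """Return the indices of the k most promising frames."""
    cap = cv2.VideoCapture(video_path)
    scored = []  # (score, frame_index)
    idx = 0
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        if idx % step == 0:  # sample one frame out of every `step`
            gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
            sharpness = cv2.Laplacian(gray, cv2.CV_64F).var()
            brightness = gray.mean()
            # Reward sharp frames, penalize near-black or blown-out ones.
            score = sharpness * (1 - abs(brightness - 128) / 128)
            scored.append((score, idx))
        idx += 1
    cap.release()
    return [i for _, i in sorted(scored, reverse=True)[:k]]
```

The human-in-the-loop step Thomas describes then reduces to picking among k candidates rather than reviewing every frame.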

[Ophélie Boucaud – Dataxis] – Thank you very much Thomas for your insights. Chris, do you also have things to share about this? Because I've seen you reacting quite positively to those comments.

[Christopher Kuthan – AWS] – Yeah, it's really great to hear Thomas say that, because that's what we see as key across the globe: customers use the intelligence that you can gain with machine learning services, the data you can get out of the content, for building the UX experiences. On the back end, you can use AI/ML to get more of this data out and accelerate processes, and 100% we see that it helps free up employees from mundane tasks so they can focus on high-value tasks. On the front end, this data informs, for example, recommenders and personalization in a much deeper way. Maybe one word on recommenders: what kind of videos do I have that I can recommend? This metadata gives me better information about what I have in my archive, what I have available. But it can also help me, for example, personalize and tailor the ad experience, serving more personalized, more tailored ads. And I really like what Thomas said and the projects they did there, for example using that information together with AI/ML to improve or personalize the artwork. They did it with thumbnails, and you can go even further: select the right thumbnails and then personalize the thumbnails for each viewer of the platform. So those are maybe three categories of personalization that are being informed by the data. And to generate that data you need AI/ML and things like in-video intelligence to feed that data pipeline.

[Steve Sklepowich – Synchronized] – I have one other thing to add there, which is the ability, if you have an hour-long broadcast and only five minutes to watch it, to identify the segments that you're most interested in. So there's the concept of a smart segment, where the system can present to you maybe six minutes of content, because that's all the time you have, and it's the most relevant material, identified for you. It's the ability to personalize based on looking inside the video and seeing the segments that are relevant at that moment for that user.

[Christopher Kuthan – AWS] – You could even create something like a catch-up service. You're jumping into a game or something like that at a certain point, and you get a kind of personalized summary of what you missed before picking up the game where it is now. But you can only do that when you know what's going on in the video, right? So that's an example.
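A minimal sketch of the mechanism behind these smart segments and catch-up summaries, assuming an IVI pipeline has already tagged and scored the segments (the scores and timings below are invented for illustration):

```python
# Minimal sketch: fill a viewer's time budget with the most relevant tagged
# segments, replayed in story order. Relevance would come from in-video
# intelligence combined with the user profile; here it is hard-coded.
from dataclasses import dataclass

@dataclass
class Segment:
    start: float      # seconds into the program
    duration: float   # seconds
    relevance: float  # 0..1, per-user relevance of what happens in the segment

def build_recap(segments: list[Segment], budget_s: float) -> list[Segment]:
    chosen, used = [], 0.0
    # Greedy: take the most relevant segments first...
    for seg in sorted(segments, key=lambda s: s.relevance, reverse=True):
        if used + seg.duration <= budget_s:
            chosen.append(seg)
            used += seg.duration
    # ...then play them back in chronological order.
    return sorted(chosen, key=lambda s: s.start)

game = [Segment(120, 40, 0.9), Segment(900, 60, 0.5), Segment(1500, 45, 0.95)]
for seg in build_recap(game, budget_s=300):  # "I only have five minutes"
    print(f"play {seg.duration:.0f}s from t={seg.start:.0f}s")
```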

[Ophélie Boucaud – Dataxis] – Thank you. Paul, do you want to add something too? Maybe some insights or business cases regarding recommenders?

[Paul Moore – Atos] – Yeah, well, just in general, we see with our clients that they're being required to produce more and more content: every year more channels, more content. And at the same time they're at best maintaining the same number of employees, so they need to find ways to automate these tasks. On the one hand there's the paradox of choice that was mentioned before: there's more and more information, but fewer, or the same number of, people to organize that information. So anything that can be done to increase the automation level, maybe not to 100%, but at least making it easier for the media company employees to organize that information, is a huge help. It probably also increases their job satisfaction by getting rid of the mundane, boring tasks: instead of choosing between 100 thumbnails, choosing between two or three is more satisfying. "Here are the three best, which is the best one?" rather than going through a whole bunch of whatever. And then there's the other side. Maybe I'll talk not from the point of view of a tech company but from the point of view of an end user: recommender engines just don't work all that well. When I connect to these OTT platforms, I spend a lot of time trying to find what I want. I seem to recall reading that people spend an average of something like 15 minutes looking for content, and that's ridiculous. If it's your first five times on a platform, fine, but if you've been on that platform hundreds of times, if you've watched hundreds of movies and series, how can they not know what you like? And every time I connect I get the feeling they don't know what I like. They've seen what I've watched. So if they know what I've watched, and they understand better what that content is, then they should be able to provide me with a much more satisfying recommender experience. And right now I'm not getting that satisfying recommender experience, I think. Sorry, Thomas.

[Ophélie Boucaud – Dataxis] – Where do you think the key to improving recommenders lies?

[Paul Moore – Atos] – Just better understanding the content. Right now there are very few parameters. If we understand the content better, whether it's who the actors are or going beyond just genre, we can be a lot more specific about it. So there could be all kinds of improvements.
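A toy sketch of that point: with richer in-video tags (people, themes, moods) rather than a single genre label, even a crude overlap measure separates titles far more precisely. The catalog and tags here are invented, and a production recommender would learn embeddings rather than use Jaccard overlap:

```python
# Minimal sketch: item-to-item similarity over rich content tags.
def jaccard(a: set, b: set) -> float:
    """Overlap between two tag sets, 0..1."""
    return len(a & b) / len(a | b) if a | b else 0.0

catalog = {  # hypothetical tags an IVI pipeline could extract per title
    "doc_ocean": {"documentary", "nature", "ocean", "calm", "narrated"},
    "doc_space": {"documentary", "science", "space", "awe", "narrated"},
    "thriller_1": {"fiction", "thriller", "heist", "tense", "night"},
}

def recommend(watched: str, k: int = 2) -> list[tuple[float, str]]:
    src = catalog[watched]
    ranked = sorted(
        ((jaccard(src, tags), title) for title, tags in catalog.items()
         if title != watched),
        reverse=True,
    )
    return ranked[:k]

# With genre alone, both candidates would look equally (un)related;
# with richer tags, the space documentary clearly wins.
print(recommend("doc_ocean"))
```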

[Ophélie Boucaud – Dataxis] – Maybe you can give us some insight, since you're moving from a linear stream towards the OTT world. Moving onto digital platforms, you of course get much more granular insights on what audiences watch and what their behavior is on the platform. Can you see different patterns? How did it help you to change your strategy online?

[Thomas Bidet – TF1] – Very interesting, what you said, Paul. I totally agree with everything. Netflix speaks of 19 minutes on average for the choice of content, so it's interesting to see how our work can reduce this time of choice. Another point regarding your question, Ophélie, is the evolution of usage. Between linear and catch-up, between linear and extracts or scenes, there is a lot of work and improvement to do in the way we promote our content and give access to it. Christopher spoke of extracting scenes from the content, which I call points of interest. We have a feature in our player that lets you jump to a point of interest, for example a goal during a match or a scene during a movie or a series. And I remember two or three years ago I was totally horrified by the speed-watching feature in YouTube and other players, and I'm surprised today because I use it, for example, for documentaries. For movies it's more difficult, but for documentaries it's totally relevant. Two or three years ago I thought it was just impossible to watch something sped up. So we see that usage is going further and we need to give an answer in terms of UX on myTF1, and we do now offer this functionality too. We have problems with producers because of the integrity of the content, of course, but that's another issue. For me the real questions are discovery on one hand, access to content, and then the way I consume this content. Maybe I want to consume it in a different way: speed watching, extracted scenes, with the sound, without the sound, with subtitles, without subtitles; it's just infinite. So I think all the possibilities are on the table, and now we need to keep it simple, because when a player has a lot of functionalities and too technical an image or feel, I don't think it's a good way to do entertainment. That's why I don't want to have the strategy of, for example, YouTube, which is a more technical player. We keep it simple, but with flexibility for consumers.

[Paul Moore – Atos] – Also, I think younger viewers consume most content differently. They're probably not going to sit for two hours for most content. To me, broadcasters such as yourselves are losing younger viewers. You need to adapt to this different way of consuming, whether it's content that is itself shorter, whether it's segmenting and choosing, as you were saying, or whether it's binge viewing. That's absolutely vital to keep, and hopefully win back, these younger viewers. You know, most of us on this call seem to be bald guys, so we're probably not the 20-year-old TikTok viewer. But you need to get those people back.

[Thomas Bidet – TF1] – Vertical content is another aspect of our work today in terms of innovation. Because we know that young people, whether they are outside or at home, want to hold their smartphone like this [vertically] and not like this [horizontally]. That's just usage, and 80 percent of young people just want it, the TikTok generation as you said. And we need to adapt to it.

[Ophélie Boucaud – Dataxis] – Chris, do you have things to share?

[Christopher Kuthan – AWS] – Maybe I can add something on the TikTok generation. It all plays back into what we already discussed. If you build an experience like that, you can think of taking it even further into potentially very interactive experiences. I've seen a lot of interest in interactive experiences enabling creators to build communities with followers around certain content, with a lot of user-generated content. I'm opening up a big topic here, but obviously this is the consumption pattern at the moment. And not to stick with TikTok: Twitch, for example, is enabling a similar viewing pattern. My kids, that's what they watch, short pieces and so forth. But again, underneath, if you want to even think about going into user-generated content, you need something like in-video intelligence, because you need to understand what's being uploaded to the platform. There's a supply chain aspect there: you want to provide safety for content that's being uploaded. But beside that, when you understand what's in those pieces of content and how viewers are interacting with them, you can inform a lot of recommendations underneath, and you can take that information and see what kind of content you should be creating. So taking streaming to the interactive level is very much supported by understanding what is in the videos and what the viewers are doing with them.

[Steve Sklepowich – Synchronized] – I also think, when you compare today's metadata with what's possible when you look at the data inside the video, there's just going to be so much more data. So thinking about what becomes possible once you get inside and really start looking at that data will be interesting, I think.

[Christopher Kuthan – AWS] – That's, underneath, a big reason for the success of these platforms. You see one video, as Thomas said, in front of you, and at that point somebody does something: he likes it, or shares it, or goes to the next one, swipes it or whatever. That's like a recommendation signal. It's a perfect situation for what we're discussing here: you need to understand what's going on at that moment, and that way you can build an experience on top of it.

 

How IVI maximizes monetization

 

[Ophélie Boucaud – Dataxis] – Yeah, you mentioned earlier what triggers the interaction with the content, what triggers the viewers. One more topic I would like your insights on is monetization. When it comes to OTT platforms, we've seen many different types of business models, whether it's AVOD, SVOD or TVOD. How can data utilization help create new opportunities for monetization, and maximize the quality of that monetization? Steve, if you want to start?

[Steve Sklepowich – Synchronized] – Yeah. Beyond the base level of auto-detecting ad breaks, which is going to be relatively easy to do, it's, I think, about being contextual: looking at what comes before the ad spot and trying to make the ad relevant to the content. Can you get the ad into the right place, one that matches, that has some relevance to the content? An example being a car ad right after a car chase. Is there some meaning, some context, that you can pull out of the stream that ultimately gets the right ad at the right time? So as opposed to being based on user data, can it be based on contextual information in the stream, as the user is experiencing it? I think that will be really interesting. And that's really the future for advertising: more contextual advertising potentially drives more engagement and revenue.
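A minimal sketch of that contextual matching, assuming the scene labels come from an IVI pipeline and the ad inventory is tagged; all names and labels here are hypothetical:

```python
# Minimal sketch: pick the ad whose labels best overlap the labels detected
# in the scene just before the ad break ("car chase, then a car ad").

scene_before_break = {"car", "chase", "city", "night"}

ad_inventory = [
    {"id": "ad_car", "labels": {"car", "driving", "city"}},
    {"id": "ad_soda", "labels": {"beach", "summer", "drink"}},
    {"id": "ad_tires", "labels": {"car", "tires", "safety"}},
]

def pick_ad(scene: set, ads: list[dict]) -> dict:
    """Return the candidate ad with the most labels in common with the scene."""
    return max(ads, key=lambda ad: len(scene & ad["labels"]))

print(pick_ad(scene_before_break, ad_inventory)["id"])  # -> "ad_car"
```

A real ad decisioning engine would of course combine this contextual signal with pricing, pacing and user consent, but the core idea is the same: the scene labels only exist if something has looked inside the video.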

[Ophélie Boucaud – Dataxis] – Maybe Thomas? Did the move towards OTT also change the way broadcasters sell advertising in the digital environment? How is that being shaped differently, and how is the data input from the OTT environment helping with ad sales?

[Thomas Bidet – TF1] – Yeah, of course, advertising is our main model for the moment. We try to increase the value of data (large-scale behavioral data, qualification during the checkout process, etc.). We are not going very deep yet in the way we capture this data during video consumption; I think there is a field to be explored on our side, we are just at the beginning. It's more upfront, during the onboarding and the checkout and authentication process; during video consumption we don't do a lot. Of course there is the ad break, for example, and a very simple feature, the ad pause: if you put your video on pause, you see a display ad. Very simple, but I'm sure we can capture this data and use it for more and more sales processes. But keep in mind that in Europe especially we have the RGPD; we have this constraint, and the CNIL, and so on. So it's a bit difficult for us for the moment to reach a good level of usage of this data. Maybe in the future it will be better than today.

[Paul Moore – Atos] – The other side is better utilization of the archive. It's providing long-tail content to users, but it can also be reutilization of content, especially for documentaries. And if you don't have a deep understanding of what is in the archive, then finding what you're looking for is really hard for a media company.

[Thomas Bidet – TF1] – For sure, but it's not in our DNA, because 80% of our consumption happens during the three or four days after broadcast. We are very close to the live and linear process. Of course, the long tail could be something really useful for us; in terms of SEO it could be really useful to link contents together, but I think we are beginners at this.

[Paul Moore – Atos] – When something happens, somebody gets married, somebody dies or whatever, finding relevant content is also not all that easy if you don't have a properly annotated archive.

[Thomas Bidet – TF1] – Yeah for sure.

[Christopher Kuthan – AWS] – Yeah, that avoids the sneakernet, right? I heard that term at some point in my past: when you have a news event and you send somebody down three blocks to the archive to get the VHS with that person's name written on the label. It's like a joke, but it's obviously a key use case. If you have a news event and you need immediate footage of the person in the event, or the company or whatever, you can get it out of the archive quickly. So that's the key use case.

[Thomas Bidet – TF1] – But maybe it's more relevant for video platforms with this long-tail strategy. On myTF1, most of our consumption is long-form content, so maybe you don't want this kind of interruption, with a large amount of data, sections and information during the viewing. Netflix, for example, is quite similar: it's not really enriched. I think that's maybe the difference with Amazon's X-Ray on your side; for me it's more of a long-term, long-tail perspective. Your metadata usage is more advanced, but you've had IMDb for 25 years, if I remember well, so it's in your DNA to have this data set.

 

How IVI assists the human process

 

[Ophélie Boucaud – Dataxis] – Steve, do you want to add something on that?

[Steve Sklepowich – Synchronized] – I think we covered a lot of the points here. I think now it's more about thinking about productivity and cost savings as well as money making, and how to balance the two.

[Ophélie Boucaud – Dataxis] – An interesting point that Thomas made earlier was about how in-video intelligence can be complementary to teamwork, and how it can be used as a resource to better allocate the team to new projects.

[Steve Sklepowich – Synchronized] – Absolutely, I think productivity is there to assist the human process. More automation can take care of the mundane tasks and make the process more efficient, so that the creative editorial team can focus on being creative. I think that's a really important point. So you're not spending hours tagging something; you're spending much more time focused on the UX and how you can better use that tagged content. It's a human-plus-AI workflow that ultimately results in a better user experience, a better-monetized experience. When you factor in the productivity gains, you can ultimately push the consumer experience forward.

[Ophélie Boucaud – Dataxis] – Chris, can you maybe also give some insights on that, on how IVI can help with automation and how it balances with human resources?

[Christopher Kuthan – AWS] – Yeah, we already talked about a lot of the elements, but when you add these AI services, with humans in the loop, you definitely get more metadata. It's just impossible for a purely human workflow to produce that level of metadata. When you look into every frame, there's a lot of information in every single frame, so how are you going to attack that manually? And then there's also the quality, because there are obviously different levels of quality depending on who is tagging. With AI and these services, you bring everything up to a certain level, and then you have a human in the loop improving it, so it definitely raises quality. There are use cases we already mentioned, like brand safety (filtering out inappropriate user-generated content), or monetization with certain branding rules within the ad pods: for example, I don't want this brand together with that other brand in the same ad segment, or I don't want an airline commercial after a bad airline situation in the video. If you don't have that deep understanding of the videos and of the ads, you cannot match these things. And by the way, you can also go the other way: since you now have this information, you can create these kinds of ad moments. You can say: "Okay, in the video before, you have palm trees and music and cocktails, so maybe I sell that ad moment to a brand for an increased CPM." You need to be at that operational level, with a lot of data, to create these kinds of experiences, and for that AI/ML is being used a lot.
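A minimal sketch of such pod rules, competitive separation plus context safety, with hypothetical data; real ad decisioning systems are of course far more involved:

```python
# Minimal sketch: fill an ad pod while enforcing two of the rules Chris
# mentions: no two brands from the same category in one pod, and no ad from
# a category the preceding scene has been flagged as sensitive for.

def fill_pod(candidates: list[dict], scene: dict, slots: int = 3) -> list[dict]:
    pod, used_categories = [], set()
    for ad in candidates:  # assume candidates pre-sorted by expected CPM
        if ad["category"] in used_categories:
            continue  # competitive separation within the pod
        if ad["category"] in scene.get("sensitive", set()):
            continue  # context safety: scene flagged against this category
        pod.append(ad)
        used_categories.add(ad["category"])
        if len(pod) == slots:
            break
    return pod

ads = [
    {"id": "a1", "category": "airline"},   # blocked: sensitive context
    {"id": "a2", "category": "auto"},
    {"id": "a3", "category": "auto"},      # blocked: second auto brand
    {"id": "a4", "category": "beverage"},
]
scene = {"sensitive": {"airline"}}  # the preceding scene shows an airline incident
print([ad["id"] for ad in fill_pod(ads, scene)])  # -> ['a2', 'a4']
```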

 

What opportunities does IVI offer for the future?

 

[Ophélie Boucaud – Dataxis] – Thank you very much. Moving forward now, because we don't have that much time left. I would remind the audience that you can drop questions in the Q&A box and we'll take five minutes at the end to respond. I would like to ask all of you what you see in the future: where is data utilization going, and what are the big new opportunities you're looking forward to, that we're not yet able to implement on OTT platforms but that you see coming up soon? Maybe Paul, if you would like to start?

[Paul Moore – Atos] – Well, in a sense it's not new, but actually taking what I was talking about earlier, content recommendation and all that, and properly bringing together deep audience insights with deep, moment-by-moment insights about the video. I think there are a lot of things not yet being considered in terms of understanding video that we will be seeing in the future. All those things also influence people's acceptance, how much they enjoy something, so the level of understanding of the video will be much higher, and as I said, so will the level of understanding of the audience. When you bring those two together, it can be really meaningful. To go a little bit further out: in the future, if we start having more inputs from the audience, haptics, understanding people's moods and emotions, and then joining that with what's happening in the video, we could see some really interesting things happening with recommendations. And that also leads into interactivity, maybe a sort of implicit interactivity, where the video responds to people's emotions. But you need to understand what is in the video to be able to do that.

[Ophélie Boucaud – Dataxis] – Yeah. Chris, do you also have things to share on what's coming up next?

[Christopher Kuthan – AWS] – A lot has been said already. I definitely want to +1 Paul's comment on the interactivity side. I like the implicit interactivity, which is like moods: what kind of content do I serve up at what time of the day, or depending on how I'm feeling? That's already been done to a degree with podcasts, so taking it to the next level with interactivity is definitely very interesting. I already mentioned the community builders and the creators: really following the content that you specifically like, and then building monetization models on that. If there is a creator I love and follow, with a community, then they can maybe offer an upgraded experience, for example sell a private session to explain something I'm interested in, things like that. So taking it from one-to-many to one-to-few to one-to-one and back and forth: that's a spectrum I really see experimentation with, and it's really interesting. Maybe one more sentence on the monetization side: raising the return on investment on the ad spot, or on ads in general, and on the other side reducing the ad load. There's a lot we can do across that spectrum. I don't want to go all the way to product placement, that's kind of the extreme, but you could have personalized product placement: you have a video, you understand who is watching it and what their preferences are, and you overlay a certain product somewhere. But also just making the more traditional ad spots much more personalized and tailored is very interesting. So I really like these two things; I see a lot of activity around them.

[Ophélie Boucaud – Dataxis] – Thank you Chris. Thomas, maybe now? What are you looking forward to?

[Thomas Bidet – TF1] – The same things, but I think we are not as advanced at myTF1; I'm very humble and modest about this. What I'm really keen on, from my personal experience on some platforms: you are in an optimistic mood, or you want something darker, and machine learning could sense that and generate a recommendation based not only on your age, your profile or your previous videos, but on your mood, your emotion of the moment. That's really interesting, and for the moment I don't see many examples of machine learning tagging moods. If I take a competitor, myCanal in France, from Canal+, you have mood tagging, but it's human tagging: the content team puts it there because they know really well what emotion is behind the content. On the machine learning side, I don't know if it's feasible yet, maybe yes, but this is one way to improve the UX around video. Another direction we are looking at, of course, is interactivity, but we have failed a lot on that, I'm really honest with you. Five or six years ago I created a second-screen experience on myTF1 during "The Voice" and during matches; it was really difficult because AWS was not able to support it at the time. Today I think it would be possible with you, but five years ago it was really difficult in terms of tech, and the usage was not there. But now co-watching is a feature that can be worked on. We saw initiatives on Disney+ and I don't remember exactly which other platform. So co-watching, this idea of sharing the experience with friends and so on. And we saw, at WWDC, Apple launch FaceTime video sharing; that's quite close to this, and I think there's something around it to work on.

[Steve Sklepowich – Synchronized] – I also think that merchandising linked deep into the video is interesting. We've talked about that for a long time in IPTV, and we're now seeing experiences that could be more directly in your face, around hotspot generation or identification of products for sale. We're also looking at something called natural cinema language technology to pull out a movie trailer, being able to extract a summary of a video. You have natural language processing on text and voice, so can you pull out a preview of a video? There are all kinds of interesting use cases you can think about in terms of repurposing, aggregating and segmenting for whatever purpose, whether it's localized or otherwise. Those are two interesting things that I think about.

[Christopher Kuthan – AWS] – Just one comment: over the last 18 months we all went through this global experience, and there has been a lot of innovation in that field, quite frankly not only out of the media space, but from companies that had to build engaging interactive experiences because otherwise they would be out of business. So there's a lot happening, and that accelerated a lot of the thinking in this area as well. I just wanted to add that.

[Paul Moore – Atos] – Another area is multi-channel experiences. They obviously exist now, but the media companies are not really taking advantage of them. The Twitters and the Facebooks and whoever are getting all of that, whereas the media companies, when people are watching TV and having parallel experiences on other channels, aren't taking advantage of it. If they understood their video a little bit more, and it takes more than just this, they could take advantage of that much better with in-video intelligence, because for that you need to know exactly what's happening right now: when somebody reacts on Twitter, it's mostly because of what's happening right now.

 

Q&A

 

[Ophélie Boucaud – Dataxis] – Okay, thank you very much all. I'm going to take a few questions now. We only have five minutes, so I don't think we can cover all of them, but maybe to start with one that's related to what Steve just talked about regarding automated trailers; I think you said natural cinema language technology. We had a question on how you see the importance of trailers for series content to capture the audience; for example, HBO does not provide series trailers but Netflix does. Steve, do you want to react to that?

[Steve Sklepowich – Synchronized] – From a technical standpoint, it's feasible to imagine that being available in the relatively short term. Is it a useful use case from a service provider perspective? I don't know the answer; maybe that's more your expertise, Thomas. It's certainly productive for the team, but it's more about what the service provider would think of it. I don't have that insight.

[Ophélie Boucaud – Dataxis] – Another question was: what is the level of depth and maturity of broadcasters in Europe and in the US? Maybe Thomas, if you want to give some insights on that, since you're a French broadcaster?

[Thomas Bidet – TF1] – To be honest, in France I'm not sure we are the best players among broadcasters. But our focus during these four or five years was really on the quality of service, on being really good in the way we broadcast content on our websites and applications. For example, yesterday, I can't give you the exact figures, but it was really huge during the match, around 17 million people. So as you can imagine, the non-linear product is now huge too; even if it's 10%, 20% or 30%, we are focused on this. And for example, our points-of-interest feature that I told you about a few minutes ago was not available yesterday, because we were focused on the quality of service of the main content, the match, and we tried to be as good as possible on that. So I don't know exactly; maybe in Germany it's more advanced, the UK I'm not sure, but we benchmark a lot against the German players, and of course the US and the GAFAM, because they have all the assets.

[Ophélie Boucaud – Dataxis] – So Chris, maybe you can react to that from the US market ?

[Christopher Kuthan – AWS] – Yeah, I'm just thinking about the general answer here, to be quite honest. There's definitely a strong trend to move archives now, again a little bit accelerated by what happened over the last 18 months: people couldn't really access a lot of the assets if they weren't tagged, with data centers hard to reach and so forth. And I want to +1 what Thomas said on the distribution; it's maybe a bit broader in the US. Very generically, because we don't have time to go through all the details: even within a company there are data silos. Some parts of a company are well built out, and other parts are not taking advantage of it, so even within organizations maturity is spread out a little bit.

[Thomas Bidet – TF1] – Yeah, and keep in mind, I'm doing a bit of a political speech now, but we have constraints in terms of legal issues with the RGPD, the CNIL, e-privacy. Of course, for end users I think it's a good way to protect personal data, but for UX it's not, in my view, really helpful, and our room for maneuver compared with you in the US is smaller.

[Ophélie Boucaud – Dataxis] – One last question now, because we're coming to the end of the session: is data of the same importance for local video players versus global ones? Paul, do you want to react to that?

[Paul Moore – Atos] – Well, in terms of capabilities I'm sure the globals have more, but I would argue that the local players probably have more need for the data. Somebody like Amazon or Netflix can throw 10 million euros at a show, they can spend billions on content, whereas the local players need to lean on long-tail content, niche content. And they also need to create, or hopefully are able to create, a level of intimacy and engagement with their audience that the globals can't, just because they know them better: they're closer to them. And data is absolutely vital for that. So I would say it's more important for the local guys; it can be their superpower, understanding their audience, because they're closer to them, but they need data for that.

[Ophélie Boucaud – Dataxis] – Thank you very much Paul, I think that's a good conclusion for this session. So thank you everyone for joining us today. As a reminder, the webinar has been recorded and will be available probably tomorrow. That's it for us today; thank you Chris, Paul, Steve and Thomas for being with us, and thank you for sharing your insights on this topic.