The event programme will include talks from invited speakers on the following topics:
- Automatic extraction of (meta)data from TV media assets (visual, audio and/or textual features) and data interoperability between low-level feature data (e.g. MPEG-7) and high-level semantics (e.g. Knowledge Graphs)
- Use of extracted metadata (and its interoperability with external data) in automatic summarization, remixing or repurposing of TV media assets
- Use of extracted metadata (and its interoperability with external data) in in-stream personalisation of TV content (both spatial and temporal modification of text, audio, video)
- Use of extracted metadata (and its interoperability with external data) in the recommendation and scheduling of TV content on different channels and targeting different audiences
We hope to build up the community of academic and industry researchers and practitioners working at the intersection of data and television, opening up new possibilities for knowledge and technology transfer and for the promotion of shared data formats and vocabularies.
Programme
15:00 – 15:10 Welcome & Introduction
15:10 – 15:50 UX Design: Personalisation and adaptive presentation of family video stories
Keynote speaker: Sara Kepplinger (SNIPIN) Sara’s invited talk covers the development of a use case for the personalisation and adaptive presentation of family video stories. The application under development makes use of new forms of interactivity, with a focus on user-centred design. The talk concentrates mainly on the user experience design.
Take a look at the presentation here
15:50 – 16:10 Metadata-driven TV, Content Re-purposing and Re-publication
Lyndon Nixon (ReTV) The ReTV project presents and explains its TV content re-purposing and re-publication tool. The tool uses prediction to suggest topics and events on which to focus a future TV/Web/social publication, suggests archive and broadcast content to re-use, and guides the creation of the publication at the optimal time to maximise the content’s reach and engagement (a toy sketch of this scheduling step follows the presentation link below).
Take a look at the presentation here
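To make the “optimal time” idea concrete, here is a minimal sketch of such a scheduling step: score candidate publication slots with an engagement-prediction model and pick the best one. Everything here is illustrative; predict_engagement is a hypothetical stand-in for a trained model, not ReTV’s actual component.

```python
# Illustrative only: a toy version of "publish at the optimal time".
# predict_engagement() is a hypothetical stand-in for a trained model.
from datetime import datetime, timedelta

def predict_engagement(topic: str, when: datetime) -> float:
    """Toy heuristic favouring early-evening slots; a real system would
    use a model trained on historical reach/engagement data."""
    return max(0.0, 1.0 - abs(when.hour - 19) / 12)

def best_publication_slot(topic: str, start: datetime, days: int = 7) -> datetime:
    """Scan hourly slots over the next `days` days and return the one
    with the highest predicted engagement for the given topic."""
    slots = (start + timedelta(hours=h) for h in range(24 * days))
    return max(slots, key=lambda t: predict_engagement(topic, t))

print(best_publication_slot("elections", datetime(2020, 5, 4)))
```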
16:10 – 16:30 Multi-purpose media metadata to fuel applications? Combinations of data and human curation
Lauri Saarikoski (MeMAD) The MeMAD project develops and tests tools for managing media collections and extracting data from them. What is the user experience of metadata in different media production use cases? How do metadata interchange formats and machine translation help when automated data is collected from multiple sources and applied to different tasks?
Take a look at the presentation here
16:30 – 16:40 Lightning Talks session
- The MeMAD Knowledge Graph by Ismail Harrando, EURECOM
Representing and modelling radio and TV programmes, whether broadcast or archived, presents several challenges, owing to the variety of metadata that can be attached to them and to the range of applications that use this metadata, such as automatic content understanding, search, exploration and recommendation. To tackle this challenge, we have developed the MeMAD Knowledge Graph, which integrates audiovisual content from multiple distributors, producers, channels, genres and languages, and unifies access to the related metadata using the EBUCore ontology promoted by the European Broadcasting Union. Learn more at http://data.memad.eu/ (a query sketch follows this session’s presentation link below).
- FaceRec: an AI-based System for Face Recognition in Video Archives by Pasquale Lisena, EURECOM
In video archives, knowing who appears in a video, and when and where, is useful for improving search and for automatic content understanding tasks such as captioning. It can also reveal interesting patterns and relationships among individuals. The web is a massive source of textual and visual information that can be exploited for detecting known people. In this talk, we introduce FaceRec, an AI-based system for automatically detecting the faces of known people in a video. FaceRec is available as an open-source library and a RESTful API on GitHub (an illustrative API call follows the presentation link below).
Take a look at the presentation here
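As a concrete illustration of what unified access via EBUCore enables, here is a minimal sketch of querying a MeMAD-style knowledge graph from Python. The endpoint URL and the exact class and property names are assumptions; check http://data.memad.eu/ and the EBUCore ontology documentation for the real ones.

```python
# Sketch: querying a MeMAD-style knowledge graph over SPARQL.
# The endpoint URL and EBUCore class/property names are assumptions.
from SPARQLWrapper import SPARQLWrapper, JSON

ENDPOINT = "http://data.memad.eu/sparql"  # assumed endpoint location

QUERY = """
PREFIX ebucore: <http://www.ebu.ch/metadata/ontologies/ebucore/ebucore#>
SELECT ?programme ?title WHERE {
  ?programme a ebucore:TVProgramme ;   # assumed class name
             ebucore:title ?title .
}
LIMIT 10
"""

sparql = SPARQLWrapper(ENDPOINT)
sparql.setQuery(QUERY)
sparql.setReturnFormat(JSON)
for row in sparql.query().convert()["results"]["bindings"]:
    print(row["programme"]["value"], "-", row["title"]["value"])
```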
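Similarly, for FaceRec, a hedged sketch of what calling such a RESTful face-recognition API could look like. The endpoint path, parameter names and response shape are hypothetical; consult the FaceRec repository on GitHub for the actual interface.

```python
# Sketch: calling a FaceRec-style face-recognition REST API on a video.
# Endpoint, parameters and response shape below are hypothetical.
import requests

FACEREC_API = "http://localhost:5000/track"  # assumed local deployment
VIDEO_URL = "http://example.org/archive/news-2020-05-04.mp4"

resp = requests.get(FACEREC_API, params={"video": VIDEO_URL}, timeout=600)
resp.raise_for_status()

# Assumed response: one record per appearance of a recognised person.
for appearance in resp.json():
    print(appearance.get("name"), appearance.get("start"), appearance.get("end"))
```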
16:40 – 16:50 Break
16:50 – 17:10 Global Identity Graph
Kalev H. Leetaru (The GDELT Project) GDELT monitors the world’s news media from nearly every corner of every country, in print, broadcast and web formats, in over 100 languages, every moment of every day.
17:10 – 17:30 Steps towards automation in the production of interactive content
Jo Kent (BBC Newslabs) & Mike Armstrong (BBC R&D) BBC R&D are combining their StoryKit object-based media project with BBC Newslabs’ Slicer project, which uses production metadata to segment content. The aim is to explore ways of creating interactive and personalised content from standard linear productions, initially focusing on the BBC’s News output.
17:30 – 17:50 Overview of the immersive media approach in MPEG-I
Imed Bouazizi (Qualcomm) An overview of the immersive media approach in MPEG-I and the envisioned work on Scene Description (and related standards).
17:50 – 18:00 Lightning Talks session
- TV Programmes are Narrative Sequences by Ahti Ahde, YLE
When analysed as sequences, programmes are combinatorially explosive, meaning that not all possible narratives can be computed (see the small illustration at the end of this session). We need to develop a model of media understanding that mimics the human cognitive process by which a manuscript is creatively encoded into a symbolic presentation of a media product, which the consuming audience then decodes into meanings.
- Eliciting User Preferences for Personalized Explanations for Video Summaries by Oana Inel, Nava Tintarev & Lora Aroyo, TU Delft
Take a look at the presentation here
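A small illustration of the combinatorial explosion mentioned in the narrative-sequences talk, assuming the simplest possible case, where a narrative is just an ordering of n distinct segments:

```python
# Even in the simplest case, n segments admit n! possible orderings,
# so exhaustive computation of all narratives quickly becomes infeasible.
from math import factorial

for n in (5, 10, 20):
    print(f"{n} segments -> {factorial(n):,} possible sequences")
# 5 segments  -> 120
# 10 segments -> 3,628,800
# 20 segments -> 2,432,902,008,176,640,000
```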
18:00 – 18:30 Panel and closing