TailoredMedia

Tailored and Agile Enrichment and Linking for Semantic Description of MultiMedia

Background 

Audiovisual media have become a dominant means of communication, in both traditional and social media. However, the established practices for documenting and indexing multimedia content (for example, in media monitoring and archiving) do not allow for fine-grained description (e.g., at the scene and object level) and do not scale to the ever-increasing amount of content. 
While content of all modalities (text from newsfeeds, the web, production information, etc., as well as audio, images and video) is now available in digital form, the potential benefits of digitisation are not yet fully leveraged: different content sources are often processed independently, without being connected to other sources or to external information, and contextual information is not exploited to guide the annotation process and improve its robustness.

Project content

TailoredMedia aims to leverage recent advances in the automatic analysis of visual content: Artificial Intelligence (AI)-based methods are used to support metadata extraction and semantic enrichment for use cases in media monitoring, journalism and archiving. Building on state-of-the-art methods for visual analysis tasks (e.g., object detection, scene classification, face recognition, person re-identification), the project researches AI-based methods for multimodal information fusion as well as context-aware AI methods. The analysis tools are backed by a knowledge graph that integrates semantic information from different sources (including linked open data) and modalities, and provides an interoperable representation of contextual knowledge. This makes it possible to enrich textual and media content descriptions with semantic metadata, and to support discovery and reasoning over the knowledge graph.
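The enrichment idea described above can be illustrated with a toy triple store: analysis results (e.g., a recognized person in a scene) become subject-predicate-object statements, and linking them to external identifiers makes them discoverable. All URIs, predicate names and entities below are made-up illustrations, not the project's actual vocabulary; the real system uses established ontologies and a full knowledge-graph store.

```python
# Minimal sketch of knowledge-graph-based enrichment: media segments,
# detected entities and (hypothetical) linked-open-data identifiers are
# stored as triples and queried for discovery.
triples = set()

def add(s, p, o):
    """Add one subject-predicate-object statement to the graph."""
    triples.add((s, p, o))

def query(s=None, p=None, o=None):
    """Pattern match over the triple set; None acts as a wildcard."""
    return [t for t in triples
            if (s is None or t[0] == s)
            and (p is None or t[1] == p)
            and (o is None or t[2] == o)]

# A video scene is enriched with a face-recognition result, and the
# recognized person is linked to an external entity (URIs are invented).
add("video:123#scene4", "shows", "person:AnnaMusterfrau")
add("person:AnnaMusterfrau", "sameAs", "http://example.org/lod/Q0000")
add("video:123#scene4", "depictsLocation", "place:Vienna")

# Discovery: which scenes show this person?
print(query(p="shows", o="person:AnnaMusterfrau"))
```

In the project itself, such statements would be expressed with standard vocabularies and queried via SPARQL rather than this ad-hoc pattern matcher.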

While aiming at scalability, TailoredMedia researches the design of workflows that allow human operators to stay in control of the process. This includes active learning approaches that request human intervention when automatic methods are not confident about a result, or when information from different modalities or from contextual knowledge is contradictory. The methods are designed to provide provenance information that makes it possible to assess the reliability of information, and to provide explanations for classifications. This will enable operators to understand possible bias in the training data and the need for further training. The support of active and online learning approaches will also enable few-shot learning, i.e., efficiently learning new classes from very limited amounts of labelled data (e.g., 5–10 samples).
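The human-in-the-loop routing described above can be sketched as a simple decision rule: an annotation is accepted automatically only when the classifier is confident and the modalities agree, and is otherwise queued for human review. The threshold value and the two-modality agreement check are illustrative assumptions, not the project's actual policy.

```python
# Sketch of confidence-gated routing between automatic annotation and
# human review (assumed threshold; real systems would calibrate it).
THRESHOLD = 0.8

def route_annotation(visual, audio_text):
    """Decide how to handle an annotation.

    `visual` and `audio_text` are (label, confidence) pairs produced by
    two independent analysis modules for the same media segment.
    Returns ("auto_accept", label) or ("human_review", reason).
    """
    v_label, v_conf = visual
    a_label, _a_conf = audio_text
    if v_conf < THRESHOLD:
        return ("human_review", "low confidence")
    if v_label != a_label:
        return ("human_review", "modalities disagree")
    return ("auto_accept", v_label)

# Confident and consistent across modalities: accepted automatically.
print(route_annotation(("press conference", 0.93), ("press conference", 0.88)))
# Confident but contradictory: escalated to a human operator.
print(route_annotation(("press conference", 0.93), ("parliament", 0.90)))
```

The escalated cases are exactly the ones worth labelling, which is what makes such a loop suitable for active learning: human effort is spent where the model is least reliable.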
To enable interoperability and avoid vendor lock-in, the TailoredMedia analysis tools are deployed as microservices. This makes it possible to use the services on premises as well as in private or public cloud infrastructures. 
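As a minimal sketch of this deployment style, the following wraps a single analysis task behind a small HTTP endpoint using only the Python standard library; the same process can then run on premises or in a container on any cloud platform. The endpoint path, the JSON contract and the dummy classifier are assumptions for illustration, not the project's actual API.

```python
import json
import threading
import urllib.request
from http.server import BaseHTTPRequestHandler, HTTPServer

def classify_scene(media_uri):
    """Stand-in for a real scene-classification model (returns a dummy result)."""
    return {"media": media_uri, "label": "outdoor", "confidence": 0.91}

class AnalysisHandler(BaseHTTPRequestHandler):
    """One analysis task exposed as a microservice endpoint."""

    def do_POST(self):
        if self.path != "/v1/scene":  # hypothetical endpoint path
            self.send_error(404)
            return
        length = int(self.headers.get("Content-Length", 0))
        request = json.loads(self.rfile.read(length))
        body = json.dumps(classify_scene(request["media"])).encode()
        self.send_response(200)
        self.send_header("Content-Type", "application/json")
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)

    def log_message(self, fmt, *args):
        pass  # keep the demo output quiet

if __name__ == "__main__":
    # Port 0 lets the OS pick a free port; a real deployment would use a
    # fixed, configured port behind a service registry or gateway.
    server = HTTPServer(("127.0.0.1", 0), AnalysisHandler)
    port = server.server_address[1]
    threading.Thread(target=server.serve_forever, daemon=True).start()
    req = urllib.request.Request(
        f"http://127.0.0.1:{port}/v1/scene",
        data=json.dumps({"media": "archive://clip/42"}).encode(),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        print(json.loads(resp.read()))
    server.shutdown()
```

Because each analysis task sits behind its own small HTTP contract, individual services can be replaced or scaled independently, which is what avoids vendor lock-in.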

Contribution of St. Pölten UAS

Researchers at St. Pölten UAS contribute their expertise in solution co-design, in building the prototypes and the platform demonstrator, and in evaluating the prototypes. 
 

Partners
  • JOANNEUM RESEARCH Forschungsgesellschaft mbH (Lead)
  • Österreichischer Rundfunk
  • Technisches Museum Wien mit Österreichischer Mediathek
  • RedLink GmbH
Funding
FFG – IKT der Zukunft
Runtime
11/01/2020 – 10/31/2022
Status
current
Involved Institutes, Groups and Centers
Institute of Creative\Media/Technologies
Research Group Media Computing