AudioVisualAnalysis - Exploratory data analysis using multimodal representations - Joint Sonification and Visualization

Making data audible and visible by turning it into auditory and visual information.

Background

In our daily lives, we perceive our surroundings in a multimodal way: we can see, hear, taste, smell, and touch. Nevertheless, most data analysis tools are purely visual, falling short of tapping into the potential of designs that incorporate input from different sensory channels. In particular, preparing information for both the visual and the auditory channel presents promising opportunities. Sight and hearing come with distinct strengths: visual representation of data offers advantages in detecting complex patterns, whereas auditory (non-speech) representation of data, known as sonification, is better suited for revealing temporal patterns (i.e., similar behaviours over time).
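To make the idea of sonification concrete, here is a minimal sketch (not a tool from this project; the frequency range and note duration are arbitrary illustrative choices) of parameter-mapping sonification: each value in a data series is mapped to the pitch of a short sine tone, and the tones are written to a WAV file using only the Python standard library.

```python
import math
import struct
import wave

def sonify(values, duration=0.2, rate=44100, f_min=220.0, f_max=880.0):
    """Parameter-mapping sonification: each data value becomes a short
    sine tone whose frequency scales linearly with the value."""
    lo, hi = min(values), max(values)
    span = (hi - lo) or 1.0  # avoid division by zero for constant series
    samples = []
    for v in values:
        freq = f_min + (v - lo) / span * (f_max - f_min)
        for i in range(int(duration * rate)):
            samples.append(math.sin(2 * math.pi * freq * i / rate))
    return samples

def write_wav(path, samples, rate=44100):
    """Write mono 16-bit PCM samples (floats in [-1, 1]) to a WAV file."""
    with wave.open(path, "wb") as w:
        w.setnchannels(1)
        w.setsampwidth(2)
        w.setframerate(rate)
        w.writeframes(b"".join(struct.pack("<h", int(s * 32767)) for s in samples))

# A rising data series produces a rising pitch contour:
write_wav("series.wav", sonify([1, 2, 3, 5, 8]))
```

Listening to the resulting file conveys the trend of the series over time, which is exactly the kind of temporal pattern the auditory channel is well suited to reveal.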

The research communities dedicated to visualization and sonification for data analysis have strikingly similar objectives: to enhance human comprehension of various types of data. One community accomplishes this by creating visual data representations, while the other relies on auditory (non-speech) data representations (many examples of sonification projects can be found in the Data Sonification Archive: https://sonification.design/). Despite the remarkable overlap in goals, the two communities have largely evolved independently over the past few decades. The aim of this PhD project is to find common ground between the two disciplines and to contribute to the establishment of a unified field of audiovisual analysis, integrating visual and auditory data representations to enhance human understanding of complex datasets.

Project Content and Goals

The visualization and sonification research communities lack a common language for describing and designing their tools. This project aims to bridge this gap and create a theoretical framework that meets the demands of both fields. In addition, the evaluation of integrated designs is a crucial aspect of this project. To achieve this, we leverage:

  • User performance metrics to gain insights into how audiovisual analysis tools can enhance analytical capabilities.
  • User experience metrics to assess how audiovisual displays contribute to user engagement during analysis.
  • Qualitative inspection of results to demonstrate to both communities the benefits of combining sonification with visualization.

Research Questions and Methods

In this research project, we investigate research questions such as:

  • How can we describe a multimodal design space that showcases the benefits of combining visualization with sonification while also offering guidance for concrete projects?
  • How can we develop data analysis tools that integrate auditory and visual perception?
  • Does combining visualization and sonification offer perceptual advantages?

To study these questions, we employ methodologies rooted in, for example, (1) design science research and (2) psychoacoustics. Design science research is concerned with the development of artifacts (e.g., models, methods, and software prototypes) that help solve real-world problems and, at the same time, create new scientific knowledge about those artifacts. Psychoacoustic studies are concerned with psychophysics and how sound is perceived. They reveal to what extent users benefit from a multimodal design when trying to identify meaningful patterns in data.

Results

The project provides a comprehensive state-of-the-art report on academic projects integrating sonification and visualization for data (re)presentation and analysis. This report makes a strong case for intensifying collaboration between experts in data visualization and experts in sonification. Another project outcome is a taxonomy that spans both domains and makes it possible to describe an audiovisual design space. It represents the missing theoretical foundation needed to systematically investigate the possibilities of audiovisual analysis tools. In addition, insights from qualitative and quantitative analyses of the usage of prototypical audiovisual analysis tools inform the community about the potential of integrated designs.

Overall, the project underscores that adopting multimodal designs, combining sonification and visualization, adds significant value to research endeavours from both communities.

Would you like to know more? Feel free to ask!

Junior Researcher
Media Computing Research Group
Institute of Creative\Media/Technologies
Department of Media and Digital Technologies
Location: A - Campus-Platz 1
Funding: GFF (Dissertationscall)
Runtime: 10/01/2021 – 09/30/2024
Status: current
Involved Institutes, Groups and Centers
Institute of Creative\Media/Technologies
Research Group Media Computing