Title:
Auditory graphs from denoising real images using fully symmetric convolutional neural networks

Author(s)
Cádiz, Rodrigo F.
Droppelmann, Lothar
Guzmán, Max
Tejos, Cristian
Abstract
Auditory graphs are a very useful way to deliver numerical information to visually impaired users. Several tools have been proposed for chart data sonification, including audible spreadsheets, custom interfaces, interactive tools, and automatic models. In the case of the latter, most models are aimed at extracting contextual information, and few solutions have been proposed for generating an auditory graph directly from the pixels of an image by automatically extracting the underlying data. These kinds of tools can dramatically increase the availability and usability of auditory graphs for the visually impaired community. We propose a deep learning-based approach for the automatic sonification of an image containing a bar or line chart using only pixel information. In particular, we take a denoising approach to this problem, based on a fully symmetric convolutional neural network architecture. Our results show that this approach works as a basis for the automatic sonification of charts directly from the information contained in the pixels of an image.
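As a hedged illustration of the sonification idea described above (not the authors' implementation): once a chart image has been denoised into a clean binary mask of the plotted curve, the underlying series can be read off column by column and mapped to tone frequencies. The function name, the linear pitch mapping, and the frequency range below are all assumptions for the sketch.

```python
import numpy as np

def image_to_frequencies(mask, f_min=220.0, f_max=880.0):
    """Map a denoised binary chart mask (rows x cols, 1 = curve pixel)
    to one tone frequency per column. Row 0 is the top of the image.
    Hypothetical post-processing step, not the paper's exact pipeline."""
    rows, cols = mask.shape
    freqs = np.full(cols, np.nan)  # NaN marks columns with no curve pixel
    for c in range(cols):
        ys = np.nonzero(mask[:, c])[0]
        if ys.size:
            # Convert the top-origin mean row index to a 0..1 chart value.
            value = 1.0 - ys.mean() / (rows - 1)
            # Linear pitch mapping; the paper may use a different scale.
            freqs[c] = f_min + value * (f_max - f_min)
    return freqs

# Tiny 4x3 example: a rising line from bottom-left to top-right.
mask = np.array([[0, 0, 1],
                 [0, 1, 0],
                 [1, 0, 0],
                 [0, 0, 0]])
print(image_to_frequencies(mask))  # rising pitches, one per column
```

The resulting frequency sequence could then be rendered as a sequence of sine tones, which is one common way auditory graphs are realized.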
Date Issued
2021-06
Resource Type
Text
Resource Subtype
Proceedings
Rights Statement
Licensed under Creative Commons Attribution Non-Commercial 4.0 International License.