Build a neural system that learns a conceptual hierarchy of sound objects, autonomously explores the underlying conceptual space, and presents the retrieved associative concept sequences audio-visually.
This project aims to develop a neuro-inspired system for sound analysis, transformation and synthesis, with the ability to extract and learn temporal, hierarchical, categorical structure from sound input. The system will autonomously create new sound patterns from its internal representations.
The system will use neural networks that pre-process sounds in a bio-inspired way and extract sparse spectro-temporal event streams. These streams will then be analysed for hierarchical patterns, which are stored in associative networks. Autonomous associative dynamics in these networks will allow the system to generate novel event streams and sound patterns that reflect conceptual structures in the training data. Neurally implemented syntactic rules will add a further cognitive level that constrains the creative exploration of this hierarchical conceptual sound space.
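The two core ingredients named above, a sparse spectro-temporal event stream and associative-network dynamics, can be illustrated with a minimal numpy sketch. This is not the project's actual architecture: the frame/hop sizes, the top-k sparsification, and the Hopfield-style memory standing in for the associative networks are all illustrative assumptions.

```python
import numpy as np

def stft_magnitude(signal, frame=128, hop=64):
    """Magnitude spectrogram via a windowed FFT (illustrative front end)."""
    window = np.hanning(frame)
    n_frames = 1 + (len(signal) - frame) // hop
    frames = np.stack([signal[i * hop : i * hop + frame] * window
                       for i in range(n_frames)])
    return np.abs(np.fft.rfft(frames, axis=1))

def sparse_events(spec, k=5):
    """Keep only the k strongest time-frequency bins per frame,
    yielding a sparse spectro-temporal event stream."""
    events = np.zeros_like(spec, dtype=bool)
    idx = np.argsort(spec, axis=1)[:, -k:]
    np.put_along_axis(events, idx, True, axis=1)
    return events

def hopfield_train(patterns):
    """Hebbian outer-product learning; patterns are +/-1 vectors.
    Stands in for the associative networks mentioned in the text."""
    P = np.asarray(patterns, dtype=float)
    W = P.T @ P / len(P)
    np.fill_diagonal(W, 0.0)
    return W

def hopfield_recall(W, probe, steps=10):
    """Synchronous updates drive a noisy probe toward a stored pattern,
    a toy version of autonomous associative dynamics."""
    s = probe.astype(float).copy()
    for _ in range(steps):
        s = np.sign(W @ s)
        s[s == 0] = 1.0
    return s

rng = np.random.default_rng(0)
tone = np.sin(2 * np.pi * 50 * np.linspace(0, 1, 2048))  # toy "sound object"
spec = stft_magnitude(tone)
events = sparse_events(spec)

# Store two hypothetical event patterns, then recall one from a corrupted probe.
pats = np.sign(rng.standard_normal((2, 64)))
W = hopfield_train(pats)
probe = pats[0].copy()
flip = rng.choice(64, size=8, replace=False)
probe[flip] *= -1  # corrupt 8 of 64 bits
recalled = hopfield_recall(W, probe)
print((recalled == pats[0]).mean())
```

In this toy setting the corrupted probe settles back onto the stored pattern; in the proposed system the same kind of settling dynamics, run without an external probe, would generate novel event streams rather than merely restoring stored ones.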
Thomas Wennekers, Sue Denham, Jane Grant, John Matthias, Martin Coath (Plymouth University)