Apple has teamed up with researchers from Carnegie Mellon University’s Human-Computer Interaction Institute to develop technology that lets AI smart speakers learn by listening to their environment. TechCrunch reports:
The system, which they’ve called Listen Learner, relies on acoustic activity recognition to enable a smart device, such as a microphone-equipped speaker, to interpret events taking place in its environment via a process of self-supervised learning, with manual labelling done by one-shot user interactions — such as the speaker asking a person ‘what was that sound?’ after it’s heard the noise enough times to classify it into a cluster.
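The loop described above — cluster recurring sounds without supervision, then ask the user for a label just once per cluster — can be sketched in a few lines. This is a minimal illustration under assumed names and thresholds (`ASK_THRESHOLD`, `MATCH_DISTANCE`, the `ListenLearner` class), not Apple's or CMU's actual implementation:

```python
# Illustrative sketch of a Listen-Learner-style loop: cluster incoming
# sound embeddings, and once a cluster has recurred often enough,
# ask the user once for a one-shot label. All names and thresholds
# are assumptions for demonstration only.
import math

ASK_THRESHOLD = 3      # how many times a sound must recur before asking
MATCH_DISTANCE = 1.0   # max embedding distance to join an existing cluster

class ListenLearner:
    def __init__(self):
        # each cluster: {"centroid": [...], "count": int, "label": str | None}
        self.clusters = []

    def hear(self, embedding, ask_user):
        """Assign a sound embedding to a cluster; return its label if known."""
        best = None
        for c in self.clusters:
            d = math.dist(embedding, c["centroid"])
            if d <= MATCH_DISTANCE and (
                best is None or d < math.dist(embedding, best["centroid"])
            ):
                best = c
        if best is None:  # no nearby cluster: start a new one
            best = {"centroid": list(embedding), "count": 0, "label": None}
            self.clusters.append(best)
        # update the cluster's running centroid and occurrence count
        best["count"] += 1
        n = best["count"]
        best["centroid"] = [
            (c * (n - 1) + x) / n for c, x in zip(best["centroid"], embedding)
        ]
        # one-shot labelling: ask only once, after the sound has recurred enough
        if best["label"] is None and best["count"] >= ASK_THRESHOLD:
            best["label"] = ask_user()  # e.g. the speaker asks "what was that sound?"
        return best["label"]

learner = ListenLearner()
ask = lambda: "microwave beep"          # stand-in for the spoken user reply
print(learner.hear([0.10, 0.20], ask))  # None: heard once, not asked yet
print(learner.hear([0.12, 0.21], ask))  # None: heard twice
print(learner.hear([0.11, 0.19], ask))  # "microwave beep": threshold hit, user asked once
```

The key property is that the model only interrupts the user after it is confident a sound recurs, so labelling effort stays at one question per sound class.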
They note the goal is to allow devices such as Apple’s HomePod to learn sounds by listening to their environment, possibly allowing users to ask the devices questions about the sounds around them. In their published paper, the researchers note that while home speakers can identify human speech, they lack “contextual sensing capabilities”, with only “minimal understanding of what is happening around them”, which in turn limits “their potential to enable truly assistive computational experiences”.
What makes Listen Learner unique is that the user doesn’t need to train the model directly, although a pre-trained model could be installed for a few common noises. Apple and the researchers have put together a video, shot in a kitchen environment, to demonstrate the technology.
You can read the full report here.