MIT researchers have developed AlterEgo, a computer interface that transcribes words the user verbalises internally but does not say aloud.
Electrodes in the wearable device, which works with an associated computing system, pick up neuromuscular signals in the jaw and face that are triggered by internal verbalisations, that is, by saying words in your head. A machine-learning system, trained to associate particular signals with particular words, then interprets those signals.
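The article only says that a machine-learning system is trained to associate particular signals with particular words; the details of that system are not described. As a purely illustrative sketch of the idea, the toy code below trains a nearest-centroid classifier on synthetic "electrode" feature vectors, one centroid per word. The word list, feature shapes, and classifier choice are all assumptions, not the researchers' actual method.

```python
import math
import random

random.seed(0)

# Hypothetical vocabulary; the real system's vocabulary is not given here.
WORDS = ["up", "down", "left", "right"]

def fake_signal(word):
    """Synthetic 4-channel feature vector standing in for electrode readings."""
    base = WORDS.index(word)
    return [base + random.gauss(0, 0.1) for _ in range(4)]

def train(samples):
    """Average each word's feature vectors into a per-word centroid."""
    centroids = {}
    for word in WORDS:
        vecs = [v for w, v in samples if w == word]
        centroids[word] = [sum(col) / len(col) for col in zip(*vecs)]
    return centroids

def classify(centroids, vec):
    """Return the word whose centroid is nearest (Euclidean) to the signal."""
    return min(centroids, key=lambda w: math.dist(centroids[w], vec))

training = [(w, fake_signal(w)) for w in WORDS for _ in range(20)]
model = train(training)
print(classify(model, fake_signal("left")))
```

In practice a system like this would use far richer signal features and a stronger model, but the training loop is the same in spirit: collect labelled examples of each silently spoken word, fit a classifier, then map new signals to the nearest known word.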
Bone-conduction headphones transmit vibrations through the bones of the face to the inner ear. Because they don't obstruct the ear canal, the system can send information to the user without interfering with the user's auditory experience.
The silent-computing system can, for example, be used to silently report an opponent's moves in a chess game; the device then just as silently provides computer-recommended responses to the user.
An Intelligence-Augmentation device
Arnav Kapur, a graduate student at the MIT Media Lab who led the development of AlterEgo, said: “The motivation for this was to build an IA device — an intelligence-augmentation device. Our idea was: Could we have a computing platform that’s more internal, that melds human and machine in some ways and that feels like an internal extension of our own cognition?”
Kapur’s thesis advisor, Pattie Maes, a professor of media arts and sciences, explained that “we basically can’t live without our cellphones, our digital devices. But at the moment, the use of those devices is very disruptive. If I want to look something up that’s relevant to a conversation I’m having, I have to find my phone and type in the passcode and open an app and type in some search keyword, and the whole thing requires that I completely shift attention from my environment and the people that I’m with to the phone itself. So, my students and I have for a very long time been experimenting with new form factors and new types of experience that enable people to still benefit from all the wonderful knowledge and services that these devices give us, but do it in a way that lets them remain in the present.”
The MIT researchers hope to build applications with more expansive vocabularies. The data collected so far is promising, and Kapur says, “I think we’ll achieve full conversation some day.”