MIT’s AlterEgo might let you listen to the voice in your staff’s head
EVER wondered what the people on your team really think about your new idea? Privacy and workplace laws aside, a new device developed at the Massachusetts Institute of Technology (MIT) might solve some very real problems in how we interact with our digital devices.
A team of researchers at the institute has developed a system, dubbed AlterEgo, made up of a wearable device and a computing system. MIT's News Office explained how the device actually works:
Electrodes in the device pick up neuromuscular signals in the jaw and face that are triggered by internal verbalizations — saying words “in your head” — but are undetectable to the human eye. The signals are fed to a machine-learning system that has been trained to correlate particular signals with particular words.
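The pipeline described above, in which sensor signals are classified into words, can be illustrated with a toy example. The sketch below uses a simple nearest-centroid classifier over two-dimensional feature vectors; the words, signal patterns, and feature dimension are all invented for illustration, and the real AlterEgo system uses a neural network trained on recorded neuromuscular data.

```python
import math
import random

# Toy illustration of "signals -> words" classification.
# All values here are synthetic; this is NOT the MIT system's model.

def centroid(vectors):
    """Element-wise mean of a list of equal-length feature vectors."""
    n = len(vectors)
    return [sum(v[i] for v in vectors) / n for i in range(len(vectors[0]))]

def train(samples):
    """samples: dict mapping word -> list of feature vectors.
    Returns one centroid per word."""
    return {word: centroid(vecs) for word, vecs in samples.items()}

def classify(model, vector):
    """Return the word whose centroid is nearest (Euclidean) to vector."""
    def dist(c):
        return math.sqrt(sum((a - b) ** 2 for a, b in zip(c, vector)))
    return min(model, key=lambda w: dist(model[w]))

# Synthetic "calibration" data: each word gets a distinct signal pattern
# plus a little Gaussian noise, standing in for electrode readings.
random.seed(0)
patterns = {"add": [1.0, 0.0], "subtract": [0.0, 1.0], "multiply": [1.0, 1.0]}
samples = {
    w: [[x + random.gauss(0, 0.05) for x in p] for _ in range(20)]
    for w, p in patterns.items()
}
model = train(samples)
print(classify(model, [0.98, 0.03]))  # a noisy "add"-like signal -> "add"
```

The point of the sketch is only the shape of the problem: a short calibration phase produces per-word signal profiles, after which unseen signals are matched to the closest known word.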
“The motivation for this was to build an IA device — an intelligence-augmentation device. Our idea was: Could we have a computing platform that’s more internal, that melds human and machine in some ways and that feels like an internal extension of our own cognition?” said Arnav Kapur, a graduate student at the MIT Media Lab, who led the development of the new system.
Although the concept behind the device dates back to the 19th century and was seriously investigated in the 1950s and 1960s, it took modern technology to make such a device unobtrusive and reasonably accurate at “transcribing your thoughts”.
“We basically can’t live without our cellphones, our digital devices,” said Pattie Maes, a professor of media arts and sciences at MIT and Kapur’s thesis advisor.
Introducing AlterEgo: a #wearable that interfaces with your smart phone through internal vocalization, or "silent speech," created by @FluidInterfaces researcher Arnav Kapur. Learn more on MIT News: https://t.co/NycAiDDRbv. pic.twitter.com/39ruGceHoX
— MIT Media Lab (@medialab) April 5, 2018
“But at the moment, the use of those devices is very disruptive. If I want to look something up that’s relevant to a conversation I’m having, I have to find my phone and type in the passcode and open an app and type in some search keyword, and the whole thing requires that I completely shift attention from my environment and the people that I’m with to the phone itself. So, my students and I have for a very long time been experimenting with new form factors and new types of experience that enable people to still benefit from all the wonderful knowledge and services that these devices give us, but do it in a way that lets them remain in the present,” elaborated Maes.
Using the prototype wearable interface, the researchers conducted a usability study in which 10 subjects each spent about 15 minutes customizing an arithmetic application to their own neurophysiology, then spent another 90 minutes using it to execute computations, the News Office revealed. In that study, the system achieved an average transcription accuracy of about 92 percent.
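To make the reported figure concrete, transcription accuracy of this kind is typically the fraction of silently spoken words the system recognizes correctly. The sketch below shows that metric on made-up word lists; the words are invented for illustration and are not from the MIT study.

```python
# Hypothetical sketch of a word-level transcription accuracy metric.
# The word sequences below are invented examples, not study data.

def transcription_accuracy(spoken, transcribed):
    """Fraction of positions where the transcribed word matches
    the word the user actually subvocalized."""
    correct = sum(1 for s, t in zip(spoken, transcribed) if s == t)
    return correct / len(spoken)

spoken      = ["three", "plus", "four", "times", "two"]
transcribed = ["three", "plus", "four", "nine",  "two"]
print(transcription_accuracy(spoken, transcribed))  # 4 of 5 correct -> 0.8
```

Under a metric like this, the study's roughly 92 percent average means the system misrecognized fewer than one word in ten during the arithmetic task.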
However, Kapur believes the system’s performance will improve as it accumulates more training data through everyday use.
Thad Starner, a professor in Georgia Tech’s College of Computing, believes there is real potential in the new device. Although it might not let you tap into the minds of your colleagues and teammates, Starner expects the concept to help people in high-noise environments such as printing presses, as well as in military special operations.