A new mind-reading device means people can silently type on their computer using nothing but thoughts – and it’s accurate 90 per cent of the time.
Instead of communicating with smart devices by saying ‘Ok Google’ or ‘Hey Siri’, the headset silently interprets what users are thinking, giving them ‘superpowers’, researchers say.
When people think about verbalising something, the brain sends signals to facial muscles – even if nothing is said aloud.
The device has sensors at seven key points along the cheek, jaw and chin that pick up signals which can be recognised as words – and the system can even talk back once it has processed them.
Other companies, such as Elon Musk’s Neuralink, are also developing ‘Matrix’ style computer-brain interfaces to give people advanced mental abilities.
Currently the ‘AlterEgo’ device, which was created by researchers from MIT Media Lab, can recognise digits 0 to 9 and has a vocabulary of around 100 words.
‘It gives you superpowers,’ graduate student Arnav Kapur, who created the device with Pattie Maes, told New Scientist.
The system consists of a wearable device and an associated computing system which is directly linked to a program that can query Google.
Electrodes in the device pick up neuromuscular signals in the jaw and face which are triggered when users say words ‘in their head’.
The signals are fed to a machine-learning system that has been trained to correlate particular signals with particular words.
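The article does not describe the machine-learning system in detail, but the idea of correlating an electrode signal pattern with a word can be illustrated with a minimal nearest-centroid sketch. Everything here – the word list, the synthetic signal generator and the feature counts – is hypothetical, standing in for real neuromuscular recordings:

```python
import math
import random

random.seed(0)

WORDS = ["add", "multiply", "reply", "select"]  # hypothetical vocabulary subset
N_ELECTRODES = 7  # one feature per reliable facial location


def synthetic_signal(word_idx):
    """Stand-in for a neuromuscular reading: each word produces a distinct
    baseline pattern across the 7 electrodes, plus measurement noise."""
    return [math.sin(word_idx + e) + random.gauss(0, 0.1)
            for e in range(N_ELECTRODES)]


# "Training": average several noisy readings per word into a centroid,
# i.e. learn which signal pattern correlates with which word.
centroids = {
    w: [sum(col) / 10 for col in zip(*[synthetic_signal(i) for _ in range(10)])]
    for i, w in enumerate(WORDS)
}


def classify(signal):
    """Return the word whose learned centroid is nearest to the signal."""
    def dist(c):
        return sum((a - b) ** 2 for a, b in zip(signal, c))
    return min(WORDS, key=lambda w: dist(centroids[w]))


print(classify(synthetic_signal(WORDS.index("reply"))))
```

The real system would use far richer features and a stronger model, but the principle is the same: map a vector of electrode readings to the most likely vocabulary word.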
The device also includes a pair of bone-conduction headphones, which transmit vibrations through the bones of the face to the inner ear.
These headphones do not obstruct the ear canal so users can still hear information without their conversations being interrupted.
‘The motivation for this was to build an IA device — an intelligence-augmentation device,’ said Mr Kapur.
‘Our idea was: Could we have a computing platform that’s more internal, that melds human and machine in some ways and that feels like an internal extension of our own cognition?’
This silent-computing system means users can communicate with Google without being detected by anyone else.
To start with, researchers found which part of the face was the source of the most reliable neuromuscular signals.
They did this by asking people to subvocalise the same series of words four times, each time with an array of 16 electrodes placed at different facial locations.
They found signals from seven particular locations were consistently able to distinguish subvocalised words.
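One way to picture that selection step: score each candidate electrode site by how much its readings vary *between* words, and keep the sites that best separate them. This sketch is purely illustrative – the signal model and the even/odd split between informative and noisy sites are invented assumptions, not the researchers' method:

```python
import random

random.seed(1)

N_LOCATIONS = 16   # candidate electrode sites tested
N_KEEP = 7         # sites that proved most reliable
N_WORDS = 5        # size of the hypothetical test word list
REPEATS = 4        # each word subvocalised four times, per the study


def reading(word, location):
    """Hypothetical signal: even-numbered sites carry a word-dependent
    component (informative); odd-numbered sites are mostly noise."""
    informative = location % 2 == 0
    base = word * 0.5 if informative else 0.0
    return base + random.gauss(0, 0.2)


def score(location):
    """Between-word variance of the mean reading at this site: a high
    spread means the site helps distinguish subvocalised words."""
    means = []
    for w in range(N_WORDS):
        samples = [reading(w, location) for _ in range(REPEATS)]
        means.append(sum(samples) / REPEATS)
    grand = sum(means) / N_WORDS
    return sum((m - grand) ** 2 for m in means) / N_WORDS


best = sorted(range(N_LOCATIONS), key=score, reverse=True)[:N_KEEP]
print(sorted(best))  # the informative (even-numbered) sites dominate
```

Ranking locations this way recovers the planted informative sites, mirroring how the researchers narrowed 16 candidate positions down to seven reliable ones.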
Using this information, MIT researchers created a prototype that wraps around the back of the neck like a telephone handset.
It touches the face at seven locations on either side of the mouth and along the jaw.
They then collected data on a few computational tasks with limited vocabularies – around 20 words each.
One task was arithmetic; the other was part of a chess game. The prototype device could complete these tasks with 90 per cent accuracy.
In one experiment, researchers used the system to silently report an opponent's moves during a chess game, and the device suggested responding moves.