Sign language! We have all used it while playing, communicating secretly while hiding, or in games that involve guessing. Using it for fun is one thing, but what happens when a person cannot talk? Sign language certainly helps, but isn't it better to communicate our thoughts in words rather than actions? Even if actions speak louder than words, they cannot replace the importance and usefulness of words.
Researchers have now come up with a novel technique that communicates a person's thoughts in words. You think, and the proposed system says it for you: something like thought-to-speech translation, or in essence, thought-based communication.
What is thought-based communication?
Thought-based communication is a concept within the brain-computer interface (BCI) field of research. It is not like brain-to-brain communication, where thoughts are transferred from one brain to another; instead, it communicates thoughts by converting them into words.
Steps of process
1. The person first reads words aloud or listens to them. The brain signals produced during this task form the data for the study; researchers record these signals.
2. This data serves as the training set for a neural network. The network learns to interpret the brain signals and reconstruct sounds that listeners can recognize.
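The two steps above can be sketched as a toy regression: learn a mapping from recorded signal features to audio spectrogram frames. This is a minimal illustration with entirely synthetic data and a single-layer model; the actual studies used deep networks (and a vocoder) on intracranial recordings, so every name and number here is an assumption for demonstration only.

```python
import numpy as np

# Hypothetical sketch: map "brain-signal" features (one row per time
# frame, one column per electrode) to spectrogram frames. All data is
# synthetic, generated from a hidden linear relationship plus noise.
rng = np.random.default_rng(0)
n_frames, n_electrodes, n_freq_bins = 200, 16, 8

true_map = rng.normal(size=(n_electrodes, n_freq_bins))  # unknown to the model
X = rng.normal(size=(n_frames, n_electrodes))            # neural features
Y = X @ true_map + 0.1 * rng.normal(size=(n_frames, n_freq_bins))  # spectrogram

# Step 2: train a single-layer network (linear regression)
# by gradient descent on mean squared error.
W = np.zeros((n_electrodes, n_freq_bins))
lr = 0.05
for _ in range(1000):
    pred = X @ W
    grad = X.T @ (pred - Y) / n_frames
    W -= lr * grad

# After training, the reconstruction error approaches the noise floor.
mse = np.mean((X @ W - Y) ** 2)
print(f"reconstruction MSE: {mse:.3f}")
```

In a real pipeline the predicted spectrogram frames would then be passed to a vocoder to synthesize the audio that listeners evaluate.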
The results of this experimentation have been promising: the system produced successful outcomes about 80% of the time, a good achievement for a start. This is an instance of BCI, i.e. the brain-computer interface. Researchers have brought the seemingly fictional idea of thought-based communication to reality.
The research took place under three different conditions
Researchers from Columbia University and the Hofstra Northwell School of Medicine (both in New York) conducted the first study. They recorded brain signals from the auditory cortexes of five participants, all of whom suffered from epilepsy, a chronic neurological disorder. The data was recorded while the participants listened to words and numbers.
The neural network analyzed this data and constructed the corresponding audio. The output audio of this experiment reached 75% accuracy.
The University of Bremen (Germany), Maastricht University (Netherlands), Northwestern University (Illinois) and Virginia Commonwealth University (Virginia) jointly conducted the second study. For this study, the data came from the speech-planning and motor areas of people undergoing tumor surgeries.
This time the neural network had to interpret signals that were not present in its training dataset, and the accuracy dropped to 40%.
The University of California, San Francisco conducted the third study in this area. This time the participants read texts aloud themselves while the researchers recorded their brain activity. The data came from the speech and motor areas of the brain.
The output was evaluated by 166 people, who had to identify the correct output in multiple-choice questions. This time the accuracy was 80%.
The neural network that does all this work will have to be trained separately for every person who uses it, because the neural signal patterns underlying speech vary from person to person.
The accuracy of the outcome depends on how precise the received data is. For this, there has to be direct contact between the electrodes and the brain.
Working with people who genuinely cannot speak will only increase the difficulty. Right now the experiments involve people who can speak.
The real challenge will begin when researchers try to distinguish thoughts that are meant to be spoken from thoughts that are merely passing through the brain.
The connection between the brain and technology isn't a new idea. The current effort is to improve the existing systems that claim to do this. With time, things might take a better turn and create a future in which every person is loud enough to make their thoughts heard; whether in words or thoughts won't really matter.