Sydney Researchers Unveil BrainGPT's Thought-to-Text Tech



by FARUK IMAMOVIC

© Youtube/Thomas Do

Researchers at the University of Technology Sydney's GrapheneX-UTS Human-centric Artificial Intelligence Centre have achieved a groundbreaking advance at the intersection of neuroscience and AI. Their latest work transforms thoughts directly into words on a screen, a significant step toward what could be described as literal mind reading.

Pioneering Efforts in Neural Decoding

Distinguished Professor Chin-Teng Lin, Director of the GrapheneX-UTS HAI Centre, explained the novelty of this research.

“This research represents a pioneering effort in translating raw EEG waves directly into language, marking a significant breakthrough in the field,” he stated.

The research, which Lin led, is the first to incorporate discrete encoding techniques in the brain-to-text translation process. This innovative approach integrates large language models, opening new frontiers in both neuroscience and artificial intelligence.
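In rough terms, "discrete encoding" means converting continuous brainwave signals into a finite vocabulary of codes that a language model can handle much like text tokens. The sketch below is a minimal, hypothetical illustration of that idea using simple vector quantization; the shapes, codebook size, and random data are assumptions for illustration and do not reflect DeWave's actual implementation.

```python
import numpy as np

# Hypothetical sketch: quantize continuous EEG feature vectors into discrete
# codebook indices that a language model could treat as token-like IDs.
# All shapes and the random codebook are stand-ins, not DeWave's real code.

rng = np.random.default_rng(0)

# Toy "EEG features": 50 time windows, each summarized as a 16-dimensional vector.
eeg_features = rng.normal(size=(50, 16))

# A trained system would learn this codebook; here a random one stands in.
codebook = rng.normal(size=(512, 16))

def quantize(features, codebook):
    """Map each feature vector to the index of its nearest codebook entry."""
    dists = ((features[:, None, :] - codebook[None, :, :]) ** 2).sum(axis=-1)
    return dists.argmin(axis=1)

discrete_codes = quantize(eeg_features, codebook)
print(discrete_codes[:10])  # a sequence of integer codes, analogous to text tokens
```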

During the study, spotlighted at the NeurIPS conference, an AI model called DeWave projected the words of silently read passages onto a screen by interpreting participants’ brainwaves. This technology stands out in its field because it requires neither brain implants nor access to a full MRI machine to function.

Unlike its predecessors, which often depend on additional inputs such as eye-tracking software, DeWave can operate with or without such extras. The practicality of this technology is underscored by its simple requirement: users need only wear a cap that records brain activity via an electroencephalogram (EEG).

Despite the noisier signal compared with data from implants, the technology showed promising results in trials. The system's accuracy, measured with the BLEU metric, currently sits at approximately 0.4, leaving significant room for improvement.
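For context, BLEU scores range from 0 to 1 and reward overlap between a machine-produced sentence and a human reference. The toy example below computes only the unigram-precision component (BLEU-1) on invented sentences; full BLEU also weighs longer n-grams and applies a brevity penalty. The sentences are made up for illustration and are not drawn from the study.

```python
from collections import Counter

# Invented sentences for illustration only -- not examples from the DeWave study.
reference = "the patient was taken to the hospital".split()
decoded   = "the person was brought to the clinic".split()

# Clipped unigram precision, the core ingredient of BLEU-1: count decoded words
# that also appear in the reference, clipping each word at its reference count.
ref_counts = Counter(reference)
overlap = sum(min(count, ref_counts[word]) for word, count in Counter(decoded).items())
bleu1 = overlap / len(decoded)

print(f"BLEU-1 = {bleu1:.2f}")  # 0.57 for these toy sentences
```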

Future Prospects and Improvements

Yiqun Duan, the first author of the study, acknowledged the current limitations, noting that the model is more adept at matching verbs than nouns.

"When it comes to nouns, we saw a tendency towards synonymous pairs rather than precise translations,” Duan explained.

The researchers believe, however, that such mismatches can be corrected, since semantically similar words may produce similar brain wave patterns. The team is optimistic about raising the model's accuracy to a level comparable with conventional language translation programs, potentially a BLEU score of around 0.9.

This optimism is bolstered by the scale of their study, which involved 29 participants – significantly more than many other decoding tech trials.

"Despite the challenges, our model yields meaningful results,” said Duan, highlighting the promising future of this innovative technology in transforming the way we communicate.