Converting EEG signals into audio?

Does anyone know of a method that has been used to convert EEG signals into sound in the human hearing spectrum? I think it would be fascinating to be able to literally “hear” the oscillations of the brain, and it also might be possible — with some training and practice — to use this for a form of neurofeedback, or even for getting an interesting qualitative perception of one’s brain state.

To do this I guess one would need to increase the frequency by around 10x, which would bring EEG content between 2 and 200 Hz up to 20 Hz – 2 kHz, comfortably inside the standard description of the human hearing range as 20 Hz – 20 kHz.

A crude method might be to take a short sample (say, 1 second), concatenate 10 copies of it, and then speed the result up by 10x so it plays back in the original 1-second window. This would probably produce some nasty artifacts at the repeat boundaries, but perhaps there would be a way to smooth the transitions to avoid producing a discontinuous signal.
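Here's a rough sketch of what I mean in Python (untested; the 250 Hz sample rate and the random placeholder signal are just stand-ins for a real EEG channel):

```python
import numpy as np
from scipy.io import wavfile

fs_eeg = 250                      # assumed EEG sample rate in Hz
shift = 10                        # pitch-up factor
eeg = np.random.randn(fs_eeg)     # placeholder: 1 second of one EEG channel

# Crude "duplicate and speed up" approach: tile the 1-second snippet 10 times,
# then declare the sample rate to be 10x higher so the whole thing still plays
# back in about 1 second, with every frequency component shifted up by 10x.
tiled = np.tile(eeg, shift)

# Apply a short fade-out/fade-in around each repeat boundary to soften the
# click caused by the jump from the end of the snippet back to its start.
fade = 10                         # samples
ramp = np.linspace(0.0, 1.0, fade)
for i in range(1, shift):
    seam = i * len(eeg)
    tiled[seam - fade:seam] *= ramp[::-1]
    tiled[seam:seam + fade] *= ramp

# Normalise to 16-bit range and write a WAV at the boosted sample rate.
audio = np.int16(tiled / np.max(np.abs(tiled)) * 32767)
wavfile.write("eeg_pitched_up.wav", fs_eeg * shift, audio)
```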

Has this been done before? Is there some accepted way to increase the frequency of a signal with minimum distortion? Any suggestions or references appreciated!

This paper might be useful: Audio representations of multi-channel EEG: a new tool for diagnosis of brain disorders

Also: Low End Device to Convert EEG waves to MIDI

Hi NeuroNut,

There has been a lot of work done recently on sonification of EEG signals, for both clinical and artistic purposes. Here are a few links:

Review of different sonification methods
Abstract: https://smartech.gatech.edu/handle/1853/51378
Full text: https://smartech.gatech.edu/bitstream/handle/1853/51378/HermannMeinicke2002.pdf?sequence=1

Complicated algorithm for converting EEG into audio using “bump sonification” (the software tries to identify important features of the EEG signal to convert into audio)

Audio representation of a seizure EEG, as created by two Stanford profs. This one is also heavily filtered, and from the description seems to have used only two electrodes, with the amplitude of each mapped to a different MIDI voice in a different frequency range.

General article on music from brain signals:
Interfacing the Brain Directly with Musical Systems: On Developing Systems for Making Music with Brain Signals
http://cmr.soc.plymouth.ac.uk/publications/mirandabrouse_leonardo_bci.pdf

Good luck, and let us know what you come up with!

Cheers,
Adam

I did some experiments in this area, too. Not much luck. Always sounded bad.

Before trying any fancy algorithms, don’t forget that you can simply play the data back faster…that’ll shift up all the frequencies, at the expense of making the duration much shorter.
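In Python that boils down to a few lines: write the samples out as a WAV file whose header claims a much higher sample rate, and any audio player will do the speed-up for you. (This is just a generic sketch, not a polished tool; the file name, column layout, and 250 Hz rate are assumptions for illustration.)

```python
import numpy as np
from scipy.io import wavfile

fs_eeg = 250     # assumed EEG sample rate (Hz)
speedup = 100    # play back 100x faster: ~8 hours -> ~4.8 minutes, 2 Hz -> 200 Hz

# Load one EEG channel from a plain-text log (assumed format: comment lines
# starting with '%', comma-separated columns, channel 1 in the second column).
data = np.loadtxt("eeg_recording.txt", delimiter=",", comments="%", usecols=1)

# Remove the DC offset and scale into the 16-bit WAV range.
data = data - np.mean(data)
audio = np.int16(data / np.max(np.abs(data)) * 32767)

# Declaring an inflated sample rate is the whole trick: the player runs through
# the samples faster, so every frequency scales up (and the duration shrinks)
# by the same factor.
wavfile.write("eeg_sped_up.wav", fs_eeg * speedup, audio)
```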

If you’re using OpenBCI, you can use my routine “convertToWAV” to convert the OpenBCI file to a WAV file. Then, you can use Audacity, or any other program, to change the sample rate to whatever you desire. I’ve written this all up in this post:

Chip

Hi Chip, thanks for your post. That looks like a useful tool for sure, and a clever way of using an audio program’s processing and analysis tools on EEG data. I’ll try this technique before looking into more complicated real-time approaches.

@Chip_Audette: good write up on your blog, as usual.

I ran across a sped-up EEG-as-audio like you’re describing when I was looking around for the links above — an ~8hr EEG sleep recording that was re-scaled and converted into a 6:37 audio file: https://www.youtube.com/watch?v=tSAozBEhJQA

It’s not musical, but the YouTube description says that you can hear the change to and from REM sleep, which is kind of interesting. There is a distinct change in the audio. The spectrogram ( https://i.imgur.com/9T5D5ub.png ) looks a little strange to my untrained eye though — I’m not sure why the higher frequency noise would be attenuated during REM, and the overall pattern of the recording doesn’t look much like the examples of normal sleep cycles I’ve seen.

We've been playing with similar stuff at BCI Montreal, along with musicmotion.

We used the Muse for signal detection and made our own classifier.

Frequencies are then sent to Ableton for audio (WAV) production.
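(For anyone curious about that hand-off: one common way to push values from Python into a DAW is OSC. The sketch below is just an illustration, not our actual code; the port, addresses, and values are placeholders, and on the Ableton side you'd need something like a Max for Live device or an OSC-to-MIDI bridge listening for them.)

```python
from pythonosc.udp_client import SimpleUDPClient   # pip install python-osc

# Placeholder band-power values; in a real pipeline these would come from
# the EEG classifier and be refreshed several times per second.
band_powers = {"alpha": 0.42, "theta": 0.31, "beta": 0.18}

# The port and OSC address names are arbitrary; whatever listens on the DAW
# side just has to expect the same ones.
client = SimpleUDPClient("127.0.0.1", 9000)
for band, power in band_powers.items():
    client.send_message(f"/eeg/{band}", float(power))
```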

Our results are going to be demoed at an event next month at the SAT.

Drop by if you come to Montreal.

Sweet! You folks are doing all sorts of interesting things.

What EEG features are you converting to audio? Have you posted your classifier code or descriptions of the system online some place?

Well, yeah, that's the fun of it: we just hack on small, cool, and fun projects.

The good thing is that anyone is welcome to join and benefit from the community; that's where we merge with @neurobb and @AdamM.

As for the features, I'm not going into technical details since that's not my job on the team, but it's mostly alpha, theta, and beta relative and absolute averages. We also use accelerometer readings to control some stuff…
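(To unpack what "relative and absolute averages" usually means, without giving away our classifier: here's a generic textbook sketch using a Welch PSD and band integration. The 256 Hz rate and the noise placeholder are assumptions.)

```python
import numpy as np
from scipy.signal import welch

fs = 256  # assumed headband sample rate (Hz)

def band_powers(eeg, fs=fs):
    """Absolute and relative theta/alpha/beta power for one EEG channel."""
    bands = {"theta": (4, 8), "alpha": (8, 13), "beta": (13, 30)}
    freqs, psd = welch(eeg, fs=fs, nperseg=fs * 2)    # PSD from 2-second windows
    df = freqs[1] - freqs[0]                          # frequency bin width
    total = np.sum(psd) * df                          # total power, for the ratios
    out = {}
    for name, (lo, hi) in bands.items():
        mask = (freqs >= lo) & (freqs < hi)
        absolute = np.sum(psd[mask]) * df
        out[name] = {"absolute": absolute, "relative": absolute / total}
    return out

# Example on a placeholder signal (10 seconds of noise):
print(band_powers(np.random.randn(fs * 10)))
```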

After the event we will most probably share the classifier with the community…

Meanwhile, check out some other cool stuff, like the streaming server that Raymundo and co. from INRS built.

You may want to open a channel for this if it doesn't exist already…

Hey!

I just got a Muse and have been working to figure out how to get the signal directly as audio.

I also use Ableton as my primary DAW.

Would you please share a bit more about what you have done?

Thanks

You know, this would make display coding so easy with the Web Audio API; it's worth doing for that alone.

Related thread here,

I think I posted under the wrong topic a few minutes ago (I was on a different machine).

One way is to resample the signal into the audible range (with or without preserving its duration); the other way is to use it as a modulator (perhaps via multiband filters) on some carrier. You could also play around with overlapping the signal from different harmonics. If the signal is present and dense across the spectrum, some FFT peak filtering would do an interesting job.
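To make the modulator-on-a-carrier idea concrete, here is a bare-bones amplitude-modulation sketch (the carrier pitch, sample rates, and the placeholder signal are arbitrary choices; a more serious setup would split the EEG into bands and drive several carriers or filters):

```python
import numpy as np
from scipy.io import wavfile
from scipy.signal import resample

fs_eeg = 250          # assumed EEG sample rate (Hz)
fs_audio = 44100      # output audio sample rate (Hz)
carrier_hz = 220      # arbitrary carrier pitch (A3)

eeg = np.random.randn(fs_eeg * 30)            # placeholder: 30 s of one channel

# Upsample the EEG to audio rate so it can act as a sample-by-sample modulator
# while keeping its original duration (unlike the speed-up approach).
n_audio = int(len(eeg) * fs_audio / fs_eeg)
mod = resample(eeg, n_audio)
mod = mod / np.max(np.abs(mod))               # normalise to [-1, 1]

# Amplitude-modulate a fixed carrier: slow EEG fluctuations become audible
# loudness changes instead of being pitch-shifted into the hearing range.
t = np.arange(n_audio) / fs_audio
carrier = np.sin(2 * np.pi * carrier_hz * t)
audio = (0.5 + 0.5 * mod) * carrier           # modulator shifted to [0, 1]

wavfile.write("eeg_am.wav", fs_audio, np.int16(audio * 32767))
```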

I wouldn’t opt for converting into MIDI, because it’s like converting audio from a symphonic orchestra into a simple MIDI track: a symbolic, not really functional, representation. Over the years I’ve created many dedicated soundscapes for various applications, so I’ve tested various approaches, and MIDI is the worst of them.