Does anyone know of a method that has been used to convert EEG signals into sound in the human hearing spectrum? I think it would be fascinating to be able to literally “hear” the oscillations of the brain, and it also might be possible — with some training and practice — to use this for a form of neurofeedback, or even for getting an interesting qualitative perception of one’s brain state.
To do this, I guess one would need to increase the frequency by around 10x, which would make EEG signals between 2 and 200 Hz audible (shifting them to 20 Hz – 2 kHz), given the standard description of the human hearing range as 20 Hz – 20 kHz.
A crude method might be to take a short sample (say, 1 second), repeat it 10 times, and then downsample by 10x to compress the signal horizontally, so that it plays back in the original duration with every frequency multiplied by 10. This would probably produce some nasty artifacts at the loop seams, but perhaps there would be a way to smooth the transitions to avoid producing a discontinuous signal.
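Something like this, I imagine (a rough sketch; the 250 Hz sample rate and the synthetic 10 Hz “alpha” signal are just stand-in assumptions):

```python
import numpy as np
from scipy.signal import decimate

fs = 250                              # assumed EEG sample rate (Hz)
t = np.arange(fs) / fs                # one second of samples
eeg = np.sin(2 * np.pi * 10 * t)      # stand-in for a 10 Hz alpha rhythm

tiled = np.tile(eeg, 10)              # repeat the chunk 10 times (10 s)
shifted = decimate(tiled, 10)         # back to 1 s of samples; played at
                                      # fs, every frequency is now 10x higher

# Caveats: decimate's anti-alias filter discards content above fs/20
# (12.5 Hz here), and the loop seams still click unless cross-faded.
```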
Has this been done before? Is there some accepted way to increase the frequency of a signal with minimum distortion? Any suggestions or references appreciated!
A complicated algorithm for converting EEG into audio using “bump sonification” (the software tries to identify important features of the EEG signal and convert them into audio)
An audio representation of a seizure EEG, created by two Stanford profs. This one is also heavily filtered, and from the description it seems to have used only two electrodes, with the amplitude of each mapped to a different MIDI voice in a different frequency range.
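If anyone wants to play with that kind of amplitude-to-MIDI mapping, a toy sketch using the mido library might look like this (the window length, note number, and velocity scaling are my own guesses, not what the Stanford folks actually did):

```python
import numpy as np
import mido

fs = 250
eeg = np.random.randn(fs * 10)            # stand-in for one electrode's data

mid = mido.MidiFile()
track = mido.MidiTrack()
mid.tracks.append(track)

# One note per 0.5 s window; the window's mean amplitude sets the velocity.
# 480 ticks = one beat = 0.5 s at mido's default tempo of 120 bpm.
win = fs // 2
peak = np.abs(eeg).max()
for i in range(0, eeg.size - win + 1, win):
    amp = np.abs(eeg[i:i + win]).mean()
    vel = int(np.clip(127 * amp / peak, 1, 127))
    track.append(mido.Message('note_on', note=60, velocity=vel, time=0))
    track.append(mido.Message('note_off', note=60, velocity=0, time=480))
mid.save('eeg_channel.mid')
```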
I did some experiments in this area, too. Not much luck. Always sounded bad.
Before trying any fancy algorithms, don’t forget that you can simply play the data back faster…that’ll shift up all the frequencies, at the expense of making the duration much shorter.
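For example, with Python and the sounddevice library (the 250 Hz EEG rate and the 64x factor here are just assumptions; any playback tool works):

```python
import numpy as np
import sounddevice as sd

fs_eeg = 250                                   # assumed EEG sample rate (Hz)
eeg = np.random.randn(fs_eeg * 60)             # stand-in for 60 s of EEG

# Normalize to the [-1, 1] range audio output expects, then play the
# samples 64x faster than they were recorded: a 10 Hz alpha rhythm
# comes out at 640 Hz, and the whole minute lasts under a second.
audio = (eeg / np.abs(eeg).max()).astype(np.float32)
sd.play(audio, samplerate=fs_eeg * 64)         # 16 kHz, a standard rate
sd.wait()
```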
If you’re using OpenBCI, you can use my routine “convertToWAV” to convert the OpenBCI file to a WAV file. Then, you can use Audacity, or any other program, to change the sample rate to whatever you desire. I’ve written this all up in this post:
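The gist of the conversion is just writing the samples into a WAV container (a rough sketch, with a placeholder file name and an assumed 250 Hz rate):

```python
import numpy as np
from scipy.io import wavfile

fs_eeg = 250                                # assumed OpenBCI sample rate (Hz)
eeg = np.loadtxt('eeg_channel.txt')         # placeholder single-channel export

# Scale to 16-bit PCM and write the WAV at the true EEG rate; Audacity's
# sample-rate change can then speed it up into the audible range.
pcm = np.int16(32767 * eeg / np.abs(eeg).max())
wavfile.write('eeg_audio.wav', fs_eeg, pcm)
```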
Hi Chip, thanks for your post. That looks like a useful tool for sure, and a clever way of using an audio program’s processing and analysis tools on EEG data. I’ll try this technique before looking into more complicated real-time approaches.
@Chip_Audette: good write up on your blog, as usual.
I ran across a sped-up EEG-as-audio example like you’re describing when I was looking around for the links above, an ~8 hr EEG sleep recording that was re-scaled and converted into a 6:37 audio file: https://www.youtube.com/watch?v=tSAozBEhJQA
It’s not musical, but the YouTube description says that you can hear the change to and from REM sleep, which is kind of interesting. There is a distinct change in the audio. The spectrogram ( https://i.imgur.com/9T5D5ub.png ) looks a little strange to my untrained eye though — I’m not sure why the higher frequency noise would be attenuated during REM, and the overall pattern of the recording doesn’t look much like the examples of normal sleep cycles I’ve seen.
Well, yeah, that’s the fun of it: we just hack on small, cool, and fun projects.
The good thing is that anyone is welcome to join and benefit from the community; that is where we merge with @neurobb @AdamM
As for the features, I’m not going into technical details since it is not my job on the team, but they are mostly alpha, theta, and beta relative and absolute averages; we also use accelerometer readings to control some things…
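For anyone curious what such features typically look like, here is a generic sketch of absolute and relative band power (the band edges and window settings are textbook defaults, not our team’s exact code):

```python
import numpy as np
from scipy.signal import welch

def band_powers(eeg, fs=250):
    """Absolute and relative power in the classic EEG bands (generic sketch)."""
    freqs, psd = welch(eeg, fs=fs, nperseg=2 * fs)      # 2 s Welch segments
    bands = {'theta': (4, 8), 'alpha': (8, 13), 'beta': (13, 30)}
    total = np.trapz(psd, freqs)
    out = {}
    for name, (lo, hi) in bands.items():
        mask = (freqs >= lo) & (freqs < hi)
        absolute = np.trapz(psd[mask], freqs[mask])
        out[name] = {'absolute': absolute, 'relative': absolute / total}
    return out
```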
After the event we will most probably share the classifier with the community…
Meanwhile, check out some other cool stuff, like the streaming server Raymundo and co. built at INRS.
You may want to open a channel for this if it doesn’t exist already…
I think I wrote under the wrong topic a few minutes ago (I was on a different machine).
One way is to resample the signal into the audible range (with or without speed preservation); the other is to use it as a modulator (perhaps via multiband filters) on some carrier. You could also play around with overlapping copies of the signal at different harmonics. If the signal is present and dense across the spectrum, some FFT peak filtering would do an interesting job.
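To illustrate the modulator idea, a minimal sketch that uses the rectified, upsampled EEG as an amplitude envelope on a fixed sine carrier (the 220 Hz carrier and the sample rates are arbitrary assumptions):

```python
import numpy as np
from scipy.signal import resample

fs_eeg, fs_audio, secs = 250, 44100, 5
eeg = np.random.randn(fs_eeg * secs)          # stand-in for 5 s of EEG

# Upsample the EEG to audio rate, rectify it into an envelope, and use
# it to amplitude-modulate a 220 Hz sine: the slow EEG dynamics become
# audible as loudness fluctuations of the tone.
env = resample(eeg, fs_audio * secs)
env = np.abs(env) / np.abs(env).max()
t = np.arange(env.size) / fs_audio
audio = env * np.sin(2 * np.pi * 220 * t)
```

A multiband version would band-pass the EEG first and drive a separate carrier with each band’s envelope.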
I wouldn’t opt for converting into MIDI, because it’s like converting audio from a symphonic orchestra into a simple MIDI track: a symbolic, not really functional, representation. I have created many dedicated soundscapes for various applications over the years, so I have tested various approaches, and MIDI is the worst of them.