Brian2Hears meeting notes
Points to be addressed:
- Strategy for buffering/FIR filtering
- Scheduling/clocks
  - sample rates
  - relevant for control filterbank/feedback (mostly scheduling issues)
- Sounds
  - OnlineSounds idea
  - multichannel audio (i.e. multichannel TimedArray)
  - implement multiple backends for sound playing
- Misc
  - HRTF support
  - the .process() method
  - NotImplementedYet
In brian1hears, Filterbanks operate on buffers. This means that the result of filtering is always "precomputed" for a given number of samples. This is desirable for FIR filtering, for which the most efficient computation methods are FFT-based and necessarily operate on buffers.
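To make the buffering point concrete, here is a minimal sketch (plain NumPy, not the actual brian1hears implementation) of FFT-based FIR filtering via the standard overlap-add method: the input is processed buffer by buffer, so the filter output is only ever available a whole buffer at a time.

```python
import numpy as np

def fir_fft_buffered(x, h, buffer_size=1024):
    """Overlap-add FIR filtering: split x into buffers, FFT-convolve each
    buffer with the impulse response h, and sum the overlapping tails."""
    # next power of two that holds a full linear convolution of one buffer
    n_fft = 1
    while n_fft < buffer_size + len(h) - 1:
        n_fft *= 2
    H = np.fft.rfft(h, n_fft)
    y = np.zeros(len(x) + len(h) - 1)
    for start in range(0, len(x), buffer_size):
        block = x[start:start + buffer_size]
        yb = np.fft.irfft(np.fft.rfft(block, n_fft) * H, n_fft)
        y[start:start + len(block) + len(h) - 1] += yb[:len(block) + len(h) - 1]
    return y[:len(x)]
```

Each buffer costs O(n_fft log n_fft) instead of O(buffer_size * len(h)), which is why FFT-based FIR filtering wants buffers in the first place.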
I can see two possible courses of action:
- Get rid of buffering altogether
  This means relying on the current FIR filtering scheme (the inefficient shift-register technique) everywhere. On the plus side, this requires no modification of B2, and it currently works (including in standalone mode).
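For reference, a minimal sketch of what is meant by the shift-register technique (illustrative NumPy, not the actual brian2hears code): one multiply-accumulate pass per sample over a register of the last len(h) inputs, i.e. O(N·M) with no buffering and no FFT.

```python
import numpy as np

def fir_shift_register(x, h):
    """Direct-form FIR: per sample, shift the register of past inputs
    and take its dot product with the filter taps."""
    state = np.zeros(len(h))      # state[k] holds x[n - k]
    y = np.empty(len(x))
    for n, sample in enumerate(x):
        state[1:] = state[:-1]    # shift older samples back
        state[0] = sample         # newest sample in front
        y[n] = state @ h          # y[n] = sum_k h[k] * x[n - k]
    return y
```

Because it produces one output sample per input sample, it needs no buffer, which is exactly why it works unchanged in standalone mode.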
- Implement buffering
  This means implementing some buffering mechanism. On the plus side, it could also be used in B2, for example in the context of StateMonitors, where one may want to record a running average that could be computed online with FIR filtering (note that it can also be done with IIR filtering, which is more efficient).
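A quick sketch of the running-average point (illustrative NumPy, not StateMonitor code): an M-tap FIR moving average costs M multiply-adds per sample, while a first-order IIR (exponential) average costs one, which is why the IIR route is the more efficient one for this use case.

```python
import numpy as np

def running_mean_fir(x, m):
    """FIR moving average over the last m samples: m multiply-adds/sample."""
    h = np.full(m, 1.0 / m)
    return np.convolve(x, h)[:len(x)]

def running_mean_iir(x, alpha):
    """First-order IIR (exponential) average: one multiply-add per sample,
    regardless of the effective averaging window."""
    y = np.empty(len(x))
    acc = 0.0
    for n, sample in enumerate(x):
        acc = alpha * sample + (1 - alpha) * acc
        y[n] = acc
    return y
```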
If we choose to implement buffering, then we will need to do some work to keep compatibility with brian2 standalone and weave. At a minimum, we need to provide an implementation of FIR filtering in each target language. FFT implementations are easy to find in any language, but operating on complex numbers from within B2 may be hard.
For now, all filterbank objects use the dt of their source to set their internal sample rate. This is also the case when the source is a TimedArray (and Marcel advised against doing this).
It may not be so hard to allow Filterbank objects to have a different dt than the other B2 objects. This could be useful for example when using filterbanks that have a fixed samplerate in their formulation (e.g. TanCarney), whilst still being able to freely set other B2 objects' dts.
Scheduling has to be taken care of, and I have not thought about it much. I think that in most instances it is straightforward, but it may be more complex when dealing with control/feedback filterbanks. We could simply always update the ordinary filterbanks and _then_ the control filterbank. How fine-grained does our control over this schedule need to be (Bertrand: any ideas?)?
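The fixed ordering suggested above can be sketched as a per-timestep loop (toy classes, purely illustrative, not the brian2hears API): every ordinary filterbank advances first, then the control filterbank reads their fresh outputs and adjusts parameters for the next step.

```python
class ToyFilterbank:
    def __init__(self):
        self.gain = 1.0
        self.output = 0.0
    def update(self, sample):
        self.output = self.gain * sample

class ToyController:
    """Feedback: adjusts the filterbank's gain from its latest output."""
    def update(self, fb):
        if abs(fb.output) > 1.0:   # crude automatic gain control
            fb.gain *= 0.5

fb, ctrl = ToyFilterbank(), ToyController()
trace = []
for sample in [0.5, 3.0, 3.0, 3.0]:
    fb.update(sample)   # 1. ordinary filterbanks update first
    ctrl.update(fb)     # 2. control filterbank runs afterwards
    trace.append(fb.output)
```

Within one timestep the controller always sees the current output, and its adjustment only takes effect on the next sample; a finer-grained schedule would be needed only if the control had to act within the same timestep.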
- OnlineSounds
It may be interesting to have a sound object that derives from Group and computes the sound values at each timestep _online_. This is useful for long simulations where the inputs can be expressed as a function of t (sine waves, etc.) or can be computed on the fly (white noise, for example). This makes a big difference especially when neurons in a group each receive independent white noise.
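The memory argument can be sketched in a few lines (plain NumPy, not the proposed OnlineSounds API): generating each timestep's noise on the fly keeps memory constant in the number of timesteps, instead of precomputing an n_neurons × n_steps array up front.

```python
import numpy as np

rng = np.random.default_rng(0)
n_neurons, n_steps = 1000, 1000

# Online generation: only one timestep's worth of values exists at a time,
# so memory usage is O(n_neurons), not O(n_neurons * n_steps).
acc = np.zeros(n_neurons)
for _ in range(n_steps):
    noise = rng.standard_normal(n_neurons)  # this step's noise, never stored
    acc += noise
```

The precomputed equivalent would allocate a 1000 × 1000 array here; for hour-long simulations at audio sample rates that array becomes prohibitively large, while the online version is unaffected.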
- Multichannel TimedArray
If we choose not to mimic the B1H structure, then Sound objects would use the infrastructure of TimedArrays (as is the case now). In this context, we would need multichannel TimedArrays to represent multichannel audio.
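A minimal sketch of what a multichannel TimedArray amounts to (illustrative only, not the actual brian2 TimedArray API): a 2D array of shape (n_samples, n_channels) sampled at a fixed dt, looked up by time and returning one value per channel.

```python
import numpy as np

class MultichannelTimedArray:
    """Toy multichannel TimedArray: values[i, c] is channel c at time i*dt.
    Lookups past the end clamp to the last sample."""
    def __init__(self, values, dt):
        self.values = np.asarray(values)
        self.dt = dt
    def __call__(self, t):
        i = min(int(t / self.dt), len(self.values) - 1)
        return self.values[i]   # one value per channel

# e.g. a short stereo signal: left channel fades in, right fades out
stereo = MultichannelTimedArray(np.array([[0.0, 1.0],
                                          [0.5, 0.5],
                                          [1.0, 0.0]]), dt=0.1)
```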
- Sound playing
Pygame is a pain in the a**e, so I think it would make sense to get rid of it altogether. Other cross-platform options are available for sound playing/recording: JACK, PortAudio. I think PortAudio would be better, yet it may also be more complicated to install on some platforms (which is also the case for pygame on e.g. OSX). Any thoughts?
- HRTF support
Could this maybe be improved?
- the .process() method
This method may not be so easily implemented in brian2. A way to make this feature easy to implement is to change the syntax from fb.process() to process(fb, sound). This would allow manually creating and running a network inside the process function.
Here are things that may look like they work but currently don't:
- FIR filtering with multiple IRs (be careful)