
Brian2Hears meeting notes


Brian2hears meeting 22/06/2014

Points to be addressed:

  1. Strategy for buffering/FIR filtering
  2. Scheduling/clocks
    • samplerates
    • relevant for control filterbank/feedback (mostly scheduling issues)
  3. Sounds
    • OnlineSounds idea
    • multichannel audio (i.e. multichannel TimedArray)
    • implement multiple backends for sound playing
  4. Misc
    • HRTF support
    • the .process() method
  5. NotImplementedYet

[Dan] Some global issues: do we want B2H to be well integrated into B2 or a separate package? Do we want B2H to use the code generation techniques of B2 unmodified, or have its own separate code generation system? The advantage of having its own codegen system is that we can do things like combining multiple filterbanks into a single code block, which could be much more efficient (for cache and overhead reasons).

[Romain] It would be nice indeed to combine multiple filterbanks, but not critical; so I guess the question is: is it really worth the extra effort?

[Marcel] I think if possible, B2H should use what we have in B2 and not add anything major like its own codegen framework. It's a nice test application in a way.

Buffering

In brian1hears, Filterbanks operate based on buffers. This means that the result of filtering is always "precomputed" for a given number of samples. This is desirable in FIR filtering, for which the most efficient computation methods are FFT-based and necessarily operate on buffers.

I can see two possible courses of action:

  • Get rid of buffering altogether

This means relying everywhere on the current FIR filtering scheme (the inefficient shift-register technique). A plus side is that this requires no modification of B2, and it currently works (incl. in standalone mode).

  • Implement buffering

This means implementing some buffering mechanism. A plus side is that it could also be used in B2, for example in StateMonitors, where one may want to record a running average that could be computed online with FIR filtering (note that it can also be done with IIR filtering, which is more efficient).

If we choose to implement buffering, we will need to do some work to keep compatibility with brian2 standalone and weave: we at least need an implementation of FIR filtering in each target language. FFT implementations are easy to find in any language, but operating on complex numbers from within B2 may be hard. Both approaches are sketched below.
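For concreteness, here is a minimal numpy/scipy sketch of the two options above, outside of any Brian infrastructure: the naive time-domain scheme B2H currently uses, and a buffered FFT-based (overlap-add) version. Function names and the buffer size are illustrative, not B2H API.

```python
import numpy as np
from scipy.signal import fftconvolve

def fir_shift_register(x, h):
    # naive time-domain FIR, O(len(x) * len(h)): one multiply-add per tap
    # per sample -- the scheme currently used everywhere in B2H
    out = np.zeros(len(x))
    for n in range(len(x)):
        for k in range(min(len(h), n + 1)):
            out[n] += h[k] * x[n - k]
    return out

def fir_overlap_add(x, h, buffersize=1024):
    # buffered FFT-based FIR (overlap-add): filter one buffer at a time and
    # add the len(h)-1 samples of tail into the next buffer's output
    out = np.zeros(len(x) + len(h) - 1)
    for start in range(0, len(x), buffersize):
        block = x[start:start + buffersize]
        out[start:start + len(block) + len(h) - 1] += fftconvolve(block, h)
    return out[:len(x)]
```

For a long impulse response (e.g. a 512-tap HRTF), the overlap-add version is typically faster by a large factor, which is what makes buffering attractive in the first place.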

[Dan] I think that we can't drop support for long FIR filters (e.g. HRTFs) and doing it without FFT-based methods is just too slow, so as far as I can tell we are going to have to have buffering support. My question is: do we (a) implement this only for Brian Hears as in B1.x, or (b) extend the basic mechanisms of B2 to include buffering as a possibility everywhere? The latter option has some potential advantages: for example for feedforward networks it might be more efficient to do a batch of say 100ms of one group, storing all the spike times, and only then run the target group propagating as we run. The main use case here would be GPU where this could potentially be a big win. But I'm not sure if it's worth the effort - something we can discuss.

[Romain] On the other hand, there are most often just 2 HRTFs for many more channels or neurons; is it still a limiting factor in that case? About the generalization to other Brian objects, I think there is indeed a case (apart from the GPU) where it would be useful: cluster simulations. This is a trick NEST uses for efficient communication: they impose a minimal axonal delay of 1 ms so that spike communication can be grouped.

[Marcel] For B2H: I think we should first do some benchmarking to see how terrible the simple solution is. As Romain said, it might not make much of a difference in realistic scenarios.

Scheduling/clock

For now all filterbank objects use the dt of the source to set their internal samplerate. This is also the case when the source is a TimedArray (and Marcel advised against doing this).

It may not be so hard to allow Filterbank objects to have a different dt than the other B2 objects. This could be useful for example when using filterbanks that have a fixed samplerate in their formulation (e.g. TanCarney), whilst still being able to freely set other B2 objects' dts.

[Dan] I think the B1H system is ok here, right? We have a default samplerate and an individual samplerate, and the dt is 1/samplerate.
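As a sketch of that convention transposed to B2 (assuming filterbanks end up as Group-like objects; the names here are illustrative): each object gets its own clock with dt = 1/samplerate, independent of defaultclock.

```python
from brian2 import *

samplerate = 44.1*kHz  # the B2H default samplerate

# a Group standing in for a filterbank, clocked at its own samplerate
fb = NeuronGroup(3000, 'out : 1', dt=1/samplerate)

# downstream neurons keep the ordinary defaultclock.dt
neurons = NeuronGroup(3000, 'dv/dt = -v/(10*ms) : 1', dt=defaultclock.dt)
```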

Scheduling has to be taken care of, and I have not thought about it much. In most instances it is straightforward, but it may be more complex when dealing with control/feedback filterbanks. We could just always update the filterbanks and _then_ the control filterbanks. How fine-grained does our control over this schedule need to be (Bertrand: any ideas?)?

[Dan] My guess is that we'll want to do something a little bit smart here, but I don't think it should be a problem. The system we used for B1H basically works I think, it's just a matter of fixing it to fit in with B2.
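One way the "filterbanks first, control filterbank afterwards" ordering could be expressed with stock B2 scheduling (the when/order machinery); the groups and update code here are hypothetical placeholders:

```python
from brian2 import *

fb = NeuronGroup(100, 'out : 1')
ctrl = NeuronGroup(1, 'gain : 1')

# within the same scheduling slot, lower order runs first: all ordinary
# filterbanks update at order=0, the control filterbank at order=1
fb.run_regularly('out = 0.5 * out', when='start', order=0)            # placeholder filter update
ctrl.run_regularly('gain = clip(gain, 0, 1)', when='start', order=1)  # placeholder control law
```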

Sounds

  • OnlineSounds

It may be interesting to have a sound object that derives from Group and computes the sound values at each timestep _online_. This is useful for long simulations where the inputs can be expressed as a function of t (sine waves, etc.) or computed on the fly (white noise, for example). I can see that this makes a big difference, especially if one wants to perform simulations where neurons in a group receive independent white noise.

[Dan] Yeah, this was something we wanted to do for B1H but never really implemented - seems like a good idea.

[Romain] I agree!
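A minimal sketch of the idea using stock B2 machinery (the names here are made up, not a proposed API): a Group whose value is freshly computed every timestep, giving each channel independent noise with nothing precomputed.

```python
from brian2 import *

samplerate = 44.1*kHz

# stand-in for an OnlineSound: 100 channels, each a sine carrier plus
# independent white noise, computed online instead of read from an array
sound = NeuronGroup(100, '''value : 1
                            freq : Hz (constant)''', dt=1/samplerate)
sound.freq = '100*Hz + i*10*Hz'
sound.run_regularly('value = sin(2*pi*freq*t) + 0.1*randn()')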

  • multichannel timedarray

If we choose not to mimic the B1H structure, then Sound objects would use the infrastructure of TimedArrays (this is the case now). In this context we would need multichannel TimedArrays to represent multichannel audio.

[Dan] Definitely an option!
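A sketch of how this could look, assuming TimedArray supports (or grows support for) 2D values, with time along the first axis and channels along the second:

```python
from brian2 import *
import numpy as np

samplerate = 44.1*kHz
# a 1-second, 2-channel sound: samples along the first axis, channels
# along the second (this assumes 2D support in TimedArray)
data = np.random.randn(44100, 2)
sound = TimedArray(data, dt=1/samplerate)

# in equations, the second argument selects the channel
ears = NeuronGroup(2, 'value = sound(t, i) : 1', dt=1/samplerate)
```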

  • Sound playing

Pygame is a pain in the a**e, so I think it would make sense to get rid of it altogether. Other options are available for sound playing/recording on multiple platforms: JACK, portaudio. I think portaudio would be better, but it may also be more complicated to install on some platforms (which is also the case for pygame on e.g. OSX). Any thoughts?

[Dan] Agreed - we should get rid of pygame if possible. But, we do need something that works on all platforms. Another possibility is to have multiple possible backends for playing the sounds?

[Marcel] I think for the start we don't need anything at all -- if one has pygame installed then playing a sound is only a line of code anyway.
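A sketch of what a multiple-backend play() could look like (the function name and fallback order are made up; only the library calls themselves are real PyAudio/pygame API):

```python
import numpy as np

def play(sound, samplerate=44100):
    # hypothetical backend dispatch: try PyAudio first, fall back to pygame
    data = np.asarray(sound, dtype=np.float32)
    try:
        import pyaudio
        pa = pyaudio.PyAudio()
        stream = pa.open(format=pyaudio.paFloat32, channels=1,
                         rate=samplerate, output=True)
        stream.write(data.tobytes())
        stream.stop_stream()
        stream.close()
        pa.terminate()
    except ImportError:
        import pygame
        pygame.mixer.init(frequency=samplerate, size=-16, channels=1)
        pygame.sndarray.make_sound((data * 32767).astype(np.int16)).play()
```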

Misc

  • HRTF support

Could this maybe be improved?

  • the .process() method

This method may not be so easy to implement in brian2. One way to make it easy is to change the syntax from fb.process() to process(fb, sound). This would allow us to manually create and run a network inside the process function.

[Dan] I use fb.process() a lot so I want to have this functionality, but I'm happy with replacing it by a function call.
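A sketch of the function-call version (all names here are hypothetical, including the output variable 'out'): process builds a throwaway Network around the filterbank, runs it for the duration of the sound, and returns the recorded output.

```python
from brian2 import *

def process(fb, sound):
    # 'sound' is assumed to be a TimedArray already referenced by fb's
    # equations, so here it only determines how long to run (assuming
    # TimedArray exposes its values and dt)
    mon = StateMonitor(fb, 'out', record=True)
    net = Network(fb, mon)
    net.run(sound.values.shape[0] * sound.dt)
    return mon.out[:]
```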

NotImplementedYet

Here are things that may look like they work but currently don't:

  • FIR filtering with multiple IRs (be careful)
  • Chaining filterbanks: this works, but with a different structure than in B1H. I will change it in the near future; just waiting on linked_variables

Conclusions

  • Buffering

Not for now; let us keep it as an option, and find out how much time is actually lost in FIR filtering (see the benchmark sketch below).
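A quick way to measure it, outside of Brian (the sizes are illustrative: 1 s of audio at 44.1 kHz against a 512-tap, HRTF-length filter):

```python
import numpy as np
import timeit
from scipy.signal import lfilter, fftconvolve

x = np.random.randn(44100)   # 1 s of audio at 44.1 kHz
h = np.random.randn(512)     # an HRTF-length FIR filter

t_direct = timeit.timeit(lambda: lfilter(h, [1.0], x), number=20)
t_fft = timeit.timeit(lambda: fftconvolve(x, h)[:len(x)], number=20)
print('direct: %.3f s   FFT: %.3f s' % (t_direct, t_fft))
```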

  • Scheduling/process

a B2H "AuditoryNetwork" kind of thing could take care of this. samplerate: make a default samplerate (44.1 k) for B2H objects

  • Sounds

Before we decide on the "play" library, let us just put in a save-as-WAV feature (see the sketch below). Start a document about use cases for operations on sounds (before deciding on inheritance from ndarray).
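The save-as-WAV feature is essentially a one-liner with scipy (a sketch; the function name is made up):

```python
import numpy as np
from scipy.io import wavfile

def save(sound, filename, samplerate=44100):
    # scale float data into 16-bit PCM and write it out
    data = np.asarray(sound)
    wavfile.write(filename, samplerate,
                  np.int16(data / np.max(np.abs(data)) * 32767))
```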

  • Library

Find a way to include other people's code (Matlab/C) for complex models (TanCarney/ZilanyBruce...)?

  • Control

Find a simple example; deal with scheduling and linked variables.

  • Syntax options

Start a document and decide on syntax for AuditoryNetwork, control, OnlineSounds? Operations on sounds?