Sketch out general architecture and first steps #1
@vine77 awesome! I'm off for Halloween fun atm/tonight but will get some thoughts in tomorrow! 🎉
Oooh, this looks fun! Basically a shared sequencer interface?
Hi @natevw! So glad to see you chime in! 🎉🎉🎉 We're definitely at the 'define what it is' stage! ⚡
Basic features:
@luciusbono Good call. Adding a couple more here:
Most loop pedals I've worked with have that intuitive start/stop interface, with either no capability for editing the start/stop points or with that capability buried in menus somewhere.
@natevw, yeah, I think a "shared sequencer interface" could definitely be an outcome of this work. I even ponder a "collaborative Ableton Live in the cloud," though that's currently outside the scope of this project. As I mentioned in the thread @obensource referenced, there is always the issue of latency in networked music performance (with some citing 30 ms as the bound of human perception), but if you can't guarantee a very low latency, a looping paradigm seems to offer ways to sidestep the issue, e.g. by quickly syncing multiple workspaces even if the transport/play positions aren't perfectly aligned in real time. @obensource, do you have any sense of the latency in milliseconds you were getting with the WebRTC version of midisocket?
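If that wasn't measured directly, a rough way to get a ballpark number is a ping/pong over the open data channel. Just a sketch, assuming `channel` is an already-connected RTCDataChannel (signaling setup omitted):

```js
// Rough round-trip latency probe over an already-open RTCDataChannel.
// `channel` is assumed to be a connected data channel; signaling is omitted.
function measureRoundTrip(channel) {
  return new Promise(resolve => {
    const sentAt = performance.now();
    channel.addEventListener('message', function onPong(event) {
      if (event.data === 'pong') {
        channel.removeEventListener('message', onPong);
        resolve(performance.now() - sentAt); // one-way latency ≈ half of this
      }
    });
    channel.send('ping');
  });
}

// The remote peer just echoes pings so the sender can time the round trip:
// channel.addEventListener('message', e => { if (e.data === 'ping') channel.send('pong'); });
```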
For this project, I think @abstractmachines framed a good goal:
I like that for a few reasons:
For v0.1, I'm assuming we'd also want to initially limit scope to a self-contained, non-collaborative looper and then add the ability for multiple musicians to collaborate in an upcoming version? Thoughts? There really are a lot of things we could do with a project like this, but in the spirit of agile development, starting with the simplest usable product may be beneficial. What do we think is a good order of operations such that we can start from a minimal proof of concept and work up from there? I've been thinking of the first iteration as basically a MIDI version of the traditional guitar stompbox looper (a rough sketch follows at the end of this comment):
Future features:
I'm just brainstorming. Would love to hear more about what others think about first steps. Questions:
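To make that stompbox idea concrete, here's a rough sketch of what the record/playback core might look like on top of the Web MIDI API. Every name here is a placeholder and nothing like this exists in the repo yet; it just captures incoming messages with their timestamps and replays them on a fixed-length loop:

```js
// Placeholder sketch of a stompbox-style MIDI looper core (not real project code).
let loop = [];           // recorded events: { data, offsetMs }
let recordStart = null;  // performance.now() when recording began, or null

async function initMidi() {
  const midi = await navigator.requestMIDIAccess();
  const input = midi.inputs.values().next().value;    // first available input
  const output = midi.outputs.values().next().value;  // first available output

  input.onmidimessage = event => {
    if (recordStart !== null) {
      // event.timeStamp shares performance.now()'s time base
      loop.push({ data: event.data, offsetMs: event.timeStamp - recordStart });
    }
  };
  return output;
}

function startRecording() {
  loop = [];
  recordStart = performance.now();
}

function playLoop(output, lengthMs) {
  recordStart = null; // recording stops when playback starts
  const start = performance.now();
  for (const { data, offsetMs } of loop) {
    output.send(data, start + offsetMs); // schedule ahead via the timestamp argument
  }
  setTimeout(() => playLoop(output, lengthMs), lengthMs); // naive retrigger
}
```

Even this naive version surfaces the interesting questions: where the loop length comes from (fixed, tap tempo, or set by the first record pass) and how tightly the retrigger needs to be scheduled.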
@vine77 Yeah, sidestepping the latency issue seems like a key win for something like this! That's what caught my eye (besides just liking loopers in general :-). With the right design, jitter should be a complete non-issue, and you might even have time on the order of a whole bar to get everyone in sync. Sounds like for now the goal is a "solo instrument" though, rather than something networked?
I think something networked would be awesome if y’all are up for it. Just trying to think about how to implement incrementally.
+1 for networking too, but that doesn't mean it has to be the first lines of code either. My vote would be "React" (preact + redux + reselect compiled from ES2017), but I'm not opposed to a "raw JS" approach either (maybe with the core of D3?), especially since the browsers with MIDI are the browsers with the newer language niceties anyway. I think the main key is to keep the data layer rigorously separate from the DOM rendering. That will make a potential transition from any initial solo experiments to adding network features much easier too. [Clarification: if the interface is just some control buttons, plain JS is probably the way to go. I was imagining more of a sequencer/piano-roll display, which probably isn't needed at first?]
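Either way, the separation itself doesn't need a framework. Purely to illustrate the idea (none of these names exist in the project; this is just a sketch), the data layer could be a tiny store that the rendering code only subscribes to:

```js
// Illustrative only: a tiny hand-rolled store keeping looper state away from the DOM.
function createStore(initialState) {
  let state = initialState;
  const listeners = [];
  return {
    getState: () => state,
    update(patch) {                        // data layer: state changes live here
      state = { ...state, ...patch };
      listeners.forEach(fn => fn(state));
    },
    subscribe(fn) { listeners.push(fn); }
  };
}

const store = createStore({ recording: false, playing: false, loop: [] });

// Rendering layer: reads state, never owns it.
store.subscribe(state => {
  document.querySelector('#status').textContent =
    state.recording ? 'recording' : state.playing ? 'playing' : 'stopped';
});

// The MIDI layer (or a future network layer) only ever talks to the store:
// store.update({ recording: true });
```

Swapping this out later for redux, or rendering with preact instead of direct DOM calls, wouldn't change the shape of that boundary.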
What's the plan here, crew? I have a synth coming this week that this would be great for, so I'd love to take on something here, but I don't want to just take over either. If there are no objections, I'd like to start a really simple prototype of the core MIDI stuff. It might not have much interface to speak of at all at first: I might just map two "unused" keys on the input device to toggle record and play/pause? If there are objections, I guess I'll race to the first Pull Request then :-P
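Concretely, the key mapping I have in mind could be as small as this; note numbers 21 and 23 are arbitrary picks just for the sketch, not anything decided:

```js
// Hypothetical mapping of two "unused" keys to transport toggles.
// 21 and 23 (lowest A and B on an 88-key board) are arbitrary example notes.
const RECORD_KEY = 21;
const PLAY_KEY = 23;
let recording = false;
let playing = false;

function handleMessage(event) {
  const [status, note, velocity] = event.data;
  const isNoteOn = (status & 0xf0) === 0x90 && velocity > 0;
  if (!isNoteOn) return;

  if (note === RECORD_KEY) {
    recording = !recording;
    console.log(recording ? 'record armed' : 'record stopped');
  } else if (note === PLAY_KEY) {
    playing = !playing;
    console.log(playing ? 'playing' : 'paused');
  }
  // Any other note would fall through to the looper's normal capture path.
}
```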
Yeah, I agree we need to get some of the core MIDI stuff out of the way. @natevw, feel free to start a prototype. I think as long as we're communicating about it, anyone should feel free to move the project forward!
Questions:
Hooking up MIDI OUT into another synth isn't necessarily a requirement, but that is the part that will help a remote musician collaborate. Not married to the idea or anything. The test tone may help development move more quickly. Sorry I haven't been involved too much yet, been busy! Oh yeah, regarding "i.e. the ability to translate an incoming MIDI stream into a data structure...": my coworker just did that on a project very recently with WebMIDI. Pinging him now about it.
Oh, awesome. Will be curious what your coworker has to say about it. Add a link if it happens to be on GitHub. I'm curious about the time-tracking strategy for incoming MIDI data. Though now that I check, it looks like… And to clarify, I do think it should be a requirement that this project include MIDI OUT. I was just questioning whether we also could use a test tone (as a simple and separate component) really just to streamline development in the near term. That makes me curious... can one browser window send MIDI OUT to another browser window's MIDI IN?
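My guess on that last question: Web MIDI only sees the ports the OS exposes, so window-to-window routing would have to go through an OS-level virtual bus (the IAC Driver on macOS, loopMIDI on Windows, or similar). A sketch, assuming a virtual port whose name contains "IAC":

```js
// Window A: send a test note out through an OS-level virtual MIDI bus.
// Assumes a virtual output whose name contains "IAC" (e.g. the macOS IAC Driver).
async function sendTestNote() {
  const midi = await navigator.requestMIDIAccess();
  const out = [...midi.outputs.values()].find(p => p.name.includes('IAC'));
  if (!out) throw new Error('No virtual MIDI output found');
  out.send([0x90, 60, 100]);                        // note on, middle C
  out.send([0x80, 60, 0], performance.now() + 500); // note off 500 ms later
}

// Window B: listen on the matching virtual input.
async function listenForTestNote() {
  const midi = await navigator.requestMIDIAccess();
  const input = [...midi.inputs.values()].find(p => p.name.includes('IAC'));
  if (input) input.onmidimessage = e => console.log('received', e.data);
}
```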
@vine77 Will do, getting his permission to share. I think he's currently refactoring. I'll get some tips from him too!! I get it now :) Haha, yes, a test tone would be a very useful addition. Piping audio streams from one process to another (likely IPC) is something Soundflower does… I think that would be available in something like MIDI Monitor or similar freeware? Or maybe something made by the Jitter or Ableton people? There are a couple of packages like this one and this one too. The last one mentions it's "duplex"; I assume they mean full duplex.
Just a heads up, I did start something in the "nvw-proto1" branch, although it's not as far along as I intended. [For a while last night I was stuck inexplicably not getting any MIDI message events. I restarted Chrome, which I think simultaneously updated to a new version, and it finally worked as expected!] I did end up pulling in a couple of tiny external frameworks:
These are both used via old-school script tags and without JSX syntax, so my prototype-in-progress is still all simple static files with no compilation needed. That said, I am using any and all fancy new ECMAScript stuff [async/await, classes, arrow functions, destructuring…] that Chrome has available.
Oh, and as far as architecture goes, it's already a bit of a mess (and it's not even doing anything yet ;-), but:
I'm not entirely sure of the best way to bridge between the two yet. To avoid extra abstractions and wrappers, I might just make the MIDI logic aware of the…
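One low-ceremony option for that bridge (purely a sketch, assuming a redux-style store with a `dispatch` method; none of this is in the nvw-proto1 branch) is to hand the MIDI module only a dispatch callback, so it never needs to know what sits behind it:

```js
// Sketch: the MIDI code only knows about a dispatch() callback, keeping it
// ignorant of whichever store/view library ends up behind it.
async function connectMidi(dispatch) {
  const midi = await navigator.requestMIDIAccess();
  for (const input of midi.inputs.values()) {
    input.onmidimessage = event => {
      const [status, note, velocity] = event.data;
      dispatch({
        type: 'MIDI_MESSAGE',
        status,
        note,
        velocity,
        timeStamp: event.timeStamp
      });
    };
  }
}

// Wiring it up, e.g.: connectMidi(action => store.dispatch(action));
```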
Update: last night I cleaned up the prototype code a bit and added a super simple soft synth for preview. Was hoping to add some actual looping today and open a pull request, but that's the next step.
That's awesome @natevw! I'll check it out this weekend. |
Let's sketch out the very basics of a looper based on the Web MIDI API so we can get this project started!
/cc @abstractmachines @luciusbono @obensource