web synth docs

2021-04-25

MAIN GOAL: Plan out what kind of features + functionality we want/need for the [sample-editor], sample playback, and other sample-related modules.

Working with samples is a vital part of music production, and I've reached a stage of the project where I find that I need to manipulate, record, and work with samples dynamically within the tool itself. Currently, the only thing that exists is the sample recorder that's glued on top of the granular synth. That's obviously not the right place for it, and we should have a dedicated sample recorder node/component - possibly one that's combined with a top-level [sample-editor] component.

So, what exactly is the sample recorder going to consist of? What do we want to do?

Well, one thing that we definitely want to be able to do is cut up and combine samples in an Audacity-like fashion. I want to be able to reverse samples in-place, place many samples in order to produce rhythms, place one-shot samples, etc. So more and more, the core of this functionality is looking like a single track of Audacity. Of course, there are many pieces of this that will need to be created in order to make it work. Happily, I think that this is something that can sit on top of the notebox backend that we built before; so many of its features and functionalities match directly.

Before we go on, I want to make extra sure that this is actually required. I'm almost sure that it is, but let's list the places where I'm almost sure this will be needed right now:

  • track compositor, where we will have sample tracks that can be edited and composited like MIDI tracks
  • standalone VC for sample recording/import/export. I think that this is something that will be important; the actual "sample editor" VC. It will basically be the same UI as the single sample track editor and have the same features, but we can have a toolbar there with import/export functionality, tight integration into the [sample-library], and stuff like that.

Yeah I think that's the gist of it. It's become quite clear that the single track sample editor is the core of what we want to build. Let's flesh that idea out further.

There will be multiple samples which will serve as the "notes" for the line. Each note has an associated sample and some set of modifiers. I like the idea of having some source sample and a list of transformations applied to it which yield the output sample that is actually played. We can cache all of the layers and save a rendered version of the output for efficiency, but it makes sense that we need some kind of source audio data to fall back on which contains the actual data.
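
To make that concrete, here's roughly the shape I'm imagining for these notes. All of the names here are placeholders rather than anything final:

```ts
// Hypothetical data model: a source sample plus an ordered list of
// transformations that yield the audio that actually gets played
type SampleTransformation =
  | { type: 'reverse' }
  | { type: 'pitchShift'; semitones: number }
  | { type: 'trim'; startSampleIx: number; endSampleIx: number };

interface SampleNote {
  /** Position of the note within the line */
  startPoint: number;
  /** ID of the source sample in the sample library */
  sourceSampleId: string;
  /** Transformations applied in order to the source sample */
  transformations: SampleTransformation[];
  /** Cached render of the transformed output; can always be re-derived from the source */
  renderedOutput?: Float32Array;
}
```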

There will need to be an AWP which handles actually playing back the line of samples. That will be OK; playing back samples is pretty easy: you just read stuff out of a buffer. There may be some annoying sample alignment kinds of things to deal with, but I think it won't be too big of a problem. We will need to write the code for the various sample transformations, but we can handle that in a number of ways, and there is no real-time requirement for it, so that's cool too.
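
Here's a bare-bones sketch of what that playback AWP could look like. This is just the "read stuff out of a buffer" core; the real thing would handle multiple notes, scheduling against the current beat, and all the alignment stuff. Names are hypothetical:

```ts
// Minimal playback sketch; assumes the AudioWorklet global scope types are
// available. The UI thread posts rendered sample data over the port.
class SamplePlayerProcessor extends AudioWorkletProcessor {
  private sample: Float32Array = new Float32Array(0);
  private playheadIx = 0;

  constructor() {
    super();
    this.port.onmessage = (evt: MessageEvent) => {
      this.sample = evt.data as Float32Array;
      this.playheadIx = 0;
    };
  }

  process(_inputs: Float32Array[][], outputs: Float32Array[][]): boolean {
    const out = outputs[0][0];
    for (let i = 0; i < out.length; i++) {
      // Read straight out of the buffer; emit silence once we run off the end
      out[i] = this.playheadIx < this.sample.length ? this.sample[this.playheadIx++] : 0;
    }
    return true; // keep the processor alive
  }
}

registerProcessor('sample-player', SamplePlayerProcessor);
```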

We might end up setting up some kind of class inheritance for things like looping-- actually probably not. We're handling that pretty fundamentally differently. I like the idea of just building everything from scratch for the sample editor; it's very different from the MIDI editor in more ways than one. If we want to re-use zooming/panning/etc. functionality, we can do so by pulling out helper functions I think.

I think it is a good idea to plan for having many of these sample lines together in the same view and controllable as a group, just like Audacity. I don't have an exact vision for how that will work or what the exact requirements will be, but making view stuff externally controllable is probably a good idea. I don't think we'll need to do a ton to facilitate that, though.

There is a need for a waveform renderer. We already have a primitive version of that built, and we probably want to upgrade it. We can do interpolation and stuff. We can add caching if we need to. This will be a function-like thing rather than any kind of module, similar to the note lines backend. We call into it to get the data we need, and then we render it from within PIXI.
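
As a sketch of the function-like interface I mean: something that takes raw samples plus a pixel width and hands back min/max pairs per column for PIXI to draw. Names here are made up; interpolation and caching would layer on top:

```ts
// Hypothetical waveform renderer core: one min/max pair per pixel column
interface WaveformColumn {
  min: number;
  max: number;
}

function computeWaveformColumns(samples: Float32Array, widthPx: number): WaveformColumn[] {
  const columns: WaveformColumn[] = [];
  const samplesPerColumn = samples.length / widthPx;

  for (let col = 0; col < widthPx; col++) {
    const startIx = Math.floor(col * samplesPerColumn);
    const endIx = Math.min(
      Math.max(Math.floor((col + 1) * samplesPerColumn), startIx + 1),
      samples.length
    );

    let min = Infinity;
    let max = -Infinity;
    for (let i = startIx; i < endIx; i++) {
      if (samples[i] < min) min = samples[i];
      if (samples[i] > max) max = samples[i];
    }
    // Degenerate column (more pixels than samples); render a flat point
    if (min > max) {
      min = 0;
      max = 0;
    }
    columns.push({ min, max });
  }
  return columns;
}
```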

I think the idea of having arrays of transformations applied to source samples is a neat one. I've been trying to think of ways in which it won't meet the needs of what we want to do, but I can't really think of any. There might be issues if samples are deleted or not available, though.

An alternative I can think of is saving transformed samples as data rather than as transforms on source samples. This removes the dependency on the source sample, but it adds the requirement of saving the derived samples' data with the sample editor somehow. We can use IndexedDB for that and persist them to the server once user accounts are up and running, no problem. I'm thinking that this is the better idea; having stacked transformations is cool and feels quite smart, but I don't think there will really be any benefit to it. We can have a registry of internal samples that is associated with the sample manager instance. The backing store doesn't matter too much, but basically we have a set of samples that we have created and care about, and whenever we create a transformed sample, we create a new entry in that registry and use that. It's simpler and more flexible than the original, imo.
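
A rough sketch of what the registry's backing store could look like with raw IndexedDB; the DB/store names are made up:

```ts
// Hypothetical IndexedDB-backed store for the internal sample registry
function openSampleDB(): Promise<IDBDatabase> {
  return new Promise((resolve, reject) => {
    const req = indexedDB.open('sampleRegistry', 1);
    req.onupgradeneeded = () => req.result.createObjectStore('samples');
    req.onsuccess = () => resolve(req.result);
    req.onerror = () => reject(req.error);
  });
}

// Transformed samples get persisted as raw data under a fresh registry ID
async function storeTransformedSample(id: string, data: Float32Array): Promise<void> {
  const db = await openSampleDB();
  return new Promise((resolve, reject) => {
    const tx = db.transaction('samples', 'readwrite');
    tx.objectStore('samples').put(data, id);
    tx.oncomplete = () => resolve();
    tx.onerror = () => reject(tx.error);
  });
}
```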

SO - going into thinking about actually building this. What does an MVP look like?

We need the actual sample line UI, need to be able to -- let me make a list:

  • Basic sample line UI with scrolling, zooming, waveform rendering. Need to be able to drag in samples, move them around, copy/paste.
  • AWP to serve as the backend for the sample editor. We will need to communicate updates to it when the UI changes, handle de/serialization, and render audio live based on the current beat. There are probably pieces here that I'm forgetting as well; this might end up being somewhat involved due to the surface between the frontend and the AWP, but we'll see. I briefly considered merging this with the [granular-synthesizer], but decided against it. I think this should be a greenfield implementation solely focused on playing back samples with no special handling or anything like that. Max efficiency, etc.
  • Sample transformations. I want to be able to do things like pitch samples up/down, reverse samples, cut up samples, etc. (see the sketch after this list).
  • Sample recorder UI. It will embed a single sample line and expose VCs for inputs and outputs (passthrough) with a toolbar to control it. It should be versatile enough to support both capturing + editing actual samples as well as serving as a sort of export node for full songs. We can prioritize this after the main sample editor piece since it's going to embed almost all of it.
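
Here are the transform sketches referenced in the list above. Reverse and cut are trivial on raw Float32Array data; pitching samples up/down would need real resampling or phase-vocoder work, so it's left out:

```ts
// Copy first so the source sample's data is never mutated
function reverseSample(samples: Float32Array): Float32Array {
  return new Float32Array(samples).reverse();
}

// `slice` copies as well, leaving the source untouched
function cutSample(samples: Float32Array, startIx: number, endIx: number): Float32Array {
  return samples.slice(startIx, endIx);
}
```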

And... that's as far as I've thought so far. I think that this is actually a pretty good roadmap and that having this will greatly improve the capabilities of the platform as a whole. I'm pretty convinced that dealing with samples is very important; dealing with raw audio data like this is something that's universally necessary and I'm looking forward to having support for that in the application.

Tomorrow, we build. Right now, we sleep.


2021-04-26

OK, it's tomorrow, and I've started building. I just barely got started with the buildout of the PIXI instance and then built out some foundations for the AWP instance, since it seems that building things out from the bottom up is a good strategy. One thing I'm realizing is that it's possible that it would be better to adapt or re-use the [midi-editor] for this rather than build out a completely new thing.

I'm trying to consider all of the places where there can be overlap and where they diverge. I was trying to imagine how much would have to be re-worked to shoehorn the sample editor into the MIDI editor UI as well. It started with the realization that one of the things the "sample editor" is going to be used for is drum sequencing, which in many ways is a "sample synth". There are some things we'd want to do with that which aren't supported by the MIDI editor, like note transformations, but thinking about it, that's almost a triviality; it could be solved by abstracting over the rendering of individual notes and just having an abstract base note that we deal with from the MIDI editor.

THINGS THAT DO NOT MATCH:

  • MIDI numbers / Piano Keyboard
  • Playback / recording
  • Notes themselves
  • Note placement
  • beats/time domain

THINGS THAT DO MATCH:

  • Cursor / Cursor Gutter
  • Scrolling
  • Zooming
  • Note selection/deselection with shift/selection box/etc.
  • Note deletion
  • Base UI rendering (ticks, note lines, etc.)
  • De/serialization (?)

I have a feeling that making the wrong decision here may have very bad effects on the whole MIDI editor and surrounding ecosystem of code, so I want to make sure I make the right choice. If this is going to involve editing the 1k LOC MIDI editor component and adding 20 conditionals and special cases for sample editing stuff, I'm not sure I want to do it.

Let's explore what this might look like if we go for it. The first thing I'd do is make notes more generalized, splitting out MIDI notes and creating a new sample note component that matches the required interface. I think that would probably consist of adding some optional note metadata which is used to determine the renderer component and what gets rendered. The actual stuff that gets rendered in sample editor notes is mostly hidden, and the MIDI editor itself doesn't really care too much; notes can manage and re-render themselves happily, and all of that is going to look like changing a metadata field to the parent MIDI editor. This is actually really nice since the sample notes themselves are one of, if not the, most complicated pieces of the sample editor UI.
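
Something like this is the shape I have in mind for the generalized note; everything here is hypothetical:

```ts
// The parent MIDI editor only sees the base interface; the optional metadata
// field picks which renderer draws the note's contents
interface NoteMetadata {
  type: 'sample';
  sourceSampleId: string;
}

interface BaseNote {
  startPoint: number;
  length: number;
  /** Absent for plain MIDI notes; present for sample notes */
  metadata?: NoteMetadata;
}

// The editor dispatches rendering on the metadata without caring what's inside
function renderNote(note: BaseNote) {
  if (note.metadata?.type === 'sample') {
    // sample note renderer: draw the waveform inside the note rect
  } else {
    // existing MIDI note renderer: plain rectangle
  }
}
```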

Once that's done, we'll need to move on to note lines. This is where creating notes and playback both come into play. My initial idea here is to have a type parameter for NoteLine which determines what "kind" of noteline it is - MIDI or sample. This immediately feels like a special case, but let's see how far it will extend into other parts of the MIDI editor. The behaviors that we want to modify for the sample editor are note creation, selection, deletion, and event handlers like mousedown etc. Dragging notes around will still work just like before, at least mostly. Maybe we want to constrain sample notes to a single line, but that's a small thing, right? Hmm, things are getting quite murky here...
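
Roughly, the kind parameter idea looks like this (hypothetical shape):

```ts
// Only the behaviors that diverge (note creation shown here) branch on kind;
// scrolling, zooming, selection, dragging, etc. stay shared
type NoteLineKind = 'midi' | 'sample';

class NoteLine {
  constructor(public kind: NoteLineKind) {}

  handleDoubleClick(posBeats: number) {
    if (this.kind === 'midi') {
      // existing behavior: snap to the beat grid and create a MIDI note
    } else {
      // sample behavior: drop/pick a sample; maybe constrain it to this line
    }
  }
}
```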

Looking at this a bit more closely, it seems that almost all of the base MIDI editor is generic enough that I'm more confident than ever that this can work out. parentInstance actually handles all of the playback-related stuff, and we can make that abstract without too much effort and have one version for MIDI scheduling and one version for sample editor scheduling. I really think that this can work cleanly, at least cleanly enough that it outweighs the cost of re-building all of those lower pieces from scratch.

The ugliest edges I can think of right now are the MIDI number<->line ix mapping and dealing with beats vs. time for layout. One potential solution is to just treat beats and seconds as the same unit when the editor is being used as a sample editor. There is the consideration of having to deal with note collisions and accurately sizing notes, but that's something that would have to be dealt with even if we built the sample editor up from scratch. We can just have a local BPM that we use to do all of our note sizing.
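
For reference, with a local BPM the beats<->seconds mapping for note sizing is a one-liner (names made up); picking 60 BPM makes beats and seconds literally the same unit:

```ts
// Hypothetical local BPM for the sample editor; 60 BPM => 1 beat == 1 second
const SAMPLE_EDITOR_LOCAL_BPM = 60;

const beatsToSeconds = (beats: number, bpm = SAMPLE_EDITOR_LOCAL_BPM): number => (beats / bpm) * 60;
const secondsToBeats = (seconds: number, bpm = SAMPLE_EDITOR_LOCAL_BPM): number => (seconds / 60) * bpm;
```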

I really do think that this can be done. I was originally very hesitant because I had it firmly rooted in my mind that over-abstracting the MIDI editor v1 was what caused it to become horribly unusable from a dev point of view, but I think those issues were more strongly caused by the failure to keep rendering and state separate, the failure to use hierarchical rendering, the use of SVG as a renderer, and locating the core control logic in Wasm rather than JS. Yeah, listing them all out like that makes me even more convinced.

I will officially adopt the strategy of modifying/augmenting the existing MIDI editor to implement the sample editor rather than building it from the ground up. Let's see how it goes!
