Monday, February 21, 2011

The Spectrum - Pythagoras's interface

You can click on the image to zoom in on it. I am re-thinking the whole concept of fretting, and putting in snapping points that correspond directly to the spectrum. Most current attempts to find a set of fixed notes appear to be rooted in instruments with discrete notes, and in trying to wrangle MIDI into doing microtonality correctly. It won't! It never will! The protocol deeply assumes that you play 'notes' and bend them with at most 14 bits of resolution, usually over no more than +/- a whole tone, and that you either bend them all together or put them all on separate channels. The MIDI spec for alternate scales still assumes there is some small number of notes within an octave. The whole mentality is broken; a channel should behave more like a string that continuously takes on arbitrary pitch values. If MIDI simply took on a frequency orientation, this would all be simple and work correctly. But 30 years later, it hasn't changed. OSC is still not well supported, and is vague in the way XML is: it doesn't guarantee that you can just plug in an instrument and something reasonable will happen.

In Pythagoras, I have decided to tune the rows to the fourths of the spectrum rather than to those of 12tet. You can intuitively see the ratios and just play at the red-green line intersections, or follow the lines around if you want to calculate what a pitch actually is for some reason.

I don't want to obsess over the math, but here is a demonstration of why this works: the ratio 3/2 is the bright diagonal line. The next angle counter-clockwise (straight up!) is 4/3, then 5/4, then 6/5, then 7/6. To follow a line up is to multiply by its ratio; to follow it down is to divide by it. As an example, if you go up, diagonally up-right, down, then diagonally down-left, you navigate: root, fourth, octave, fifth, root. In numbers: 1, 4/3, 4/3 * 3/2 = 2, 2 * 3/4 = 3/2, 3/2 * 2/3 = 1.
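That walk can be checked in a few lines of Python with exact rational arithmetic (the move names here are my own labels, not anything in the app):

```python
from fractions import Fraction

# Moves on the lattice: following a line up multiplies by its ratio,
# following it down divides by it.
MOVES = {
    "up":        Fraction(4, 3),  # straight up: a fourth
    "diag_up":   Fraction(3, 2),  # the bright diagonal: a fifth
    "down":      Fraction(3, 4),  # back down a fourth
    "diag_down": Fraction(2, 3),  # back down a fifth
}

def navigate(start, path):
    """Return the pitch ratio at every step of a walk on the lattice."""
    pitch = Fraction(start)
    trail = [pitch]
    for move in path:
        pitch *= MOVES[move]
        trail.append(pitch)
    return trail

# root -> fourth -> octave -> fifth -> back to root
print(navigate(1, ["up", "diag_up", "down", "diag_down"]))
```

Using `Fraction` keeps every pitch a whole-number ratio, which is exactly the point: the walk returns to 1 with no rounding.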

So if you want a chord with some whole-number ratio, just make sure the lines match up at their intersections with the red line. I have an overwhelming preference for intervals made purely by navigating the fourths, as these are all the sounds I settled on after months of fretless play, before I came up with the idea of using markers to explain them away.


An older video showing the microtonal markers closer up:


Notice that I have not made any mention of any number of equally spaced smallest intervals. I don't see the usefulness of anything except 12, or perhaps 24. You will, though, see a tantalizingly close-to-equal pattern when you play a median second interval (a quarter-flat whole tone) in Pythagoras; I haven't counted exactly, but it looks like the 53-tet that I have read about. In any case, it's a uselessly large number that might as well have you playing completely fretless.
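As a quick sanity check on that 53 guess, here is a short sketch (my own, outside the app) of how closely just intervals land on steps of 53-tet, which is the standard reason 53 keeps showing up in fourths/fifths-based tunings:

```python
import math

STEP = 1200 / 53  # one step of 53-tet, in cents (~22.64 cents)

def cents(ratio):
    """Interval size of a frequency ratio, in cents."""
    return 1200 * math.log2(ratio)

# How far off the nearest 53-tet step is each just interval?
for name, ratio in [("fifth 3/2", 3 / 2),
                    ("fourth 4/3", 4 / 3),
                    ("major third 5/4", 5 / 4)]:
    c = cents(ratio)
    steps = round(c / STEP)
    error = c - steps * STEP
    print(f"{name}: {steps} steps, error {error:+.2f} cents")
```

The just fifth lands within about 0.07 cents of 31 steps, which is why a lattice built from pure fourths and fifths looks almost exactly like 53 equal divisions.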

Also note that in the discrete Fourier transform (turning signals into frequencies electronically), the lowest detectable frequency is the fundamental, and only integer multiples of it can be represented cleanly. That is, if frequency 1 is the lowest detectable, then 3 is a fifth and an octave up; but there isn't a clean bin available for 2^(17/12) times the fundamental, the pitch 17 'frets' up. This integer limitation is microtonality, and it is completely natural in the sense that it is tied rather directly into computing, math, and physics. It is also the practical sort of microtonality created by whole-number ratios.
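The bin argument is easy to sketch without any audio code: with an N-point DFT at sample rate sr, bin k sits at k*sr/N, so integer harmonics of the bin-1 frequency land exactly on bins while the equal-tempered interval falls between them (the numbers below are illustrative choices, not anything from the app):

```python
N, SR = 1024, 44100.0
f0 = SR / N  # bin 1: the lowest nonzero analysis frequency

def bin_position(freq):
    """Which (possibly fractional) DFT bin a frequency falls on."""
    return freq * N / SR

# An integer harmonic: 3x the fundamental (a fifth plus an octave up)
# sits exactly on bin 3.
print(bin_position(3 * f0))

# The 12-tet pitch 17 'frets' up is 2^(17/12) ~ 2.67x the fundamental:
# it falls between bins 2 and 3, with no clean bin of its own.
print(bin_position(2 ** (17 / 12) * f0))
```

Energy at a between-bins frequency smears across neighboring bins (spectral leakage), which is the sense in which only whole-number ratios of the fundamental are represented cleanly.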

Anyway, this is where Pythagoras's progress stands. It hasn't surpassed Mugician yet, but the plan is for it to be the best instrument for playing fretless and learning microtonality in a free-form manner. It is not open source, as Mugician now is.

Wednesday, February 16, 2011

Mugician Open Sourced On GitHub

I put Mugician up on GitHub. All I ask of anybody using it: don't confuse people with what you build. I would prefer that you submit sane changes and bugfixes for the official build. But if you want to do something weird, then it will have to go into your own instrument. In that case, give credit to Mugician for its lineage, give your app its own name, and give it its own app id. Most of all: make sure that there isn't something about your app that breaks the Mugician builds that users will already have on their iPads.

I am working on a different instrument now, but I think this is useful for historical reasons, and for people who would simply like to make derivatives. Mugician's code isn't pretty. In some respects, it might be the worst code I have ever written (theoretically, in a pure software-engineering sense). But an incredible amount of craftsmanship went into getting the details right as far as sound, latency, feel under the fingers, etc. That's why it looks like some clay that has been stretched and pulled endlessly!

It was a rush job, but it served me really well in the real world. It worked out so well that having it became a sort of addiction that prevented me from having a next instrument; or more importantly, prevented me from caring about what anybody else wants.

I am working on Pythagoras now, which is similar to Mugician, and uses the same sound engine (which I had to pull out and clean a little bit).


Note that this source is not exactly like the build on the store. I hadn't been bothering to commit and tag builds when I was working on it. There is also something that needs to be commented out for anything that you plan to submit to Apple.

Contact me: rob.fielding@gmail.com

Wednesday, February 9, 2011

Pythagoras Is Underway

The more I think about it, the more appropriate Pd sounds as an actual alternative to both OSC and MIDI. The iPad is a mobile device, so you really need a brain for it no matter what else you do. And this thought just stuck with me:

Don't talk to an external device over a protocol in order to use its patch. That has inherent latency, risk of stuck notes, lost connections, and setup complexity. Move the patch into the instrument if it is possible.

What people really want when they ask for MIDI or OSC is the ability to control arbitrary patches. What they are not asking for is the latency hassle of talking over a network, a rat's nest of devices connecting to each other, or some configuration so that the components find each other. Pd is interesting in that, if it becomes a standard, you can move the patches into an on-board brain rather than talking over a protocol. Once you have a patch installed on a device, you can unplug it and chuck it off into a corner until you need it, rather than having to get devices strung together correctly.

In any case, Pythagoras is the name of this app for now. It's a basic bit of code that just exercises fast 10 finger tracking, getting libpd fired up, and activating a polyphonic synth. I am a complete Pd newbie, so I may be writing the world's most inefficient synth, but in any case the sound is smooth with no skips.

Mugician was painful to write in large part because I was learning signal processing and making the efficiency tradeoffs between reverb, harmonics, and the rest of the system. I managed to get to around 30 ms of latency, and many things come into play to make that happen. The most important of all is the buffer size being used. Since a buffer must complete before the next one can start, buffers must be as short as performance allows; I use 256-frame buffers in my app so that responses can be handled right away. There is a tradeoff, however: as you shrink the buffer size toward 1, CPU usage gets out of control and you end up being late. When you are late there is silence, which creates clicking noises in the sound: impulses. There is a lesser-known thing to worry about as well. The touch screen of an iPad physically takes something like 12 ms to detect touches. Then the touches have to be discovered by the OS and handed to the application for processing. And when the app gets these touches, it tells the sound engine to change a parameter like volume or frequency as fast as possible; depending on the implementation, more than just the buffer length may determine how long that takes (think of thread scheduling, etc.).
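The buffer arithmetic above is easy to sketch. This is a rough budget of my own, assuming a 44.1 kHz output rate; the 12 ms touch figure is the estimate from the text, and the two-buffer audio budget is an assumption (a parameter change can arrive just after a buffer has been filled and wait for the next one):

```python
SAMPLE_RATE = 44100.0  # assumed output sample rate

def buffer_latency_ms(frames):
    """Time for one audio buffer to play out, in milliseconds."""
    return frames / SAMPLE_RATE * 1000

touch_ms = 12.0                    # rough touchscreen scan time from the text
audio_ms = buffer_latency_ms(256)  # ~5.8 ms per 256-frame buffer

# Budget roughly two buffers on the audio side: one being played,
# one already queued when the parameter change lands.
total_ms = touch_ms + 2 * audio_ms
print(f"per buffer: {audio_ms:.1f} ms, rough total: {total_ms:.1f} ms")
```

That already accounts for most of a 30 ms figure before OS touch delivery and thread scheduling are added, which is why shrinking the buffer is the first lever to pull.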

I figured out what I was doing wrong with latency when using libpd. Pythagoras's original goals are being pursued now.

Saturday, February 5, 2011

libpd for iOS - MaxMSP brain in an iPad music app

A few nights ago, I got a basic OpenGLES2 application running with libpd, playing a synth defined completely in the *.pd file. What this means is something like Mugician where the synth can be totally rewritten (and updated by users after the app has shipped)! I presume the signal processing is much more efficient than my custom code. Pd is also a standard for sound and synth design.

The only downside is that I used OpenGLES2 again for the user interface, which means I have to say no to a lot of nice things to keep complexity down. It has awesomely low latency, but no readily available solution for fonts, etc. Mugician, for instance, can only draw lines and triangles, and has nothing in its design to support windows, menus, etc.