Model Maker
From code-breaking jumbled genres to crafting fresh analogue circuits out of software emulations… For modelling maestro Andrew Simper, all it took was a little feedback.
Andy Simper: Mr Cytomic, the ‘circuit whisperer’
I can still remember the first time I saw Bomb Factory’s BF76 plug-in. At the time, I would have struggled to recognise the legendary compressor on which it was modelled, let alone describe its sound. Still, I was entranced by its direct connection to the studio world I yearned to experience. In the 15 or so years since, ever more developers have chosen to brandish the ‘Analogue Modelled’ standard as a universal indicator of quality and authenticity. I’m certain, for many newcomers, adolescent gear lust first takes root via these virtual d’vices. But what’s it all about? Does my software interface need to look like a wall-o’-rack at Ocean Way? And are the tones I’m hearing capturing the musical mojo or just the myth? It’s time to call circuit whisperer Andrew Simper and get some answers.
Based in Perth, Andrew has spent the last decade etching his own path through the intricate field of component-level modelling of analogue audio circuits. His Cytomic plug-ins — The Glue and The Drop — continue to feel like well-kept secrets in spite of their popularity. A recent well-publicised collaboration with Ableton has seen The Glue’s rebirth as a native Live device and the contribution of a new linear state variable filter (SVF) algorithm for EQ8. It’s rapidly closing in on 20 years since his debut Vellocet VReorder plug-in first garnered international interest, but it was Simper’s time at FXPansion that proved most influential on a career now dominated by the analysis, emulation and modification of classic analogue audio gear.
Andrew Simper: When I arrived at FXPansion I still didn’t own an engineering textbook. I hadn’t done any circuit analysis. I just managed to code stuff by ear that sounded reasonable and used some interesting ideas. While there, two things happened. Firstly, Antti Huovilainen wrote a prominent paper on the non-linear modelling of a Moog low-pass filter. He actually broke down the circuit in a practical way, and that opened the door for me: suddenly this stuff seemed doable. Secondly, FXPansion wanted a model of a bus compressor for BFD 2. All of a sudden that was my job and I had to learn how to model. They provided me with a hardware unit to A/B against (an SSL X-Logic G series compressor) and, with a bit of help from Antti, we got going.
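For readers curious what that kind of model looks like in code, here is a minimal sketch of a tanh-saturated, four-stage ladder low-pass in the general spirit of the approach Simper describes. The structure, coefficient names and simple update are illustrative assumptions only, not Huovilainen’s published algorithm or any Cytomic code.

```cpp
// Minimal sketch of a tanh-saturated, four-stage "ladder" low-pass.
// An illustration of the general idea only; the forward update and
// feedback arrangement are assumptions for this example.
#include <cmath>

struct LadderSketch {
    double stage[4] = {0, 0, 0, 0};  // one state per filter pole
    double g = 0.1;                  // pole coefficient, derived from cutoff and sample rate
    double k = 2.0;                  // feedback amount (resonance)

    double process(double in)
    {
        // Global feedback from the last stage; each one-pole stage is
        // driven through tanh() to mimic transistor saturation.
        double x = in - k * stage[3];
        for (int i = 0; i < 4; ++i) {
            double input = (i == 0) ? x : stage[i - 1];
            stage[i] += g * (std::tanh(input) - std::tanh(stage[i]));
        }
        return stage[3];
    }
};
```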
Andrew Bencina: How do you start? What are the tools you need to make a good analogue model?
AS: So, apart from having the physical circuit and an oscilloscope, you need an audio interface so you can listen to and record the circuit. An input, kind of like a microphone preamp — something with high impedance. When you plug into different parts of the circuit they need to be buffered so that you’re not leeching current and affecting how things sound. Finally, you need some circuit simulation software. Most of it’s on Windows and there are some free ones. The main program I’ve been using is an open source application called QUCS (Quite Universal Circuit Simulator). It has excellent documentation that runs through the algorithms in a lot of detail, which is perfect. QUCS is great at some things, while other simulators are better at handling other circuits and have better component libraries for reference purposes. Just like audio production, you use whatever tools you have available to get the job done. I don’t think I could have gotten to where I am now without all of these resources.
When electronic circuits were first made, you had resistors and inductors and some capacitors, and people started building things which could process audio. Things started to move to valves and then into solid-state components, like diodes and transistors. Most of the time, people building audio circuits understood the basic operation, the theory, and so could design and construct circuits without simulating anything. You just build them and then listen. In engineering you can build a chair or a table, but it’s not until you build a bridge that you need to start thinking about simulating and checking stress levels. The ‘bridge’ threshold in electronics was integrated circuits.
There was a lot of cost involved — and still is — in making the blueprint, like a master disc, for a chip. If you screw it up then you’ve just lost a lot of money. That’s why they started making these circuit simulators. Those integrated circuits were for analogue tasks as well as digital ones but you couldn’t actually simulate a circuit without a computer. At the same time you can’t build a better computer without the simulator, so progress came directly from the interaction between the two.
For audio, there’s a different set of imperatives: certain behaviours are more important and musically pleasing, so you actually have to go out of your way to add component variation. Even now, manufacturers do their best to try and get a reasonable model going, but it’s just not going to completely capture all the operational variability of the real components. For practical design purposes that’s fine: if you build it, it’s not going to do something unexpected. But audio is all about tone and exactly how it works, not just, ‘Does it work?’
Luckily, most of the large-scale features are already captured by models, so for those it’s just a matter of fine-tuning the parameters. There’s a lot of analogue modelling where people use the brand ‘analogue modelled’ but, really, the detail is lacking. But, just so your readers know, it does actually work. Good software emulations do model, to a very large degree of accuracy, what’s going on in the circuits, and they sound really good. There are, however, a bunch of issues in getting it right, and it does take a lot of CPU. So, basically, if something is running really efficiently, it’s probably not modelling much. Just because you have access to a word processor, you’re not automatically an author. Anyone can fire up a circuit simulator, add all the components and simulate it, but that doesn’t give you a plug-in that is useful, processes with low CPU, is fun to use and sounds good.
AB: What is the thing that separates the great model from the also-ran? How do you make all of those decisions about musical sound at the component level?
AS: There’s two sides to it. Firstly, which components can you actually hear? Half of analogue modelling for me is optimisation: which components can I throw away while still maintaining the core operation of the circuit? Then the other side of it is… ‘Okay, so these components do matter. What are they doing? What are the manufacturing constants of these components and their large-scale operational behaviour? Their high-level capacitance and resistance, their saturation currents?’
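To give a flavour of what those high-level parameters are: a diode, for instance, largely reduces to a saturation current and a thermal-voltage term via the Shockley equation. The values in the sketch below are generic textbook numbers, not measurements from any particular unit.

```cpp
// The kind of high-level parameter being described: a diode's behaviour
// largely reduces to a saturation current Is and an emission coefficient
// times thermal voltage (n*Vt). Illustrative values only.
#include <cmath>

double diodeCurrent(double v, double Is = 1e-14, double nVt = 0.026)
{
    // Shockley diode equation: I = Is * (exp(V / (n*Vt)) - 1)
    return Is * (std::exp(v / nVt) - 1.0);
}
```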
I’m not interested in exactly matching a particular circuit, because if you bought another unit it would be slightly different. I’m interested in capturing the high-level model parameters and then being able to vary them in a natural way, and in simulating the component-to-component variability that comes from manufacture, because that’s an important part of the sound. There are a lot of components in a circuit and none of them are exactly matched. When the audio runs through each of these slightly mismatched components, it all adds up.
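A rough illustration of what simulating that variability can mean in practice: give every nominal value its own small random offset within a tolerance band, so no two instances of the model are built from identical parts. The tolerance figures here are assumptions for the example.

```cpp
// Per-instance component spread: each nominal value gets a random offset
// within its tolerance band, the way real parts differ from unit to unit.
#include <random>

double withTolerance(double nominal, double tolerance, std::mt19937& rng)
{
    std::uniform_real_distribution<double> spread(-tolerance, tolerance);
    return nominal * (1.0 + spread(rng));
}

// e.g. a "10k" resistor at 1% and a "10n" capacitor at 10%:
// double r = withTolerance(10e3,  0.01, rng);
// double c = withTolerance(10e-9, 0.10, rng);
```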
You also have to listen — sometimes with your eyes. Your ears aren’t attuned to low-frequency behaviour so well. They’re really good at the buzzy side of things, but when it comes to low frequencies you can have two versions of a signal, one slightly high-passed and one with no DC blocking on it, and they’ll sound really similar. Look at them on an actual waveform view, on an oscilloscope, and they’ll look completely different. For the high frequencies I do use a spectrum analyser — just to make certain the balance of harmonics is exactly right — but generally you can hear if you’ve captured the right kind of ‘buzziness’ in the top end. One other important area of simulation is noise modelling. That side of things is really computationally expensive. You can do some basic things to add a bit of noise and have it at the right points in the circuit, but it’s really something I don’t think has been done in any great detail by anyone yet.
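As a concrete example of the DC-blocking point: a one-pole blocker like the sketch below barely changes what you hear, yet with and without it the same waveform can sit at very different offsets on a scope. The coefficient is an illustrative value, not taken from any Cytomic product.

```cpp
// One-pole DC blocker: y[n] = x[n] - x[n-1] + R * y[n-1].
// Audibly subtle, visually obvious on a waveform view.
struct DCBlocker {
    double x1 = 0.0, y1 = 0.0;

    double process(double x)
    {
        double y = x - x1 + 0.995 * y1;  // R = 0.995 sets the (very low) cutoff
        x1 = x;
        y1 = y;
        return y;
    }
};
```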
AB: Do you have to be able to build your own circuits?
AS: That’s the thing: it always sounds best that way. When I include a shortcut hack to accomplish a task, typically it sounds bad. If I measure that it’s actually just draining a bit more current from here, I can add a resistor. Too much high frequency there, so put in a blocking capacitor and then it sounds good. I always try to accomplish a task with components. I can cheat a bit: for example, a perfect op-amp which doesn’t saturate, is a perfect buffer and can deliver any amount of current without blowing up. But the majority of the time I’m designing everything as a component in the circuit, so I can build it. Every single one of the models I’ve done includes modifications. Either it’s more convenient in a digital format, or once you’ve identified a musically useful aspect of the circuit it’s nice to be able to parameterise it and perhaps add a knob to control it. On The Glue there’s the Range knob; that’s modelling some saturation in the side-chain. You can’t change the voltage rails on your analogue compressor very easily, but you can in a model — it’s a trivial thing. For the filters in The Drop, the source you’re affecting could be a full mix or just a buzzy sawtooth. Different things are more appropriate in each situation, so I made modifications to the Sallen-Key MS-20 circuit. I’ve redesigned the high-pass circuit to be a two-pole filter, instead of the original’s single pole. It’s quite a big redesign but it could be physically built… and it sounds really good.
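As a purely hypothetical illustration of what saturating a side-chain can look like (not The Glue’s actual Range circuit), the detector signal can be soft-clipped before the gain computer, so gain reduction levels off rather than growing without bound. The tanh curve and the ‘range’ scaling below are assumptions.

```cpp
// Hypothetical side-chain saturation: soft-clip the detector so gain
// reduction flattens out. Illustration of the idea only, not The Glue.
#include <cmath>

double saturatedDetector(double detector, double range)
{
    // 'range' sets where the side-chain starts to flatten out.
    return range * std::tanh(detector / range);
}
```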
AB: If you wanted to run as accurate a model of The Glue circuit as is currently possible would that use up 100% of the resources of a current machine?
AS: It would. When you do circuit modelling for each sample you have to loop over multiple times to converge on the solution. These are non-linear equations which you have to solve using numerical methods and so you can only move onto the next sample once you’ve solved the current one. Due to all the feedback loops and energy storage components, you cannot run this in parallel. If you loaded a full schematic of a bus compressor into something like QUCS, and then tried to process some audio, it would probably take you about 10 minutes to render a second of audio at a reasonable sample rate. I see audio moving towards the approach employed within the 3D industry, where you have a preview mode and a render mode. I already have this in all of my software. You get to pick the amount of oversampling headroom you have in a real-time preview, as you do for an off-line render. This makes a big difference to the audio quality, even for a compressor. I think for really good analogue modelling, that’s the level at which you have to operate. Then instead of the mix just using the same model as the real-time playback, you can step things up and switch to a full circuit simulation of all the necessary components. You still won’t want to use a circuit simulator, because rendering would be glacially slow, but I’m working on automating the process of generating and solving these models so it uses as efficient a solver as possible while still allowing for huge systems of equations. Computers will get faster but people will always want large track counts so there’s always going to be a trade-off.
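To make the ‘loop over multiple times to converge’ point concrete, here is what a per-sample Newton-Raphson solve looks like for a much simpler circuit, a resistor driving a diode pair. It is a stand-in example rather than the bus compressor itself, and the component values are arbitrary.

```cpp
// Per-sample Newton-Raphson solve of an implicit non-linear equation:
// a resistor into an antiparallel diode pair. The output voltage v of
// each sample satisfies (vin - v)/R = 2*Is*sinh(v/(n*Vt)) and must be
// solved before the next sample can be processed.
#include <cmath>

double clipSample(double vin, double vPrev)
{
    const double R = 2200.0, Is = 1e-12, nVt = 0.0517;  // illustrative values
    double v = vPrev;                        // warm start from the last sample
    for (int i = 0; i < 50; ++i) {
        double f    = (vin - v) / R - 2.0 * Is * std::sinh(v / nVt);
        double df   = -1.0 / R - (2.0 * Is / nVt) * std::cosh(v / nVt);
        double step = f / df;
        v -= step;                           // Newton-Raphson update
        if (std::fabs(step) < 1e-9)          // converged to the solution
            break;
    }
    return v;
}
```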
AB: In recent years we have seen a shift towards a lot of new hardware designs, perhaps in part fuelled by the broad adoption of standards like the 500-series rack in audio and the Eurorack format in modular synthesis.
AS: There are a lot of boutique analogue places opening up, which I think is brilliant. The number of Eurorack modules, produced by people like Tip Top, incorporating DSP elements is cool, and the integration of computers into modular systems via audio-rate control is really positive. Hopefully, some of that energy and development can be brought to DSP land. Not just in terms of directly modelling their work, but by making new models of circuits that have never been built. There’s far more scope to do that and avoid all the production overheads of physically manufacturing the circuit. The beauty, of course, being that you can have all kinds of variations and modifications available in a single plug-in. I can’t predict the future but I know it’ll move beyond the possibilities of analogue circuitry. Different things can happen because of what computers are and the level of control and recall you have. Everything is just another number, so there are physically unrealistic situations that are quite reasonable within a model. You can’t suddenly unsolder and resolder a component during a performance, but you can automate a model to change components every 16th beat. You can’t put a knob on a manufactured component and tweak its tolerances. You can’t do any of this stuff; it’s just set in stone. But everything is just a number on a computer.
The tone of analogue modular systems is kind of alive, distorted and gritty. And while a digital model can only reproduce some of this, you’re able to control that system in very different ways: the noise, the tone. Every part of that model is optional. In audio, if it sounds good and you like it then it’s good; I mean, that’s just all there is. But I’d never enforce upon a user that their signal will be distorted at mix-down; it should be a choice. The Glue is a pristine version of a bus compressor where the stereo image is perfect. The only way you’d achieve this in hardware would be to process your left and right signals separately through the same channel and then combine them. This is the level of precision computers can offer.
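A small sketch of what that digital precision buys: one shared detector and one gain value, applied identically to both channels, so the stereo image cannot drift. The hard-knee threshold and ratio numbers below are illustrative assumptions, not Cytomic code.

```cpp
// Stereo-linked gain: both channels share a detector and receive exactly
// the same gain, so the image stays perfectly centred.
#include <algorithm>
#include <cmath>

void processLinked(double& left, double& right)
{
    const double threshold = 0.5, ratio = 4.0;   // illustrative settings
    double detector = std::max(std::fabs(left), std::fabs(right));
    double gain = 1.0;
    if (detector > threshold)
        gain = (threshold + (detector - threshold) / ratio) / detector;
    left  *= gain;   // identical gain applied to
    right *= gain;   // both channels, always
}
```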
AB: But as a software developer you’re still buoyed by the same spirit that is driving this new wave in analogue?
AS: Yeah, definitely. I’m excited by it and get inspired by having all these cool little things to play with. It just makes music more fun. Hopefully, I’ll be contributing analogue circuits to model as well. In doing The Drop I’ve probably got 10 different filter ideas that I could build in a circuit, model and then implement in both hardware and software. Giving musicians access to good-sounding tools is a really positive thing — whether it’s done through a model or a physical unit. It’s good to have both. It wasn’t the people inventing the sampler that made new musical movements happen. The musicians made it happen. I think that’s just how the evolution of ideas works. It’s always this feedback loop; like me doing VReorder. Coil and Danny Hyde remixed Nine Inch Nails’ track ‘Gave Up’, doing all of these intricate manual vocal edits, and I thought, ‘That sounds cool. Let’s take that sound and do something more with it.’ Hopefully VReorder inspired people to write more non-linear processing units, making it easier to achieve these musical tasks. Once you’re no longer stuck in a wave editor with a pair of scissors, you can do more things more easily, with a higher level of control, and that opens up more options for musicians who then go on to create new sounds.