The World May Be A Madhouse, My Life May Be Disorganized, But My Deliveries Are Pristine.
- Xiao'an Li (李晓安)
- 6 days ago
Generative AI bros will tell you that the craft of composing music is something best automated, so that we can focus on the ideas. Interestingly enough, ideas are like assholes - you'll find a limitless supply of them wherever generative AI bros are gathered.
However, the bros are correct in some respects, in that there are aspects of music creation in the digital world that can be made much more efficient. The solution in this case is not AI, but rather, organizational systems that cut out unnecessary setup and rendering time.
This is no different from a painter having their colors and materials organized, or a carpenter having their fasteners sorted into cute little drawers so they know where to find them and their workshop isn't a hellhole.
Summary of goals:
(Write Fast) Write as fast and with as few interruptions as possible.
(Render Mixes) Render acceptable demo-level mixes as quickly as possible.
(Deliver Assets) If an engineer is involved, deliver them neatly organized assets that are easy to work with.
Common impediments to the above goals (in order):
(Writing Fast) Treating the creative process on every project as a matter of guesswork, leading to long setup times, unnecessarily duplicated instruments (RAM hoarding) with haphazard bus assignments, and excessive use of plugins as inserts on individual instruments where bus processing will do (CPU hoarding).
(Rendering Mixes) Mixing in small movements "as you go" is a surefire way to compound otherwise simple issues. For example, if you excessively EQ or compress an element or adjust its levels while you are writing, you may find that it sounds good on its own but that it behaves weirdly in the context of the rest of the instrumentation.
(Delivering Assets) If you haven't already set up an export protocol (separated out by stems/multitrack, etc.) by the end of the project, you're faced with a pile of crap you have to mix after exhausting yourself creatively. God forbid you have to send that to a mix engineer, because they will most definitely have an aneurysm trying to make sense of VIOLIN2COPYSHORTSOFT.
We're going to very quickly go through some solutions that have worked for me from an orchestral perspective, using Logic Pro, but this works well for any standard instrumentation or DAW you might use on a regular basis - big band, rhythm section, and so on.
The organizational principles also apply if you are building a sonic palette from scratch on a project.
Fig.1 Organization Of Instruments And Groups
Place individual instruments in a track stack or summing folder organized by minor section (e.g. Flutes), which is in turn summed to a major section bus (e.g. Woodwinds). In this picture, there are 4 subsections (Flutes, Oboes, Clarinets, Bassoons) that are all routed to 1 section (Woodwinds), which is then routed to a Full Orchestra bus.
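If it helps to see the summing hierarchy outside the DAW, here is a minimal sketch of it as a tree, with a helper that traces an instrument's signal path. The instrument and bus names are illustrative placeholders, not an exhaustive template.

```python
# Hypothetical sketch of the summing hierarchy described above:
# instrument -> minor section bus -> major section bus -> orchestra bus.
ROUTING = {
    "Orchestra": {
        "Woodwinds": {
            "Flutes": ["Flute 1", "Flute 2"],
            "Oboes": ["Oboe 1", "Oboe 2"],
            "Clarinets": ["Clarinet 1", "Clarinet 2"],
            "Bassoons": ["Bassoon 1", "Bassoon 2"],
        },
    },
}

def signal_path(tree, instrument, path=()):
    """Return the chain of buses an instrument's signal passes through."""
    for bus, children in tree.items():
        if isinstance(children, list):
            if instrument in children:
                return path + (bus,)
        else:
            found = signal_path(children, instrument, path + (bus,))
            if found:
                return found
    return None

print(signal_path(ROUTING, "Flute 1"))  # ('Orchestra', 'Woodwinds', 'Flutes')
```

The point of the tree shape is that any mix move made on a minor section automatically rides up through its major section bus, which is exactly why bus processing beats per-instrument inserts.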
In the arranging window, it should look something like this.

Fig.2 Routing Of Instruments And Groups
Here is what the above Flute section looks like in the mix window. (The yellow arrows show routing - Flute 1 is routed into Bus 6, which is set as the input of the Flutes bus. The Flutes bus is in turn routed into Bus 30, which is set as the input of the Woodwind bus. The exact bus numbers do not matter.)

Fig.2 Arrows - Explanation
A. I have a compressor set up on each major section bus that is bypassed. When I reach the mix stage, this allows me to save a few seconds searching for the plugin. Typically, I use this plugin to gently tame the most egregious volume spikes in that section, taking care not to be too heavy-handed.
B. Each minor section bus has an EQ on it that I use to shape the general character of that section, as well as lightly correct what are usually midrange buildups. Again, fairly subtle moves overall.
C. This is time-consuming, but each individual instrument also has an EQ that I use to filter out unnecessary low frequencies while also targeting egregious resonances - the latter is something I do much less of these days as I have found that such micromanagement is unnecessary and best left to a real engineer if the project has the budget for one. Some resonances are simply part of how an instrument sounds - an engineer is best equipped to decide if that is good, or bad.
D. I use VirtualSoundStage to apply early reflections and set the general spatial positioning of each minor instrument section (Inspirata, which I don't like, and MIR serve similar purposes). This is very subjective and depends on how you like to work and what types of libraries you have.
E. I'll cover this later in the article, but basically this allows me to "record" the Flutes bus output onto an audio track to later export it separately from the rest of the orchestra - sometimes a client or an engineer will want an individual instrument section for music editing or mixing purposes later on.
F. I apply a reverb tail without early reflections (as a send) on the major instrument sections, which allows me to customize the amount of reverb I apply to each section. However, I don't do this a great deal.
G. Similar to E, this allows me to "record" the Woodwinds bus output onto an audio track to later export it separately from the rest of the orchestra. Depending on the engineer's preferences, sometimes this level of granularity is sufficient to work with to deliver a mix.
H. All major instrument sections are routed to the "Orchestra" bus, without reverb, which is on a separate channel.
Fig.3 Organization and Routing Principles At Scale
Barring a few minor variations (I process Low Percussion differently from the rest of my percussion and set its output directly to the Orchestra Sum, for example), the principles hold constant for the rest of the instrumental sections. (Apologies for the small image)

Fig.4 Reverb and Instrument Summing

Fig.4 Arrows - Explanation (See Fig.2 F & H)
A. I have 2 Hall reverbs with different characters/personalities - I default to the first, which is a concert hall, but sometimes switch to the second, which is a scoring stage, if I think it will work better for the specific track it's on.
B. I use an EQ to keep things neat and to prevent the reverb from clouding my mix. I make some fairly aggressive moves here, but it's important to remember not to destroy the character of your reverb while trying to fix problems.
C. A very optional bypassed compressor that I turn on when I have a particularly dense mix. It is sidechained to the dry Orchestra signal, using a fast attack and medium release. It lightly compresses the wet reverb signal when the Orchestra is playing above a certain level, allowing the reverb to "get out of the way" when the Orchestra is busy, while preserving a nice ambience when the Orchestra signal comes back down. This way, I can set higher reverb levels than I ordinarily would, without making the mix too muddy.
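For the curious, the ducking behavior described above can be sketched in a few lines: the dry orchestra's level drives gain reduction on the wet reverb, clamping down fast and recovering more slowly. All the numbers here (threshold, depth, attack/release coefficients) are made up for illustration, not settings to copy.

```python
# Hypothetical sketch of sidechain ducking: when the dry orchestra
# envelope exceeds `threshold`, the wet reverb's gain moves quickly
# toward `depth` (fast attack); otherwise it recovers slowly toward
# unity (medium release). Per-sample smoothing coefficients, 0..1.
def duck_reverb(dry_env, wet, threshold=0.5, depth=0.25,
                attack=0.6, release=0.05):
    gain, out = 1.0, []
    for level, sample in zip(dry_env, wet):
        target = depth if level > threshold else 1.0
        coeff = attack if target < gain else release
        gain += coeff * (target - gain)       # smooth toward the target gain
        out.append(sample * gain)
    return out

# A loud passage (1.0) ducks the reverb; silence lets it swell back.
dry = [0.0] * 3 + [1.0] * 5 + [0.0] * 5
print(duck_reverb(dry, [1.0] * len(dry)))
```

The asymmetry between attack and release is the whole trick: the reverb vanishes almost immediately when the orchestra hits, but blooms back gradually in the gaps.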
D. This allows me to "record" the Reverb bus output onto an audio track to later export it separately from the rest of the orchestra. A client, editor, or an engineer, can use this to independently adjust the amount of reverb in the track to their taste.
E. All orchestra and orchestra reverb elements are summed onto the Orchestra Sum bus. I use this to keep orchestral elements separate from any vocal/synth or other elements that I might prefer to process separately, with their own reverb and so on.
Fig.5 & 6 Stem Exporting
These are the audio tracks that I use to export my stems (in a "STEMS" summing folder). Refer to Fig.2 "G" for basic routing information. When I need to create a set of stems, I record-enable the audio tracks I want, and press record. This creates a set of audio regions that I can then export.

After the audio is recorded, I select the regions I want to export, right click on them, and select "Export as Audio File(s)". Before exporting, you can also use "Name Regions By Tracks" to remove any unnecessary filename alterations your DAW has made.
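If your DAW still sneaks junk into the exported filenames, a tiny script can tidy them before delivery. The suffix patterns below ("#01", "_bip", trailing ".1") are assumptions about what a DAW might append; swap in whatever yours actually produces.

```python
import re
from pathlib import Path

# Hypothetical post-export cleanup: strip DAW-appended suffixes from
# exported stem filenames. The patterns are assumptions - adjust them
# to match what your DAW actually appends.
SUFFIXES = re.compile(r"(#\d+|_bip|\.\d+)$")

def clean_stem_name(filename):
    """e.g. 'Woodwinds#01.wav' -> 'Woodwinds.wav'."""
    path = Path(filename)
    return SUFFIXES.sub("", path.stem) + path.suffix

print(clean_stem_name("Woodwinds#01.wav"))  # Woodwinds.wav
```

Run something like this over the export folder and your engineer gets "Woodwinds.wav" instead of a VIOLIN2COPYSHORTSOFT situation.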

Fig.7 Multitrack Exporting
