Music Tech Nerd Talks

I have some friends who get off on tech stuff. When listening back to things they actually think about tone and instrumentation and blah blah eq compress threshold square wave side chain blah blah. This is that nerd out.
Vocal recording was done in a bedroom, possibly the worst location to record vocals. In the past I got some interesting results by cranking the gain and recording with the microphone sitting on a chair away from me. The result was noisy to begin with given my older equipment, but it had more life in it. The other time I got a good sound in a bedroom was during a period when I didn’t have carpet down. This time though I went straight into my Sennheiser e835 (through a spit-moist pop screen), and enlivening it with reverb and effects was done in post. I do have an agenda with that.

This time around I’m treating each song differently. A typical metal/rock band will get one tone that works across an album. Pop and electronic music will have a different vibe on each song, even within parts of a song. If the rest of the song will be balanced to a different location, it makes sense to work with the vocals in that (digital) space as well.
I used to plug through a filthy 8-track mixer into a computer mic port. These days I’m using a Focusrite 2i2 to get the job done. It’s noiseless and lifeless, and provides a clarity on the way in and out that I’m just not used to. I normally work off natural dirt in mixes, but with my current set-up I’m inserting all of that noise, distortion and feel myself.
Vocals were pretty heavily compressed, after automation, through the vintage optical compressor plug-in that comes standard in Logic. That became my go-to compressor on most things. If anyone knows how to get anything workable from the standard “platinum” compressor, let me know, because I found that thing useless.
“Does doing vocals hurt?” “No. It’s just practice.” “Does doing vocals hurt when you haven’t practiced in two years?” “Yes.”
My brother drinks milk when doing vocals. Old death metal trick. I can’t do that with screaming though because I get mucus-y, which means I spit a fair amount (I did discover that eating red curry prior to vocals helped though, almost the complete opposite of the milk trick). So I just suffer and try not to push my throat. My head though I can push. Headaches from screaming last the rest of the day, which sucks. I should really practice.
And of course my other vocals were provided by Hatsune Miku. She’s pretty badass. Her range is apparently not as good as other Vocaloids’, but the tones I was after were all there. Within the plug-in’s studio space the delay becomes really annoying to work with, but you get used to it. A bit like when you hit play in the middle of a MIDI file and it comes out wrong, except here it’s weird lag instead. Either way, you hit play and don’t get what you want 10% of the time, and it’s annoying. Not Miku-chan’s fault though.
The tones I went for were based on a few of the built-in parameters. The first is called Gender, but what it actually does affects something more like age. Around “44” you get the squeaky 16-year-old Miku that has been popularised everywhere, while over “66” you get an older voice. Adding Breathiness to that voice and playing with the phrasing, parts of it sound somewhat French.
Of course you can’t get around the Japanese accent. I did try to fix things at first, feeling a bit racist about how I was letting things come through, but the vocals are by a Japanese person, and Japanese doesn’t differentiate the sounds of L and R, along with some other things. It means there are occasionally “crowds” not “clouds”, and there’s a line about “ringing” to the edge instead of “clinging”.
The earlier-mentioned Focusrite is my output to my Shure SRH440 (entry-level home recording) closed-back headphones. While they don’t have a lot of character, the fact that I can hear everything clearly is weird in itself, and, as I have in the past, I accepted that some things will only sound as intended when the low end comes through on nicer speakers and headphones. I have compensated so that nothing is missing on cheaper speakers though. At least not atmospherically.
The last thing the Focusrite does is DI guitar tracks. I haven’t used any live guitar sounds, and I didn’t even use my nicest guitar. Same principle as the vocals: use a dead, lifeless signal. Passive EMG bridge pick-up in a single-cut Epiphone. Nothing to write home about.
I did this on the Fnord EP as well, but on that I was using Guitar Rig, which from memory seems infinitely better than the Logic Amps and Pedalboard I used on this. I get no presence from the guitar unless I do some weird shit you would never do with real guitars. One instance is that I use a single, centred guitar track on Significance of a Unit of Measure. The only other guitar I’m really happy with is in the noisy parts. And maybe Final Thoughts, because it comes off well enough, even if it wouldn’t on any other song and I probably fluked it.
On those effects, I actually did find a good use for the amp processor: vocals. Grainy, distorted, distant vocals through it come off much better than any shitty “radio” preset or pedals and other means (though I did some of that as well). It was done more on the electronic tracks than the others, and most of my grim-verb was just reverb and tape delay (plugin).
I got into this knowing very little about synths. On the Fnord EP I used a Korg DS-10 (an emulator of the Korg MS-10 for the Nintendo DS), which at least was aesthetically familiar after using a mixer to do feedback loops. The synths in Logic are very different though. I started from the bottom up, trying presets and watching tutorial videos. I think I can competently use and understand synths on some basic level now. At least to the point where I created all of my own patches for synth sounds, while for the piano and string emulation I tweaked presets.
The basic idea behind me writing with synths is this: I own a fucking big CME UF60 keyboard but don’t know how to play piano. My workaround is finding chords I like, matching those with notes in the same key, and creating variations with automated arpeggios. This got me pretty far. It ends up looking a bit crazy when I have sticky-tape notes on the keyboard like a child learning, but it got the job done. I only actually “play piano” on Goddess.
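For anyone who wants the nerd version of that workflow, here is a minimal Python sketch of the chords-to-arpeggio idea. The key, chord and note numbers are placeholders made up for illustration, not anything lifted from the EP.

```python
# Sketch of the chords-first workflow: pick a chord, keep the arpeggio inside
# the same key, and spit out a repeating pattern of MIDI note numbers.
# Key, chord and pattern length are illustrative placeholders.

C_MINOR = [0, 2, 3, 5, 7, 8, 10]   # scale degrees as semitone offsets
ROOT = 48                           # C3 in MIDI

def scale_notes(root, degrees, octaves=3):
    """All in-key MIDI notes across a few octaves."""
    return [root + 12 * o + d for o in range(octaves) for d in degrees]

def arpeggio(chord, key_notes, steps=8):
    """Walk upward through chord tones, snapping each note to the nearest in-key note."""
    pattern = []
    for i in range(steps):
        note = chord[i % len(chord)] + 12 * (i // len(chord))
        note = min(key_notes, key=lambda k: abs(k - note))
        pattern.append(note)
    return pattern

chord = [ROOT, ROOT + 3, ROOT + 7]  # C minor triad: C, Eb, G
print(arpeggio(chord, scale_notes(ROOT, C_MINOR)))
```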
The drum space is much more familiar. I saw the user-friendly drum shit Logic has and scrapped all of that for Ultrabeat (the only thing worse is pre-made loops, which people pay money for … for wav loops … what the fuck?). Ultrabeat is less aimed at real drums and more at electronic drums, and after using a DR880 for years it plays in a familiar way. You can still edit in the piano roll later, editing sounds and kits is simple enough, and you can get all of your layers onto the mixer with multi-output while they stay in the same space on the timeline and in the piano roll. Not sure what I would use multi-output for other than drums though.
I did use the DR880 through Ultrabeat for some electronic drum writing. It would probably be no more difficult without it, but the convenience and familiarity made life easier. The time the DR880 came into its own was when I repeated a bad idea I’d had previously: I would “play drums” with my fingers, recording the hands in one take and the bass drum in another. It worked well enough, and with a bit of fixing up and some unifying of kick drum levels it created a better black metal and doom feel than humanising would have. I feel like if you humanise you fall short, because it still wants your drummer to be tight. I wanted a properly boozed-up shit head behind the kit.
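If you wanted to do that kick-unifying pass outside the DAW, something like this Python sketch gets at the idea. It uses the mido library, a hypothetical file name, and assumes the kick sits on MIDI note 36; the bass drum velocities get pulled toward one level while the hands stay as played.

```python
# Pull finger-drummed kick velocities toward a single level, leave everything
# else alone. File name, target level and amount are made-up values.
import mido

KICK_NOTE = 36          # assumed General MIDI kick
TARGET_VELOCITY = 112
AMOUNT = 0.8            # 0 = leave as played, 1 = fully flattened

mid = mido.MidiFile("finger_drums_take.mid")   # hypothetical take
for track in mid.tracks:
    for i, msg in enumerate(track):
        if msg.type == "note_on" and msg.note == KICK_NOTE and msg.velocity > 0:
            new_vel = round(msg.velocity + (TARGET_VELOCITY - msg.velocity) * AMOUNT)
            track[i] = msg.copy(velocity=max(1, min(127, new_vel)))
mid.save("finger_drums_unified.mid")
```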
I feel like my approach to mixing is conservative, and my approach to mastering is being a shit head, so I’m not sure they’re worth detailing too much.
Some important shit I did pick up along the way though: think about placing your vocal level near where you place the snare. It’s weird, but it makes a good starting point, although my vocals are evidently louder than on most of what I’ve done before.
Linking a punchy kick to a test oscillator sub tuned around your bass tone was helpful in some cases.
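Roughly, that means a sine sitting near the bass root that only opens up when the kick hits. Here is a numpy sketch of that shape; the kick times, the 55 Hz fundamental and the decay are made-up numbers, and in practice it was just a test oscillator with a gate/sidechain rather than anything rendered by hand.

```python
# Kick-linked sub: a sine near the bass fundamental, gated by an envelope that
# jumps to 1 at each kick and decays away. All numbers are illustrative.
import numpy as np

SR = 44100
FUNDAMENTAL = 55.0                    # A1-ish, near the bass tone
KICK_TIMES = [0.0, 0.5, 1.0, 1.75]    # seconds, pretend kick hits
DECAY = 0.15                          # seconds for the gate to fall away

length = int(SR * 2.0)
t = np.arange(length) / SR
sub = np.sin(2 * np.pi * FUNDAMENTAL * t)

env = np.zeros(length)
for kick in KICK_TIMES:
    idx = int(kick * SR)
    tail = np.exp(-(t[idx:] - kick) / DECAY)
    env[idx:] = np.maximum(env[idx:], tail)

sub_track = 0.5 * sub * env           # mix this under the kick and bass
```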
One thing I did not follow at all but which makes complete sense is to organise like an orchestra, where you place things in a way almost like you would live, and increase distance with reverb. I was doing too much shit that I wanted very little reverb on though.
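For what it’s worth, the orchestra idea boils down to something this simple: give each element a stage position and let the level and reverb send fall out of how far back it sits. The numbers in this toy Python sketch are arbitrary, not values from this mix.

```python
# Stage-placement toy: pan comes from left/right position, level drop and
# reverb send come from depth. Scaling factors are arbitrary illustrations.
def stage_placement(pan_position, distance):
    """pan_position: -1 (left) .. 1 (right); distance: 0 (front) .. 1 (back)."""
    level_db = -6.0 * distance                  # further back = quieter
    reverb_send_db = -24.0 + 18.0 * distance    # further back = wetter
    return {"pan": pan_position, "level_db": level_db, "reverb_send_db": reverb_send_db}

print(stage_placement(-0.4, 0.7))   # e.g. strings, back-left
```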
Trying to drop frequencies when I wanted to unmuddy parts works amazingly well. Intuitively I would boost the element I wanted to hear, but the opposite is quite effective for some reason.
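In code terms the move is a cut in the low mids of whatever is crowding the part you want, rather than a boost on the part itself. A rough scipy sketch follows; the 200–450 Hz band and the band-stop shape are assumptions for illustration, and a real EQ move would be a gentler few-dB dip rather than a full cut.

```python
# Subtractive EQ sketch: carve a dip in the low mids of a competing track
# instead of boosting the element you want to hear. Band edges are assumed.
import numpy as np
from scipy.signal import butter, sosfilt

SR = 44100
LOW, HIGH = 200.0, 450.0   # typical mud range, picked for illustration

# gentle 2nd-order band-stop over the muddy range
sos = butter(2, [LOW, HIGH], btype="bandstop", fs=SR, output="sos")

x = np.random.randn(SR)    # stand-in for the offending track's audio
unmuddied = sosfilt(sos, x)
```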
Don’t be a dickhead with the limiter or exciter.