Vocal Effects of GLOWSTONE – LIGMA

Creating music that truly stands out can be a complex process, but understanding the techniques used by successful producers can make all the difference. We’re going to take a closer look at the production techniques used on GLOWSTONE’s hit track “LIGMA”, examining how the vocal effects were used to create a unique, distorted vibe while still allowing the performance to shine through. We’ll also delve into the backgrounds of the producers behind the track, Sober Rob and Madi Makes Musica, who formed GLOWSTONE as a couple. With a combination of powerful vocals, heavy distortion, and infectious beats, GLOWSTONE makes some great music.

Album artwork for EP "I" by GLOWSTONE outpainted by DALL-E
Album artwork for EP “I” by GLOWSTONE outpainted by DALL-E

The vocals are the king of a mix and must be treated as such. Processing the vocals in your track is one of the most important steps to creating music that slaps. So when I’m making decisions about how to process the vocal effects in my tracks, it’s useful to see how great producers handle the task.

Today we are looking at the production techniques of GLOWSTONE’s “LIGMA”, and specifically going into detail on how the vocal effects blend GLOWSTONE’s voice into the distorted vibe of the track while still allowing the performance to soar above the mix.

Album artwork by GLOWSTONE outpainted by DALL-E

Meet the Producers

LIGMA was produced by Sober Rob and Madi Makes Musica, who formed GLOWSTONE as a couple. Both can be found on SoundCloud and Spotify.

Madi, who “writ[es] songs about brain :):”, and Rob, who’s been known as “[…]being [a] boundless creative being[…]”, team up to combine great vocals with heavy distortion and amazing beats.

The Effect Chain

Microphone

For vocals to sound good distorted, they should sound good recorded. The vocals in the most distorted sections are captured very close to the source, allowing Madi to whisper into the mic, similar to how ASMR might be recorded. When the vocals transition to singing, the mic placement moves a little further back to allow for more dynamic range in the recording. Some harmonies also seem to be recorded from even further back, to layer in the vocals during the chorus.

EQ

Equalization, or EQ, is a crucial element in the production process, as it helps shape the sound and balance the frequencies within the song. In Ligma, Sober Rob and Madi Makes Musica have employed various EQ techniques to create a unique sonic landscape that complements the track’s distorted nature while maintaining clarity and distinction between the different elements.

For Madi’s vocals, it’s essential to maintain the intelligibility and character of her voice while still allowing for the heavy distortion. To achieve this, a high-pass filter is applied to remove any unwanted low frequencies that may muddy the mix, usually below 80-100 Hz. A slight boost in the high-mid frequencies, around 2-5 kHz, helps add clarity and presence to her voice. Additionally, a gentle high-shelf boost, starting around 10 kHz, adds some air and sparkle to the vocal performance. This EQ approach ensures the vocals cut through the mix without being too harsh or piercing.
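If you want to experiment with these moves outside a DAW, here’s a minimal Python sketch of that kind of vocal EQ chain, assuming a mono vocal already loaded as a numpy array `vocal` at sample rate `sr`. The filter recipes follow the standard RBJ “Audio EQ Cookbook” formulas; the exact frequencies and gains are illustrative guesses, not GLOWSTONE’s actual settings.

```python
# Illustrative vocal EQ sketch (not GLOWSTONE's actual chain): high-pass the
# rumble below ~90 Hz, then add a gentle presence boost around 3-4 kHz.
import numpy as np
from scipy.signal import butter, sosfilt, lfilter

def highpass(signal, sr, cutoff=90.0, order=2):
    """Remove mud/rumble below roughly 80-100 Hz."""
    sos = butter(order, cutoff, btype="highpass", fs=sr, output="sos")
    return sosfilt(sos, signal)

def peaking_boost(signal, sr, freq=3500.0, gain_db=3.0, q=1.0):
    """RBJ peaking biquad: clarity/presence boost in the 2-5 kHz range."""
    A = 10 ** (gain_db / 40)
    w0 = 2 * np.pi * freq / sr
    alpha = np.sin(w0) / (2 * q)
    b = np.array([1 + alpha * A, -2 * np.cos(w0), 1 - alpha * A])
    a = np.array([1 + alpha / A, -2 * np.cos(w0), 1 - alpha / A])
    return lfilter(b / a[0], a / a[0], signal)

# vocal = highpass(vocal, sr)
# vocal = peaking_boost(vocal, sr)
# a gentle high-shelf around 10 kHz ("air") would follow the same biquad recipe
```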

In the final stages of EQing the mix, Sober Rob and Madi Makes Musica would have carefully considered how each element sits together. Any problem frequencies that create harshness, muddiness, or conflict between the elements would have been addressed using surgical EQ techniques, such as notch filtering or narrow boosts and cuts.

By thoughtfully applying these EQ techniques, the producers of Ligma have achieved a well-balanced and cohesive mix that showcases the unique blend of distorted and melodic elements, while allowing Madi’s vocal performance to shine through.

Compression

To create the distorted soundscape, the instrumental backing of the track is heavily compressed, which glues the mix together. Side-chaining helps the big 808s punch through the mix by ducking the rest of the audio until the kick has finished punching you in the gut. Distorted hi-hats and trumpet- or trombone-like synths flare with punch and then compress back down to let the 808s back in throughout the bulk of the track, which is a major component of the song’s energy.

Distortion

It seems like every element of this track has some amount of distortion applied. To my ear, it seems like most of the audio has been wave-shaped into loud square waves, bringing lower toned elements into the upper register. Madi’s vocals feature a blend of clean and distorted sections that contribute to the track’s dynamic and emotive quality. The distorted sections are achieved by using saturation, overdrive, or dedicated distortion plugins to add harmonic richness and warmth to her voice. This processing can be applied subtly to enhance the natural harmonics of her voice, or more aggressively to create a gritty and powerful effect that complements the distorted instrumental elements.

The instrumental elements of Ligma heavily feature distortion. This may be achieved by using a variety of distortion techniques, such as analog-style saturation, digital clipping, and waveshaping. Each type of distortion imparts a distinct sonic character that contributes to the overall texture and intensity of the track.

Analog-style saturation, which emulates the sound of overdriven hardware, adds warmth and harmonic richness to the synths and bass. Digital clipping introduces a more aggressive and edgy character, which works well with the track’s heavy and intense atmosphere. Waveshaping, which involves manipulating the waveform of a signal, allows for more creative and experimental distortion effects that can contribute to the track’s unique sound.
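To hear the difference between these three flavours for yourself, here’s a toy Python sketch, assuming `x` is a mono numpy array scaled to the -1..1 range. The drive and ceiling values are arbitrary, not taken from the track.

```python
import numpy as np

def analog_saturation(x, drive=3.0):
    """Soft, tape/tube-style saturation: a smooth tanh curve adds warm harmonics."""
    return np.tanh(drive * x) / np.tanh(drive)

def digital_clip(x, ceiling=0.4):
    """Hard clipping: flat-tops the waveform for an aggressive, edgy character."""
    return np.clip(x, -ceiling, ceiling) / ceiling

def square_shaper(x, drive=20.0):
    """Extreme waveshaping that pushes almost any tone toward a square wave."""
    return np.tanh(drive * x)
```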

It’s essential to balance the levels of distortion in the mix, ensuring that the heavily processed elements do not overpower the more subtle and melodic aspects of the track. This can be achieved through careful gain staging, EQ, and compression, as well as adjusting the blend of wet and dry signals for each processed instrument.

The Song

Get Producing

Take the lessons of LIGMA and apply them to your own vocal effect chains. Personally, I may go further with some distortion in my mixes, but maybe a new compressor plugin is your take-away. Let me know in the comments if you are using any of these techniques in your tracks. Or learn more audio engineering techniques here.

Thanks for reading ✌

GPT-3 can Create Incredible AI Image Prompts

What is GPT-3?


GPT-3 (Generative Pre-trained Transformer 3) is a large-scale language model developed by OpenAI. It is a deep learning-based natural language processing (NLP) model that uses unsupervised learning to generate human-like text. GPT-3 is trained on a massive amount of text data from the internet and is designed to predict the next word in a sentence. It can also generate short passages of text, which can be used for summarization and question-answering. GPT-3 is one of the largest and most powerful language models ever created and is capable of understanding, generating, and manipulating natural language.

GPT-3 can be prompted to generate many realistic answers to chat-based input, including responding in poem form, with factual information, and with usable code. There is still a long way to go before the model is perfect: its poems aren’t great, it will often confidently state incorrect facts, and it will provide code that doesn’t run. Despite these shortcomings, GPT-3 is a huge stepping stone in the direction of incredible AI tools to improve society.

Learn more by visiting their site: OpenAI

How can GPT-3 create incredible AI Image Prompts

GPT-3 can be prompted to output a set of words that describe an image, similar to how we would write an image prompt ourselves. Although GPT-3 has no idea which words the image model actually knows, the vast dataset behind Stable Diffusion means we’re often able to create great-looking art.

In this example, we’ll be using MitchJourn-E to create the Stable Diffusion images and the beta app for GPT-3 Chat.

To get GPT-3 to create these prompts, I’ll start by specifying “Using only nouns and adjectives”. That should eliminate any connecting words that aren’t as important in image prompting. Then I’ll ask it to “describe an incredible work of art”, which will start returning output to plug into our image generation apps. Sometimes it will only output a few words, so I’ll also add something like “with 50 words” or “with as many words as possible” at the end to coax out more useful words.
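If you would rather script this than use the chat app, here’s a rough sketch using OpenAI’s (legacy) completions API in Python. The model name and parameters are assumptions for illustration; the examples below were generated through the beta chat interface.

```python
import openai

openai.api_key = "YOUR_API_KEY"  # assumed placeholder

response = openai.Completion.create(
    model="text-davinci-003",  # assumed model; any GPT-3 completion model works
    prompt=(
        "Using only nouns and adjectives, describe an incredible and "
        "detailed painting of a landscape with as many words as possible."
    ),
    max_tokens=200,
    temperature=0.9,  # higher temperature gives more varied imagery
)

print(response["choices"][0]["text"].strip())
```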

Examples:

GPT-3 prompt:

Using only nouns and adjectives, describe an incredible and detailed painting of a landscape with as many words as possible.

Output 1:

Majestic mountains tower over a sprawling, verdant valley. A crystal-clear river winds its way through the landscape, cascading over rocks and boulders as it flows towards a distant, azure sea. Wildflowers dot the rolling hills, and a single, graceful tree stands tall on a grassy knoll, its leaves rustling softly in the gentle breeze. The sky is a deep, rich blue, with fluffy white clouds drifting lazily overhead. The whole scene is bathed in the warm, golden light of the setting sun, casting long, surreal shadows across the landscape. It is a truly breathtaking and awe-inspiring sight.

GPT-3 prompt:

Using only nouns and adjectives, describe an incredible and detailed painting of a landscape with 50 words.

Output 2:

Mountains, cliffs, sky, clouds, sun, trees, grass, river, rocks, waterfalls, wildlife, flowers, colors, beauty, serenity, majesty, vastness, depth, perspective, realism, skill, talent, artistry, wonder, amazement, awe, admiration.

Output 3:

Vibrant, colorful sky stretches above rolling hills and tranquil lake. Majestic mountains tower in the distance. Lush, verdant forests and fields dotted with wildflowers. Sun shining, birds singing. Tranquil scene captured with stunning detail and precision.

GPT-3 prompt:

Using only nouns and adjectives, describe an incredible and detailed abstract painting inspired by a landscape with 50 words

Output 4:

Vibrant hues, sweeping brushstrokes, and majestic peaks form an awe-inspiring abstract landscape painting. Lush forests, sparkling waters, and towering mountains are captured in a mesmerizing display of color and texture. A sense of majesty and wonder permeates the artwork, drawing the viewer into a world of beauty and inspiration.

Additional Prompt Info

Negative prompt:

lowres text error cropped worst quality low quality normal quality jpeg artifacts signature watermark username blurry artist name deformed disfigured poorly drawn out of focus censorship amateur drawing bad art poor art messy drawing portrait person people text signature watermark frame framed display

Steps: 50

One of the seeds: 902193268

Width: 640

Height: 384

Prompt weight (CFG scale): 7.5

Sampler: k_euler_a
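If you don’t have MitchJourn-E handy, roughly the same settings can be reproduced with the Hugging Face `diffusers` library. This is a sketch under that assumption, not how MitchJourn-E itself runs generation; the Euler-ancestral scheduler corresponds to the “k_euler_a” sampler listed above.

```python
import torch
from diffusers import StableDiffusionPipeline, EulerAncestralDiscreteScheduler

pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")
pipe.scheduler = EulerAncestralDiscreteScheduler.from_config(pipe.scheduler.config)

image = pipe(
    prompt="Majestic mountains tower over a sprawling, verdant valley...",  # Output 1, truncated
    negative_prompt="lowres, text, error, cropped, worst quality, low quality",  # truncated; see full list above
    num_inference_steps=50,
    guidance_scale=7.5,
    width=640,
    height=384,
    generator=torch.Generator("cuda").manual_seed(902193268),
).images[0]

image.save("landscape.png")
```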

Output Images

We can use these GPT-3 outputs and plug them into MitchJourn-E to create some beautiful images:

Output 1:

Majestic mountains tower over a sprawling, verdant valley. A crystal-clear river winds its way through the landscape, cascading over rocks and boulders as it flows towards a distant, azure sea. Wildflowers dot the rolling hills, and a single, graceful tree stands tall on a grassy knoll, its leaves rustling softly in the gentle breeze. The sky is a deep, rich blue, with fluffy white clouds drifting lazily overhead. The whole scene is bathed in the warm, golden light of the setting sun, casting long, surreal shadows across the landscape. It is a truly breathtaking and awe-inspiring sight.

Output 2:

Mountains, cliffs, sky, clouds, sun, trees, grass, river, rocks, waterfalls, wildlife, flowers, colors, beauty, serenity, majesty, vastness, depth, perspective, realism, skill, talent, artistry, wonder, amazement, awe, admiration.

Output 3:

Vibrant, colorful sky stretches above rolling hills and tranquil lake. Majestic mountains tower in the distance. Lush, verdant forests and fields dotted with wildflowers. Sun shining, birds singing. Tranquil scene captured with stunning detail and precision.

Output 4:

Vibrant hues, sweeping brushstrokes, and majestic peaks form an awe-inspiring abstract landscape painting. Lush forests, sparkling waters, and towering mountains are captured in a mesmerizing display of color and texture. A sense of majesty and wonder permeates the artwork, drawing the viewer into a world of beauty and inspiration.

Create your own beautiful art

MitchJourn-E is a free and open-source tool available to download on GitHub. Paste your GPT-3 prompts directly into the interface and start generating the most AI images you’ve ever seen!

Download now from GitHub: https://github.com/MitchOkabe/MitchJourn-e

What is Prompt Engineering?

Simply put, prompt engineering is the process of carefully crafting and tuning prompts for AI generation. A prompt is a piece of input that an AI system uses to generate output, such as a text, image, or audio clip. By carefully designing and fine-tuning these prompts, we can help AI systems to generate more accurate and engaging outputs.

Now, you might be wondering why prompt engineering is so important. After all, don’t AI systems generate whatever you type into them? While it’s true that AI systems can generate stunning outputs by simply typing whatever you wish into an input prompt, the results can often be unstructured and unpredictable. By providing carefully designed prompts, we can guide the AI system and help it to generate more coherent and useful outputs.

For example, imagine that you’re building an AI system that’s capable of generating pictures. Without prompt engineering, how would it generate a scene of “A statue on the side of a cliff in Japan”? It may choose to generate a beautiful statue, put the cliff in the background to the side, and set the scene in Japan. Or it may have learned that pictures of Japan near cliff sides are usually accompanied by Japanese architecture, so a traditional Japanese building becomes the focal point instead of the statue. By providing carefully designed prompts, in which different concepts are weighted more or less heavily based on how the model was trained, the AI can create incredible and well-imagined art.

An image without prompt engineering.
incredibly detailed (A statue on the side of a cliff in Japan)1  (cinematic lighting hyperdetailed hd 4k 8k sharp focus highly detailed concept art splash art by greg rutkowski):1
(A statue on the side of a cliff in Japan)1
An image with prompt engineering.
incredibly detailed (A statue)1.3 (on the side of a cliff )1 (in Japan)0.7  (cinematic lighting hyperdetailed hd 4k 8k sharp focus highly detailed concept art splash art by greg rutkowski):1
(A statue)1.3 (on the side of a cliff )1 (in Japan)0.7
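To make the weighting idea concrete, here is a tiny illustrative parser that splits a weighted prompt like the one above into (text, weight) pairs. It only mimics the spirit of InvokeAI-style attention syntax and is not the app’s actual implementation.

```python
import re

def parse_weighted_prompt(prompt: str):
    """Split "(text)weight" segments into (text, weight) pairs."""
    return [
        (text.strip(), float(weight))
        for text, weight in re.findall(r"\(([^)]*)\)([\d.]+)", prompt)
    ]

print(parse_weighted_prompt("(A statue)1.3 (on the side of a cliff )1 (in Japan)0.7"))
# [('A statue', 1.3), ('on the side of a cliff', 1.0), ('in Japan', 0.7)]
```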

Prompt engineering is an essential aspect of AI generation. By carefully designing and fine-tuning prompts, we can help AI systems generate more accurate and engaging outputs. Without prompt engineering, AI systems will generate results based on how the data was trained and will create biases in the resulting work. Prompt engineering allows us to guide the AI system and create more coherent, useful and artful outputs.

Start Creating, like a prompt engineer!

MitchJourn-E is a tool that incorporates Prompt Engineering as a core concept when creating images. You can define weights for each part of your prompt and add helper words and negative prompts automatically. MitchJourn-E is free and open source on GitHub: MitchOkabe/MitchJourn-e: Windows desktop application for Stable Diffusion (github.com)

AI Generated Art: Flume – “Never Be Like You”

Flume is one of my favorite artists and has created some amazing music, but what if some of that music was converted into visual art? We already have music videos, but Stable Diffusion opens the door for a new visual interpretation using text-to-image generation. What would happen if we fed the lyrics from our favourite musical artists into a text-to-image model to create visual art? This art can represent the emotion, feelings, and themes of the song lyrics. It could be a simple design or something more complex. It could be a great way to explore the emotions and themes of a song, and a great way to bring more visual art to the music industry, creating a new way of experiencing music. A lyric is worth a thousand prompts.

Flume – Never Be Like You

What I would do to take away
This fear of being loved, allegiance to the pain

Now I’m fucked up and I’m missing you

He’ll never be like you

I would give anything to change

This fickle-minded heart that loves fake shiny things

Now I’m fucked up and I’m missing you

He’ll never be like you

I’m only human, can’t you see?
I made, I made a mistake

Please just look me in my face
tell me everything’s okay

How do I make you wanna stay?
Hate sleeping on my own, missing the way you taste


Stop looking at me with those eyes
Like I could disappear and you wouldn’t care why


I’m falling on my knees
Forgive me, I’m a fucking fool


I’m begging, darling, please
Absolve me of my sins, won’t you?

About the Song

Released: 16 January 2016
Recorded: 2015
Genre: Future bass, alternative R&B
Length: 3:54
Label: Future Classic, Mom + Pop
Songwriter(s): Harley Streten, Alessia De Gasperis-Brigante, Geoffrey Patrick Earley
Producer(s): Flume, kai, Earley

Learn more

These images were generated using MitchJourn-E, a text-to-image application for Windows developed by Hous3. It’s open source and free to download, so create your own art here!

✌️

Image Generator: MitchJourn-E

Introducing MitchJourn-E! The text-to-image generator app for Windows. This app aims to enable intuitive, fast, and exciting ways to generate new images, using the power of InvokeAI and Stable Diffusion on Windows desktop.

What is MitchJourn-E? The name is a play on words, referencing the popular image generation services “MidJourney” and “Dall-E”. This app provides an easy-to-use desktop interface for the popular Stable Diffusion fork called “InvokeAI”. MitchJourn-E is an open-source Windows desktop application that allows you to type in a description of an image and the app will create that image using the Stable Diffusion model.

A screenshot of the MitchJourn-E application showing an image generated from the prompt "a beautiful scenic landscape of a Hawaiian sunset"
A screenshot of the MitchJourn-E application

Download for Free

MitchJourn-E is an open-source free download available from GitHub. Click the green “Code” button and choose to clone or download as a zip. Within the downloaded folder, run MitchJourn-e.exe.

Generated Image Examples

Now I'm fucked up and I'm missing you

Features

Text-to-Image: enter text prompts and generate an image

Image-to-Image: reference an existing image to create images with similar composition, style, and size

Create variations: reference a previously generated or local image and create similar but different variations of it

Negative prompting: enter prompts that are banned from generation, usually generating inverse results. Combine with regular and image prompting to fine-tune results

Prompt blending: define two separate weighted prompts to generate an image of intersecting concepts

Prompt Helper: a customizable menu to quickly add commonly used phrases, artist names and styles to speed up workflow

Continuous prompting: Images can be generated one after another, while changing the prompt and the parameters of the prompt to fine-tune the image

Sequential prompting: Images can be generated based on the previous image, allowing prior results to shape future results

Upscaling: generate images quickly at low resolution, then upscale them after, for an overall fast way to generate full sized images

High Resolution generation: Generate images at resolutions higher than usually possible, using a 2-phase approach of generating a smaller image, upscaling it, then using the upscaled image as a reference, generate the high-resolution image

Aspect ratio presets: define presets (or use the defaults) to create images in common ratios like 16:9, 2:3, 1:1

Creativity slider: quickly modify multiple parameters with one slider movement that will result in more or less “creative” results, useful for moving between photo-realism and fantasy

Prompt noise: add Perlin noise to the prompt to subtly (or drastically) modify the starting point of the image generation

Latent limiter: limit the amount of data available to the generation, which can generate more photo-realistic/true to source images

Seamless tiling: generate images that can seamlessly tile together, great for clothing patterns and game assets

Generate progress images: save every step the generation makes before creating the end result (for you noise lovers!)

Output sorting (experimental feature): automatically sort generated images into folders based on the prompt used

Prompt folder (experimental feature): reference a folder of images and generate new images referencing each image in that folder sequentially. Good for creating videos

Out-painting (experimental feature): generate additional image area around existing images, based on the prompt

Metadata Extractor (experimental feature): reference an image and view all the associated metadata. Typically, images generated with Stable Diffusion have the prompt and other parameter info saved within the metadata

Installation

  1. Download and install InvokeAI to the root of the C drive. Follow this separate setup guide (easy) to install Anaconda, Python, and all the other prerequisites automatically: https://invoke-ai.github.io/InvokeAI/installation/INSTALL_INVOKE/#installation
    • When complete, you should have a folder structure like C:\InvokeAI-Main\
  2. Now you need to download a model file. Here is Stable Diffusion 1.5 (v1-5-pruned-emaonly.ckpt) (recommended for best general results). You will need to create a hugging face account before it shows the download link: https://huggingface.co/runwayml/stable-diffusion-v1-5
  3. Rename the .ckpt file to “model.ckpt” and place it here: C:\InvokeAI-main\models\ldm\
  4. Now relaunch MitchJourn-E, and you can start generating images!

Where credit is due

Thanks to Emad Mostaque and his team for creating the Stable Diffusion platform and model which this application uses as its backbone. Additionally, thanks to Lincoln Stein and contributors for creating the Stable Diffusion fork which this application leans on for the majority of its image processing features.

This application is a user interface that leverages the hard work of many creators to create beautiful art. In keeping with its predecessors, this project is open source and can be remixed and added to. Feel free to fork and collaborate using GitHub.

Thanks for reading ✌

Unheard Music: God Rays

Picture of god rays streaming into a cave, lighting mossy rocks. Someone poses like a starfish at the top of the rock.

We all have unfinished projects in our archives; this is one of mine. God Rays belongs to an ever-increasing list of tracks that may never fully unfurl into a true song. When listening to the back catalogue, ideas of what comes next in the evolution of a track become more clear. But when listening to an idea over and over, sections become solidified and unchangeable, as the mind gains an expectation of repetition. That unwillingness to change will often halt progress towards great potential, dooming a bad song to only ever sound like a polished bad song.

God Rays (unfinished)

What makes a God Ray?

Listening back at this track, created in the summer of 2019, I start to recall its inspirations and aspirations. This track was inspired by Knife Party’s synth manipulation and hard-hitting kicks. The lead synth of God Rays has a downward lowpass on the tail end of each note, which is a common technique used by Knife Party. In their tracks, though, many well-crafted synths each play one or two notes before another synth comes in to continue the melody. This variation is sorely lacking in God Rays, leaving the main melody sounding repetitive and simplistic after a few loops.

The fantasy of God Rays is of a track that inspires dancing and energy. As such, the beat comes in hard after a brief intro. Shortly after, the full array of synths and the synth pattern kicks in, with rapidly moving pitches and counter synths. Later on a mostly empty breakdown section comes in. This section would be filled out with an idea that’s yet to exist, but the structure is there as a placeholder, leading up to the next synth chorus.

The Synths

God Rays has 4 synths used throughout the track:

  1. A vocal chorus emulation, making an “aw” sound.
  2. The lead synth, with its dynamic low-pass filter.
  3. A counter-synth to the notes played in the lead synth.
  4. A supporting synth that reinforces the lead synth.

Synth 1 – Vocal Chorus

The vocal chorus synth plays throughout the song to add ambiance and simulate the pressure and loudness of the other elements of the track. When this synth plays on its own, it’s interesting and relatively loud. But when the kicks and lead synth come in, the volume of this synth ducks hard, yet can still be subtly heard between the notes and kicks. This technique is called sidechain compression, a technique used in most of my tracks.

Synth 2 – Lead Synth

The lead synth is big and distorted, with a built-in lowpass filter on each note. The lowpass cuts out all the high-end frequencies and lets only the frequencies at the bass end through. The filter sweeps, allowing all frequencies through when a note is held and closing the filter when it is released. This technique makes a buzzing and somewhat grating synth sound dynamic and interesting. Additionally, this synth has a 16 ms delay that pans left and right to make it sound wider and larger than it is. This sort of delay is typically reserved for a leading instrument or vocal performance, found in tracks like Post Malone’s “Circles”.
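The 16 ms left/right delay is essentially a Haas-style widening trick. Here’s a minimal sketch of the idea in Python, assuming `mono` is a mono numpy array at sample rate `sr`; it isn’t the exact effect used on the synth.

```python
import numpy as np

def haas_widen(mono, sr, delay_ms=16.0):
    """Make a mono source feel wide by delaying one channel by a few milliseconds."""
    delay = int(sr * delay_ms / 1000)
    left = mono
    right = np.concatenate([np.zeros(delay), mono])[: len(mono)]  # right channel lags
    return np.stack([left, right], axis=1)  # (samples, 2) stereo array
```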

Synth 3 – Counter Synth

The counter synth plays its notes when the lead synth is resting. This sort of lead and counter melody fills the space and keeps the energy of the track moving.

Synth 4 – Supporting Synth

The supporting synth adds extra bass and thickness to the lead synth. This synth doesn’t have any of the dynamic low-pass filtering or delay, which adds a lot of complexity when the two synths play at the same time. When the two synths are in the middle of a note, they don’t sound like two separate sounds. But at the end of a note, when the lead synth begins its low-pass filter while the supporting synth finishes the note normally, a stereo separation emerges. This separation may make you feel like the sound starts in front of you, then ends behind you.

Synths 2 and 4 played at the same time

What’s next for God Rays?

Most likely, this track will forever live in the vault of unfinished music. But perhaps the lessons used in this track will inspire future music to come. Keep an ear out for synth layering, the technique is used all over the electronic music world.

Thanks for reading ✌

Vocal Effects of Post Malone – Circles

The vocals are the king of a mix and must be treated as such. Processing the vocals in your track is one of the most important steps to creating music that bumps. So when I’m making decisions about how to process the vocal effects in my tracks, it’s useful to see how great producers handle the task.

Today we are looking at the production techniques of Post Malone’s “Circles”. And specifically going into detail on how the vocal effects blend Post Malone’s voice into the trippy vibe of the track, but also allowing the performance to soar above the mix.

Meet the Producers

For Circles, the primary producers were Louis Bell and Frank Dukes, alongside Posty himself. Both of the primary producers are heavyweights in the hip-hop and electronic production industry.

In an interview for Universal Audio, Louis mentions a few of the plugins he uses on Post Malone’s voice, and why he uses them. We can take these plugins and see how they work with Post Malone’s voice to make it the king of the mix.

The Effect Chain

Compression

The first effect typically added to the vocal chain is the compressor, which allows the vocal to become consistent in its dynamic range. In Circles, Post Malone gets close to the mic and sings softly at times, while at others he’s further back, singing his heart out. By compressing your vocal, the soft parts don’t get drowned out in the mix, and the loud parts don’t cover everything up.

Louis Bell uses the Universal Audio 1176 Limiting Amplifier, the classic ultra-fast limiting compressor from the ’60s, used on thousands of pop tracks since its release. For modern producers, a software emulation from Universal Audio or Waves can replicate the fast attack and all-buttons-in mode of the original. The plugin versions may not give the exact same character as the hardware unit, but neither does one hardware unit compared to the next.

Close up picture of the hardware 1176 Limiting Amplifier from Universal Audio

EQ

EQ will usually come before or after the compressor. Either way, the purpose is the same: make the character of the vocal shine through, without too much low-end nasality or high-end hiss. Typically there is a big cut to the very lowest frequencies, which usually end up being background hum or mic-stand noise. Then another cut to the low-frequency thump of air hitting the mic when singing words like “boat” or “pool”. There may also be spots throughout the frequency range that resonate with the room too much and want to be cut as well.

In Circles, Post Malone’s voice has a high-end boost that makes it sound more present and airy. Louis Bell uses another classic piece of hardware in the vocal chain, the Neve 1073 preamp. This piece of gear comes just after the microphone and includes only a few knobs of EQ, along with some old-hardware warmth. A software emulation is also available.

Close up picture of the 1073 hardware preamp

Space

When Post Malone sings, a lot of space is added to his voice, creating an ethereal and wonderful presence. Delay, reverb and chorusing in the vocals are stacked with delay and reverb in the instruments to truly send the mix into outer space.

Delay

A delay technique often heard on old rock vocals is the “slap-back” delay, a prominent, quick echo that makes the vocal sound subtly doubled. This technique is also used on Post Malone’s vocals, adding a delay in the range of 20-60 ms.
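A slap-back is just the dry vocal plus one short, audible echo. Here’s a minimal sketch, assuming `vocal` is a mono numpy array at sample rate `sr`; the delay time and mix level are illustrative, not the values used on the record.

```python
import numpy as np

def slapback(vocal, sr, delay_ms=40.0, mix=0.35):
    """Dry vocal plus a single short echo in the 20-60 ms range."""
    delay = int(sr * delay_ms / 1000)
    echo = np.concatenate([np.zeros(delay), vocal])[: len(vocal)]
    return vocal + mix * echo
```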

Chorus

Chorusing is also used in the vocal chain. Louis Bell used the Brigade Chorus Pedal from Universal Audio, usually reserved for guitars and psychedelic tunes. This is mostly used in the chorus, though it seems like a lesser version is used during the pre-choruses. The effect widens the stereo field so the vocals arrive from the left and right channels in slightly different ways.

Render of the Brigade Chorus Pedal software plugin with some cables in the background

Reverb

And last, and probably most, is the reverb on Post Malone’s voice. Reverb is a basic requirement for pop vocals, to make them sound like they weren’t recorded in some of the quietest rooms in the world. In this case, the reverb is strong and can bring images of Post Malone singing away into the sunset as he fades out of view.

The verses show Post Malone’s voice covered in reverb to fill in the empty space where he’s not singing. In the chorus, the reverb is turned up a notch or two. After the first chorus, you can hear Post’s voice completely dry for two words, “Let go”, before the effects start coming back in.

Layering

Vocal layering is also a very important technique for strong choruses, like the one in Circles. In the verse, Post’s voice sits in a single track with effects, whereas in the chorus you can hear 4 or more different (but extremely similar) takes stacked on top of each other.

Get Producing

Take the lessons of Circles and apply them to your own vocal effect chains. Personally, I may use some more chorusing in my mixes, but maybe a new compressor plugin is your take-away. Let me know in the comments if you are using any of these techniques in your tracks. Or learn more audio engineering techniques here.

Thanks for reading ✌

How to Create a Remix – “Work” by Rihanna

Creating a remix is about putting your own spin on an already established idea. When listening to a song on Spotify, I often hear my own melody or expectation of the next verse before the actual verse starts playing.

These differences in expectation mean the original producer is either creating very original tracks, or failing to meet your standards for that type of music.

Rihanna – Work (Hous3 Remix) – 65kbps
Picture of an art piece that looks like a collage of different patterns and iconography. Notably a single eye is at the center.
A remix of another medium

Rihanna – Work (Feat. Drake)

“Work” always intrigued me as a successful, catchy and fun song with some very interesting vocal techniques. The chorus has Rihanna singing the word “work” many times over, but not so many times that we forget the meaning of the word.

But maybe Rihanna did lose the meaning. In the later portions of the song, “work” loses its enunciation and sounds more like a “whur”. This semantic satiation dislocates the idea from the music, and the catchy rhythms take over, making the song very easy to dance to.

Screenshot of the remix with the different layers of audio in a timeline.
Chop it up

The Original Beat

The original production on Rihanna’s “Work” is a laidback, summery vibe with an unrelenting groove. During the chorus, slightly larger bass kicks hit, but nothing too crazy. The tune is mostly carried by the bass guitar notes that pluck throughout the track, accompanied by some sub bass.

Sub bass, Guitar bass, Melody (mostly bass)

The Remix

With the original beat being on the more subdued side of electronic music, I felt a harder hitting house/dubstep beat would be a nice way to contrast the original. The Hous3 remix includes an electric buzzing bass line, hard hitting kicks and a dubstep-style chorus with explosive wobble.

Structure

In a lot of cases, a remix will take on the structure of the new genre the remix is creating. For instance, the structure of a dubstep remix will usually include a long build-up sequence that drops into a lovely wobble-fest. But in this case, I chose not to change the structure of the original song at all.

When I was first looking at the pieces of the original song, I started to dissect the timings and flow of the track and found some interesting deviations from typical song structure. These deviations may have been due to a compromise of trying to make the track shorter, or a way of making the track feel faster in getting to the good parts.

Breakdown of the different song elements: almost half (44%) of the song is chorus and pre-chorus, while 36% is verse.

Typically when writing a song, you want to subdivide everything into fours: 4 beats to a bar and 4 bars to a phrase. In this song, the choruses are 6 bars long, and an extra 2 bars are added on top of the typical 8-bar verse length.

Using this same song structure in the remix was a fun limitation to work around that helped produce interesting results. Although I wanted dubstep elements in the remix, I am not a big fan of long buildup sections. With the song structure as a constraint, I kept the buildup moments to just a couple of bars before each chorus.

A Remix is like an Onion

Layering the new elements of the song with the existing elements of the original track can be a great challenge when remixing. The only elements I wanted to keep were the vocals. Rihanna sings beautifully, and I didn’t want to remove any of that beauty by having my new instruments interfere with her vocal lines.

In early iterations of the remix, the dubstep bass wobbles were harsh and would introduce their own rhythm to the already rhythmic vocal lines. I needed to use wobble elements that reinforced the existing groove of the vocals.

Thinking back to how the repeating “work” line was being reduced to a “whur”, I wanted to use a dubstep wobble that also sounded like a “whur”. In my testing, I found a pre-recorded wobble that fit the criteria and layered in pretty well.

The dubstep wobble used in the remix
How the wobble and the vocals come together

Hard Bass

Next up on the remix agenda is the hard bass. This was an element of the song that I knew I wanted from the start. It needed to be a total contrast from the original melodic bass, and something that felt powerful and electric.

Hard bass from the intro
Hard bass mixed with the vocals, from the first chorus

To make the bass sound powerful, I needed a lot of low-end punch. And for that electric feel, I wanted to add some high-end frequencies as well. Most of the high-end comes from distortion of the bass signal. Importantly, if this bass was going to mesh in with Rihanna’s vocals, it needed to leave some mid to higher frequency ranges a little softer. The resulting bass sound is crisp, with room for vocals to come right through.

The other element of the mix that I wanted to come through was the kick drum. With the hard bass competing for the same low-end space as the kick, I used side-chain compression to automatically reduce the bass whenever the kick came in.

I wrote a blog post about Sidechain Compression here.

Wrap it up

With the vocals, hard bass, dubstep wobble and punchy kick drum in the mix, the remix was simple and effective at conveying the idea of a contrasting-style remix. The rest of production was adding the bells and whistles of a dubstep/electronic song, like the drums and effects.

Because the vocal stems were already well processed, the only techniques I used on the vocals were some compression summing, minor EQ and some reverb. These processing steps were to further mesh the vocals in with the rest of the mix.

You can listen to the remix at the top of this page, although the sound quality is extremely compressed. But until I find a place to host high quality remixes, you can find a slightly better quality version by watching/listening to the music video for the remix below (be sure to play at 1080p quality).

Thanks for reading 🙂

What Is Sidechain Compression?

Before we talk about the sidechain, it’s important to talk about compression. Compression is used heavily throughout the Dreaming City album. Taking inspiration from the electronic music group Justice, I use compression as the glue of the album.

Limiting Compressors

Songs throughout the Dreaming City contain many samples, ambient noises and trippy reverberations. These sounds aren’t necessarily cohesive when played together, but when pushed through the same compressors, the sounds begin to mesh. Two pieces of music or sound may mesh when their notes are in the same key, but also when they’re affected by processing in the same way.

With a limiting compressor, sound begins compressing once the volume reaches a certain level, called the “Ceiling”. Everything past that level is heavily compressed and is said to be “Limited”. The goal is to compress the sound so it never goes louder than the ceiling.

Compressor controller showing the compressor Ceiling set to -0.1dB and a Threshold of -9.7dB. True Peak limiting is enabled.
The limiting compressor used in the song Length of a Moment (Feat. Steven Clapham)

The other parameter is the “Threshold”, which enables gain compensation to boost sound that is quieter than the specified level. This shifts all sounds into a loudness zone, where nothing goes quieter than the threshold and nothing louder than the ceiling.
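As a very rough model of that Ceiling/Threshold behaviour, the sketch below boosts the whole signal by the threshold amount and then clamps peaks at the ceiling. Real limiters use look-ahead and smoothed gain reduction; this only illustrates the idea, with values borrowed from the screenshot above.

```python
import numpy as np

def limit(signal, threshold_db=-9.7, ceiling_db=-0.1):
    """Push quiet material up toward the ceiling, then stop peaks at the ceiling."""
    makeup = 10 ** (-threshold_db / 20)         # gain compensation from the threshold
    ceiling = 10 ** (ceiling_db / 20)
    boosted = signal * makeup
    return np.clip(boosted, -ceiling, ceiling)  # nothing goes louder than the ceiling
```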

Sidechain Compression in the Dreaming City

Overall, sounds are all filtered through the master limiting compressor, but individual elements are first affected by their own compression. In this album, all bass heavy synths have sidechain compression applied, to let the drums push through the mix.

Sidechain compression takes one sound as an input and one sound as the effector. The input sound decides how much compression will be applied to the affected sound. In my mix, the input is almost always the kick drum. When the kick drum hits, the compression starts, and the affected bass synth is dampened, allowing the kick to sound large and full of bass.
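Here’s a simplified sketch of that ducking behaviour in Python, assuming `kick` and `bass` are mono numpy arrays at the same sample rate `sr`. It follows the kick’s envelope and turns the bass down in proportion, which matches the constant gain-ducking described further below rather than a full threshold/ratio compressor.

```python
import numpy as np

def envelope(signal, sr, attack_ms=5.0, release_ms=120.0):
    """One-pole envelope follower on the sidechain input (the kick)."""
    attack = np.exp(-1.0 / (sr * attack_ms / 1000))
    release = np.exp(-1.0 / (sr * release_ms / 1000))
    env = np.zeros_like(signal)
    level = 0.0
    for i, x in enumerate(np.abs(signal)):
        coeff = attack if x > level else release
        level = coeff * level + (1 - coeff) * x
        env[i] = level
    return env

def sidechain_duck(bass, kick, sr, amount=0.9):
    """Lower the bass whenever the kick hits, by up to `amount` of its level."""
    env = envelope(kick, sr)
    env = env / (env.max() + 1e-9)   # normalise the kick envelope to 0..1
    return bass * (1.0 - amount * env)
```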

Sidechain compression in Length of a Moment (Feat. Steven Clapham)

In the preview above, the top module is listening to the kick drum, displaying its volume over time. Below is the compressor, affecting the bass synth. On the right side of the compressor, you can see the gain knob ducking in sync with the kick hitting. Listening to the bass, you can hear how the sound seems to pulse with the kick, where usually the bass line would sound like a constant tone.

This form of compression applied to the bass synth is constant across its audio level, unlike most compression. This means no matter how loud the bass is, its signal will be lowered by the kick drum. If the bass were more dynamic, with quieter elements alongside louder elements that worked with the kick, then a more typical compression would be used. To keep the dynamics of a softer sound but dampen the louder sounds, the sidechain would affect the threshold knob instead of the gain knob. Additionally, the ratio would need to be set to something like 10:1 so the sound above the threshold is reduced to a tenth.

What is Paulstretch?

Paulstretch is a program for stretching audio by extreme amounts, to make a 3 minute song 1 hour long (or more or less).

Sample of Paulstretch

In the above example, I recorded myself saying “Hello” and you hear it played back 3 times with different stretch factors. The first hello is the original sample, a 1x stretch factor. The second hello is 2x and the third is 50x.

Paul's Extreme Sound Stretch Interface. Looks like it's from the 1990's.
Paul’s Extreme Sound Stretch Interface

Using this stretching technique, musical notes can be held for an extremely long time without producing unpleasant audio glitches. That said, when stretching music, results range from beautiful to creepy and even painful. For instance, stretching an electronic song with a big, bass-heavy drop can yield some interesting results.

Flume Stretched

In the above example, the song “Say It feat. Tove Lo” by Flume, is stretched 60x at the start of the first chorus. The first thing I notice is the vocals, with a lot of high pitched static. The vocals are singing “say it” while a clap sample is layered with the start of a big synth chord, but the stretching completely changes the recognizability of these elements. With this technique, unique and evolving ambient synth sounds become quite easy to create.
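For the curious, here’s a heavily simplified sketch of the idea behind Paulstretch: crawl through the input in overlapping windows, keep each window’s magnitude spectrum, randomise the phases, and overlap-add the results at a much slower rate. It’s a toy version, not Paul Nasca’s actual implementation; `audio` is assumed to be a mono numpy array.

```python
import numpy as np

def paulstretch_lite(audio, stretch=50.0, window_size=4096):
    """Toy Paulstretch: spectral smearing via phase randomisation and slow overlap-add."""
    window = np.hanning(window_size)
    read_hop = (window_size / 2) / stretch       # crawl slowly through the input
    write_hop = window_size // 2                 # write at normal half-window hops
    out = np.zeros(int(len(audio) * stretch) + window_size)
    read_pos, write_pos = 0.0, 0
    while int(read_pos) + window_size < len(audio) and write_pos + window_size < len(out):
        frame = audio[int(read_pos): int(read_pos) + window_size] * window
        magnitude = np.abs(np.fft.rfft(frame))
        phases = np.random.uniform(0, 2 * np.pi, len(magnitude))
        smeared = np.fft.irfft(magnitude * np.exp(1j * phases))
        out[write_pos: write_pos + window_size] += smeared * window
        read_pos += read_hop
        write_pos += write_hop
    return out / (np.max(np.abs(out)) + 1e-9)    # normalise to avoid clipping
```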

Using Paulstretch in The Dreaming City

The last few Hous3 albums include samples generated with Paulstretch, and The Dreaming City was no exception. The easiest place to hear an example is in the song “Play Me”, as the whole song was created around this technique.

Hous3 – “Play Me”

The background “synth” that plays throughout the track is all Paulstretched audio. Layers of reversed instruments and string sections keep the sound from becoming too much of a blur. When parts of the song are stretched with vocal samples, the moving dynamics of the final sample feel full of depth and complexity, when in fact it is a simple stretching of a short soundbite.

Get stretching

Thanks for reading. If you want to hear more Paulstretched music, check out the song “Future Night Club” on Spotify. Or see if you can spot some less noticeable samples throughout The Dreaming City album. And if you have any music featuring Paulstretch, please let me know!
-Hous3