Paulstretch is a program for stretching audio by extreme amounts, turning a 3-minute song into an hour-long piece (or more, or less).
In the above example, I recorded myself saying "Hello" and you hear it played back three times with different stretch factors. The first hello is the original sample, a 1x stretch factor. The second is 2x and the third is 50x.
Using this stretching technique, musical notes can be held for an extremely long time without producing unpleasant audio glitches. That said, when stretching music, results range from beautiful to creepy and even painful. For instance, stretching an electronic song with a big, bass-heavy drop can yield some interesting results.
In the above example, the song "Say It feat. Tove Lo" by Flume is stretched 60x at the start of the first chorus. The first thing I notice is the vocals, with a lot of high-pitched static. The vocals are singing "say it" while a clap sample is layered with the start of a big synth chord, but the stretching makes these elements almost unrecognizable. With this technique, unique and evolving ambient synth sounds become quite easy to create.
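For the curious, the core idea behind Paulstretch-style stretching can be sketched in a few lines: slide a window over the input very slowly, take the FFT of each windowed frame, keep the magnitudes but randomize the phases, then overlap-add the results. The phase scrambling is what smears transients into smooth drones instead of producing the metallic glitches of naive stretching. This is a minimal illustrative sketch, not Paulstretch's actual source; the window size and window function are assumptions, not its exact defaults.

```python
import numpy as np

def frame_positions(n_samples, window_size, stretch):
    """Input hop shrinks by the stretch factor, so each input region is reused longer."""
    hop_out = window_size // 2        # output hop: 50% overlap
    hop_in = hop_out / stretch        # input advances `stretch` times slower
    pos, out = 0.0, []
    while pos + window_size < n_samples:
        out.append(int(pos))
        pos += hop_in
    return out, hop_out

def stretch_audio(samples, window_size=4096, stretch=8.0, seed=0):
    """Paulstretch-style time stretch: window, FFT, randomize phases, overlap-add."""
    rng = np.random.default_rng(seed)
    window = np.hanning(window_size)
    positions, hop_out = frame_positions(len(samples), window_size, stretch)
    out = np.zeros(hop_out * len(positions) + window_size)
    for i, p in enumerate(positions):
        frame = samples[p:p + window_size] * window
        spectrum = np.fft.rfft(frame)
        # Keep magnitudes, scramble phases: transients smear into a drone
        phases = rng.uniform(0.0, 2.0 * np.pi, len(spectrum))
        frame_out = np.fft.irfft(np.abs(spectrum) * np.exp(1j * phases))
        out[i * hop_out : i * hop_out + window_size] += frame_out * window
    return out
```

Because the output hop stays fixed while the input hop shrinks, a stretch factor of 50 simply means the window crawls across the source 50 times more slowly.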
Using Paulstretch in The Dreaming City
The last few Hous3 albums include samples generated with Paulstretch, and The Dreaming City was no exception. The easiest example to hear is in the song "Play Me", as the whole song was created around this technique.
The background "synth" that plays throughout the track is all Paulstretched audio. Layers of reversed instruments and a string section enhance the sound so it isn't too much of a blur. When parts of the song are stretched with vocal samples, the moving dynamics of the final sample feel full of depth and complexity, when in fact it is a simple stretching of a short soundbite.
Thanks for reading. If you want to hear more Paulstretched music, check out the song “Future Night Club” on Spotify. Or see if you can spot some less noticeable samples throughout The Dreaming City album. And if you have any music featuring Paulstretch, please let me know! -Hous3
With the release of The Dreaming City album, 6 music videos were released alongside it, one to accompany each track. The philosophy of the album is to let listeners immerse themselves in the soundscapes of each song, and music videos would help hold attention and facilitate that immersion.
The ideas behind the videos
Although a live-action or storyline-based music video is more typical for mainstream artists, producing 6 of them before the album's release would have required a lot of planning and production time. With this deadline in mind, audio visualizations became a more time-effective way of achieving the goal.
The above image is an example of an audio spectrum. This graph shows the audio frequencies and volumes of a short moment from the song. The columns show the sound frequencies from 50 Hz up to 20 kHz, and their height shows how loudly each frequency is playing. So in the 50-400 Hz area, bass notes are playing, pretty loudly at this point, while the vocals occupy roughly 1.2 kHz to 5.6 kHz. This graph isn't as exciting or visually pleasing as a "music video" would demand, so some graphical creativity would come into play.
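A spectrum graph like the one described can be produced from a short slice of audio with an FFT: take the magnitudes, then group them into display bands. Below is a hedged sketch of that reduction; the band count, log spacing, and max-per-band choice are my own illustrative assumptions, not the exact method behind the image above.

```python
import numpy as np

def spectrum_columns(samples, sample_rate=44100, n_bands=32,
                     f_lo=50.0, f_hi=20000.0):
    """Reduce one moment of audio to the bar heights of a spectrum graph.

    Bands are spaced logarithmically from f_lo to f_hi, so the bass
    (50-400 Hz) and vocal (roughly 1-6 kHz) regions each get visible width.
    """
    window = samples * np.hanning(len(samples))
    mags = np.abs(np.fft.rfft(window))
    freqs = np.fft.rfftfreq(len(samples), 1.0 / sample_rate)
    edges = np.geomspace(f_lo, f_hi, n_bands + 1)
    heights = []
    for lo, hi in zip(edges[:-1], edges[1:]):
        band = mags[(freqs >= lo) & (freqs < hi)]
        heights.append(float(band.max()) if band.size else 0.0)
    return heights
```

Feed it a few thousand samples per video frame and the returned list is exactly the column heights of one frame of the graph.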
More than an Excel graph
The first two music videos, for "Lake of Dreams" and "I Can't Make the Dreams Go Away", were created in Adobe After Effects. The idea was to translate the audio spectrum into a pretty moving graphic over a nice backdrop. This effect is seen across many electronic music publishers as a mass-produced, quick-and-dirty style of music video that can be generated for $49/month at various rendering-service sites. Although the comparison was there, I still wanted to put my attempt forward and make my own, better version of these visualization videos.
The result was fun to realize, as this took me down a path of learning how to create the visualization, as well as digging up old vacation footage of a trip down the California coast. The spectrum has transformed from a moving Excel graph into a circular orb of pulsing rhythm over a beautiful sunset. Starting at 12 noon, the spectrum moves clockwise from high frequencies (20 kHz) back around to low frequencies (20 Hz) at midnight.
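The clock-face mapping described above amounts to wrapping the spectrum's band index around a circle, clockwise from the top, with each bar's loudness pushing it outward from the base ring. A small sketch of that geometry, with a hypothetical `band_to_xy` helper (the actual After Effects expression setup is not shown in the post):

```python
import math

def band_to_xy(band_index, n_bands, radius, height, cx=0.0, cy=0.0):
    """Place one spectrum band on the orb.

    Band 0 (highest frequency, 20 kHz) sits at 12 o'clock; the last band
    (lowest frequency, 20 Hz) arrives back at the top after a full
    clockwise sweep. Each bar extends outward from the base circle by
    its `height` (the band's loudness).
    """
    # Clockwise from the top: start at pi/2 and subtract the band's
    # fraction of a full turn (y axis points up, as in math convention).
    angle = math.pi / 2 - 2.0 * math.pi * band_index / n_bands
    r = radius + height
    return cx + r * math.cos(angle), cy + r * math.sin(angle)
```

A quarter of the way through the bands lands at 3 o'clock, halfway lands at 6, and so on around the orb.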
From 2D to 3D
After the first two videos were complete, I felt the visualizations were underwhelming. Additionally, rendering these seemingly simple videos would take hours, due to an inefficient use of processor power. At this point I moved to a different style of visualization in 3D modeling, using a program called Cinema 4D. The style I was going for was again focused on immersion: everything in the video needed to be moving to the beat of the song so the soundscape could be easily perceived.
The most obvious way to convert the audio waves into movement was to create a virtual speaker that would realistically show what the sound would look like. Though a speaker driver moving back and forth is technically correct, to really show the movement and impact of the sound, I needed something to react to it. By laying the speaker on its back and dropping bouncy balls onto its surface, big spikes in audio level, like the drop of a song, would translate into an explosion of bouncy balls flying all over the place.
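Outside of Cinema 4D, the driving signal for an effect like this is just a short-term loudness envelope mapped to an emission count per video frame. The sketch below is a hypothetical stand-in for that setup (the function name, frame rate, and gain constants are all my own assumptions); squaring the RMS level keeps quiet passages nearly still while a drop produces the explosion.

```python
import numpy as np

def emission_rate(samples, sample_rate=44100, frame_rate=30,
                  base_rate=2.0, burst_gain=400.0):
    """Per-video-frame ball-emission counts driven by short-term loudness.

    One RMS measurement per video frame; squared so the response is
    nearly flat when quiet and explosive on loud transients.
    """
    hop = sample_rate // frame_rate
    rates = []
    for start in range(0, len(samples) - hop, hop):
        rms = np.sqrt(np.mean(samples[start:start + hop] ** 2))
        rates.append(int(base_rate + burst_gain * rms ** 2))
    return rates
```

Each returned value would feed a particle emitter's birth rate on the corresponding frame, so the balls visibly ride the dynamics of the track.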
Although I had created similar effects in music videos a long time ago, much of the technique needed to be relearned to realize my vision. I also found that many of the interactions and processes could be programmed into Cinema 4D, enabling a relatively fast workflow for creating interesting content. The only downside to this technique was the render times, typically around 8 hours but sometimes closer to 48.
How to render beautiful 3D video
Although I had previously been frustrated with Adobe After Effects' render times of a few hours, rendering something completely unique and beautiful in Cinema 4D seemed worth the wait. The main cause of the massive render times was ray-tracing.
To create a 3D render that could represent a scene from the real world, lighting needs to be physically accurate, or it will look like an animation. Every light in my remaining 4 music videos is accurate to 5 bounces, and the results, at high resolution, are beautiful. After building and rendering the "The Length of a Moment (feat. Steven Clapham)" music video, I created and rendered the remaining videos by riffing off the same techniques.
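To see why bounce depth dominates render time, consider a back-of-envelope cost model: if a renderer branches into several secondary samples at each bounce and shadow-tests every light at each hit, the ray count grows geometrically with the bounce limit. This toy function is purely illustrative (it is not how Cinema 4D's renderer accounts for work), but it captures why going from 2 to 5 bounces hurts so much.

```python
def shading_cost(n_lights, samples_per_bounce, max_bounces):
    """Rough ray count per camera ray in a branching ray tracer.

    Each bounce multiplies the live ray count by `samples_per_bounce`,
    and every surface hit also casts one shadow ray per light.
    """
    total = 0
    rays = 1
    for _ in range(max_bounces):
        rays *= samples_per_bounce
        total += rays * (1 + n_lights)  # bounce ray + shadow rays at each hit
    return total
```

With even 2 samples per bounce, the cost per added bounce doubles; add many lights and reflective surfaces and the 48-hour renders mentioned above stop being surprising.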
The biggest hurdle was the music video for "Unremembered", as the render took more than 2 days due to the relatively high rendering fidelity. Although I didn't even use medium-tier settings, rendering so many frames of video, with so many objects, tons of reflective surfaces, and so many light sources, made this a big undertaking for my computer. From then on, I optimized for faster render times, as the album deadline was approaching.
It’s show time
With all videos rendered and published to YouTube, all that’s left to do is watch. I’ve linked the playlist of videos from The Dreaming City below. Thanks for reading! -Hous3