I was lost on a lonely highway
Trying to find my place in the sun
And when I thought I’d found my destination
I found my journey had just begun
I wasn’t looking for adventure, oh no
I was just looking for a place to live my life
But I didn’t know which way was home anymore
I didn’t know which way was home
So I turned myself around
I did a U-turn on that highway
And I said to myself,
“Where are the mountains that I love?
Where’s the smell of rain in the desert?
And where are the people that I call my own?
Where are the people that I call my own?”
So I said to myself,
“I’m gonna find my way back home
I’m gonna find my way back home
I’m gonna find my way back home
I’m gonna find my way back home
Here I come!
“I’m gonna find those mountains that I love
I’m gonna find those people that I call my own
I’m gonna find my way back home”
About the Song
The guitar riff that this song is based on was literally lost and found. I recorded a sketch of it on a cassette tape and mailed it to my cousin before I left on my mission, and then forgot all about it. After I got home my cousin sent the old recording back to me, and I relearned how to play it. (Thanks, Tom!) Here is that old recording, if you would like to hear it:
I had a basic idea of what the song was about, and had the second verse mostly worked out years ago, but I made a big breakthrough on the lyrics in 2015 when I was moving back home to Utah after living in the Midwest for 11 years. The first verse came to me at a rest stop west of Indianapolis. The lyrics capture a lot of how I felt at the time, but they don’t quite express how much I felt that I was guided by God to move when and where I did.
About the Recording
This was the quickest recording of the album so far, taking a little over a month from start to finish. I had initially planned for more aggressive drums and an electric lead guitar, but opted for the lighter acoustic sound.
The recording was done in Ardour on Linux Mint, in a downstairs room of my house that I recently claimed as my studio. The drums were programmed using Hydrogen and a brush-kit sound bank. This song was my first attempt to use Ardour’s MIDI function, which took a bit of time to figure out, but I am pleased with the result. I used the “rock organ” sound from Christian Collins’ GeneralUser GS soundfont.
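Incidentally, if you want to hear a MIDI part through the same soundfont outside of Ardour, FluidSynth can render a MIDI file from the command line. This is just a sketch; the file names below are placeholders, and you will need to point it at wherever GeneralUser GS lives on your system:

```shell
# Render a MIDI file to a WAV using the GeneralUser GS soundfont.
# -n : don't open a MIDI input driver
# -i : don't start the interactive shell
# -F : fast-render to the given audio file instead of playing live
fluidsynth -ni GeneralUser_GS.sf2 organ_part.mid -F organ_part.wav -r 44100
```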
About the Album
Only two more songs to record for this album! Here is my goal: Finish it during 2019!
After my recent post about authoring guitar tablature on Linux, my son asked me for the tablature for a few other songs because he is taking a guitar class this semester in school. Here are the scores, all authored with MuseScore:
While working on “Something Wrong” I had a bit of a flashback about the circumstances that inspired me to write the song back in 1997-98, so I wrote the story and posted it on my Latter-day Doctor blog. Enjoy!
This is an American song written in 1868, with the beautiful text written by Phillips Brooks (1825-1893), an Episcopal priest. The original tune was composed by Lewis Redner (1831-1908), who was organist at Brooks’ church. In England the song is more commonly sung to an English folk melody arranged by Ralph Vaughan Williams (1872-1958) in about 1903. Our recording follows the English tune.
We hope you enjoy our song. Merry Christmas from our family to yours!
Andrew Vavrek is a major proponent of the Free Music movement, and this song was released under a Creative Commons license which specifically allows redistribution and even derivative works. One of the rules of this license is that derivative works also use the same or equivalent license, and so my recording is licensed using the same. Feel free to share, redistribute, and make derivative works, as long as you give appropriate attribution.
My idea to record this song dates back to about 2007-2008, when I was reflecting on the healing power of forgiveness because of a few personal experiences. I took the liberty of altering the song’s lyrics to reflect this. (For more info, read my story about Dr. Stang.)
This song was next on the list for recording in 2008, but my music hobby was derailed and all but extinguished by my busy schedule that year (and for the next several years). I did program the drum part in 2008 using Hydrogen, and when I decided to recommence work on the recording in 2017 I found the old Hydrogen file in my archive, dusted it off, and used it with only minor changes. This was my first recording which used Ardour from start to finish, and I learned a lot about the software during the recording. The more I use Ardour, the more I like it.
About the Album
While working on this recording I also struggled with a decision about the album, which had the working title of “Moldy Oldies.” This is not the most attractive name, so I toyed with some other options. Eventually it dawned on me that I could simply re-open work on the Lost and Found album, and finish the project I gave up on so long ago.
I reorganized the website to merge “Moldy Oldies” with “Lost and Found” and I have updated the mp3 tags for Alpha, Lullabye, and Omega to reflect this. The track list is currently in flux, but is starting to take shape. Right now it looks something like this:
Standing On High
Check back here for updates or follow the blog to hear new songs as they are finished!
In 2012 I found a MIDI keyboard controller at a yard sale for $10, and I couldn’t pass it up even though I’m not much of a keyboardist. Once I brought it home I had to find a way to use it, and that search led me to LMMS. There are many good tutorials and other documentation which cover every aspect of installing, configuring, and using LMMS, and I’m not trying to duplicate any of those efforts. This article is meant more as a review and a memoir than as a how-to guide.
LMMS is an obsolete acronym for “Linux Multimedia Studio,” which made for an awkward name when it became a cross-platform application. The website currently says “Let’s Make Music” in the top banner, which would work for the acronym if we could think of a word that starts with “S” to add to it. (Any suggestions? How about: “Let’s Make Music, Sonny?” Yeah, nevermind.)
Actually, I did a bit of reading before I settled on LMMS. The other option was to use JACK to connect Rosegarden or some other MIDI sequencer to a software synthesizer. But I didn’t want to mess around with different applications held together with duct tape and chewing gum in some MacGyver-ish Linux audio setup; I just wanted to plug in my new toy and play with it. So I opted for the all-in-one approach of a single application which acts as a sequencer and a synthesizer, which is LMMS.
As I have said before, this approach differs somewhat from the traditional Unix philosophy of connecting together small, modular tools. But the Unix philosophy applies to the design of a software application, not necessarily to the preferred behavior of its end users. The web browser you are using to read this article may or may not have a modular design under the hood, but you probably use it as an all-in-one solution and its modularity is transparent to you as the user. Would you rather open a terminal and use wget to retrieve the html document, then pipe it to some html rendering engine? Yeah, I didn’t think so.
Anyway, as I was saying, LMMS is a really nice environment for composing electronic music. I was impressed with it from the first time I used it, and I am still pleased at what a capable synthesizer it is. I am not an LMMS guru, or a sound engineer, or even a music theory expert, but let me give you a quick description of its tools and how I used them to make a few songs. When you first launch LMMS you are greeted with an empty project which has one each of the four possible track types: instrument, sample, beat+bassline, and automation. In the following sections we will review three of these four; I haven’t used sample tracks in a song yet, so I’ll revisit that topic in a future post.
WARNING: This article is for geeks only. You may have noticed that already. Proceed at your own risk. I do recommend that you download the LMMS song project files from the links below and open them in the program as you read my descriptions.
This was the first song I sketched out on LMMS, cutting my teeth on how to organize a project, edit the piano roll, make a drumbeat, and shape the waveform and envelope of the sounds. The tune was a fingerstyle guitar piece that really lent itself to decomposition into melody and arpeggio parts, and I felt like a kid in a candy store designing the sounds to use in the verse and chorus for those two parts. The Triple Oscillator synthesizer in LMMS is based on the manual controls of an old analog synthesizer (like the Mini Moog), and it is easy to recreate classic old-school sounds. I am an old fan of Kraftwerk and Switched-On Bach, so these old sounds take me to a happy place. Every sound in this song, including the percussion sounds, was produced with the Triple Oscillator.
I settled on a percussive buzzy sound for the arpeggios and a softer, more sustained sound for the melodies. My favorite sound was the verse melody, which used a low frequency oscillator to produce a delayed vibrato effect. The most versatile sound is the chorus arpeggio, which is also used to make the echo chords during the intro and adds to the bass texture during the song’s interlude. A fifth sound was added for the bass, and once I had the five voices I arranged the piece as a sort of instrumental folk song, with every voice taking a solo on the different melodies.
I have two quick tips for the beginning electronic composer, both illustrated in this song. They are subtle things, but they make a huge difference to the listener. First, separate your sounds in stereo space. Notice how the two arpeggio voices are sonically similar, but separating them into the left and right speakers makes them more discernible. Second, adjust the volume of individual notes to make the phrasing more expressive. I can’t overstate how much this helps the listener connect with your song. This phrasing is done naturally by trained musicians on an instrument, but must be done manually and intentionally when you are programming events in a piano roll editor. This tip applies equally to percussion sounds.
And speaking of percussion, I had a lot of fun shaping the drum sounds on this song. Two in particular are worth mentioning: The Rattle sound used on the backbeat was made using a low frequency oscillator, and I thought it sounded like a guiro or washboard sound. I stumbled upon the Ride sound by playing with different ways to combine the oscillators, and then I ran it through a HiPass filter to remove the lower frequencies from the sound. It sounds a bit like a tambourine to me.
Here is where we talk about the Beat+Bassline Editor, which is the place to program your drum patterns. Notice that all of the drum sounds are here, not in the Song Editor window. Each pattern you create in the Beat+Bassline Editor appears as a separate track in the Song Editor, where you simply control where each pattern appears in the composition, as you can see in the Song Editor screenshot above. Sounds can be copied from the Song Editor to the Beat+Bassline Editor by holding the Ctrl key and dragging the handle on the far left of the track, so if you start making a sound in the wrong place you can move it later.
Once I felt a bit comfortable with using LMMS I had the idea to revisit this old song of mine that was never recorded very well in the past. The bass, organ, and guitar are very similar to the old recordings, but I added two sounds to this arrangement which I think added a lot of interesting texture and I will describe how I made them here.
The Zap sound was inspired by an echo effect used by Jason Hissong on the song “Perfect Machine.” With just a few notes you can fill the audio canvas with sound. The Triple Oscillator is set to a square wave, run through the Feedback Delay Line effect.
The bassvibe sound demonstrates a subtle stereo panning effect where the pitch of the note determines the stereo position of the note. This is the kind of ear candy that is easier to do in electronic music than in other types of recordings, and which acts as a little love note that only your audiophile listeners will appreciate. (Sometimes when I hear the opening sequence of Kraftwerk’s “Electric Cafe” on nice headphones I say, with tears in my eyes, “I love you too, Florian!”)
A final point about Alpha is the obvious fact that there is an organic instrument mixed in with the synthesizers. I did not record the guitar with LMMS, and as far as I can tell there is no way of doing that (although this song by Jens Hochapfel is an interesting approach to work around the same limitation). So I took a mixdown of the song from LMMS and imported it into Ardour to record the guitar and do the final mixdown. There was a bit of back and forth because recording the guitar part led to some changes in the overall composition and I went back to LMMS several times to revise my work. This made for an awkward workflow at times, especially as I found that LMMS didn’t play very well with JACK on my system, so I may try a different approach the next time I do a composition with mixed electronic and organic instruments.
This song came together quickly because the composition needed less work than the others, and because I knew my way around LMMS better by the time I started working on this. There are a few techniques I want to point out in this song project.
First, the Beat+Bassline function can be really useful for any sound sequence which repeats unchanged. I used it here for the drum pattern, and also for the Duck and Reverse sounds. I could have used it for the bass and flute sounds, which also repeat themselves, but it turned out to be easier to just copy/paste a pattern in the song editor in that case because they repeat with changes. The bass track on “Alpha” was a bit of a mess for that reason, which is why I took this approach on “Omega.”
Second, the Flute sound was produced by softening the attack on the sound envelope. A similar but more extreme technique produced the fade-in of the Reverse sound. When you are shaping sounds in your synthesizer, make sure you play with the envelope to see what it does to your sound, and you will be pleasantly surprised with how much you can do.
I also played with some different synthesizers on this project, branching out a bit from the Triple Oscillator. The Duck sound was made using the BitInvader plugin, which allows you to graphically draw the waveform you want with the mouse pointer. I actually used the “pluck.xpf” preset, and added the C* AutoWah effect. (Just for the record, I wasn’t trying to make this track sound like a duck. I named it the duck sound only after my wife pointed out the resemblance.) The Harp sound was made with the Opulenz plugin, which produces very clean and pleasing sounds. I will surely revisit that plugin in other projects.
Finally, I used automation to produce a fadeout at the end of the song. To do this I created an automation track, then clicked in that track to define the desired region. Then I found the control slider for the master volume and held the Ctrl key while click-dragging that control to the region in the automation track. Then I double-clicked on the region to draw the volume envelope I wanted for the fadeout. You can use discrete, linear, or cubic Hermite (curved) progression. A volume fadeout is a trivial use of automation, but it can be used to produce any dynamic change to any control in LMMS. (As Zombo.com says, “The only limit is yourself.”) Also, different regions in an automation track can be attached to different controls, and you can have as many regions as you want in a single automation track.
LMMS is a great tool for shaping electronic sounds, and allows you to get as granular and geeky as you want. There are a lot of preset sounds, and the library gets better with every release, but you are by no means limited to using the presets. The interface is easy enough for kids to use, although I will admit that my kids are kind of geeky. I have never had so much fun with sound engineering as I have since I started working with LMMS. It is not a general-purpose tool, and there are some important things that it can’t do, but it is very good at what it does do.
The overall verdict: Two thumbs up for LMMS, which has become a permanent fixture in my Linux home studio. I offer a big congratulations and thanks to the developers for making such a fantastic tool.
P.S. Please feel free to play with my songs, remix them, rewrite them, or whatever else you want to do. Please post a comment below with a link to your derivative work!
Last year in late November my family sang Away in a Manger together, and my wife commented on how good we sounded with the kids making up their own harmonies. She suggested that we should record it for our Christmas card and email it out to our friends and family.
I said, “Then we need a Focusrite Scarlett 2i2 USB audio interface!” She was taken aback at my sudden and specific declaration. So I showed her the website on my phone, which I happened to have open. “See? It’s The Best Selling USB Audio Interface in the World!”
“It sounds like you’ve been looking at this for a while,” she deduced, correctly.
“But clearly we really need one,” I insisted.
So it was settled, and I ordered it within the next week. It cost $151.19 from the Focusrite website, which I thought was a reasonable price. When it arrived we recorded our song and sent it out to our family and friends. The Scarlett 2i2 performed like a champ, and I was pleased with the sound quality of its input. Here is our recording:
A bit of back story: Through the early 2000’s I used my computer’s sound card as my audio input, with variable results. “Lint in my Pocket” and “Aurelia Aurita” were probably the worst results, as the dying old sound card produced a lot of crackling and popping on the recording. The other New Folder recordings were better because I had a new computer with a healthier sound card, and Lost and Found was better still because I started paying attention to the noise level and applying a graphic equalizer to filter out the worst of it. But right when I felt like my recording technique was starting to mature, three things happened to derail me for a decade: 1) Medical education took over my life like a cancer, sucking up all of my free time and strangling all of my hobbies, 2) my wife and I had a bunch of kids, and 3) I migrated to Linux.
The first two of these were certainly more important, but the third one was not inconsequential. I have been tinkering with Linux since 2000, and in about 2006 I made it my primary platform for home computing. I was a poor student, and Free Software was also generally free software. But audio on desktop Linux can be a bit temperamental, and I never could get my audio inputs working the way they worked on Windows or OS X. I did manage to use Linux applications to make music (the drum part from “Rising Sun” was programmed on Hydrogen, and I started playing with LMMS in about 2012), but these don’t require audio input to work. What I really wanted was to record guitar on Linux, and I didn’t have a good way to do that.
Enter the Scarlett 2i2. From what I read online, it was a low-latency USB audio interface with good Linux support. Several people on Linux audio discussion forums reported that the device worked for them, including a few using a software environment similar to mine (for the record: Linux Mint 17.1, Linux kernel 3.13.0-37-generic, Audacity 2.0.5, JACK 5, QjackCtl 0.4.5, Ardour 5.11.4).
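If you are trying to reproduce this setup, a quick sanity check from the terminal shows whether ALSA can see the Scarlett before you involve JACK at all. This is only a sketch; the card name (“USB” below) is an assumption and may appear differently on your system, so check the device lists first:

```shell
# List playback and capture devices; the Scarlett should appear as a USB audio card
aplay -l
arecord -l

# Start JACK directly against the Scarlett's ALSA device
# (the command-line equivalent of selecting it in qjackctl's Setup dialog)
jackd -d alsa -d hw:USB -r 44100 -p 256 -n 3
```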
I should add a disclaimer here that I am neither a sound engineer nor a software engineer. I am a mere hobbyist in both realms, knowing only enough to be dangerous, and I am really not an expert. But what my story lacks in authority it makes up for in authenticity, and hopefully some other mere mortal out there will be encouraged to learn that a knucklehead like me was able to figure this out. I will assume that you know the basics about JACK, and that you can install software packages on your system.
Using the Scarlett with Audacity
When the box arrived in the mail I eagerly opened it and plugged the Scarlett 2i2 in, but it took me a few minutes to figure out how to use it. For some reason I was expecting it to appear as an input/output device in JACK, and I was confused when nothing appeared in the qjackctl connections window. After a few minutes of poking around I stopped the JACK server and opened Audacity, which is my fall-back standalone audio application in Linux. JACK is nice, and is very powerful, but I admit that its full capabilities are a bit beyond me and I am really happy when I can just get it to work at all. But Audacity I can understand. All of its functionality is in one application, and no supporting software is required. I realize that the all-in-one approach is not very Unix-y but I am more of a pragmatist than a purist when it comes to software.
Audacity has its audio I/O configuration right on the menu bar, and the Scarlett 2i2 showed up in the drop-down menu automagically! I don’t like to use Audacity for big projects, but I used it for “Away in a Manger” because it worked and because I was in a hurry to get this song recorded before Christmas.
Using it with JACK
When I recorded the guitar parts for “Alpha” I wanted to use Ardour, which is a more serious tool. This is where I had to buckle down and figure out how to get the Scarlett 2i2 working on JACK. After reading several forums I stumbled upon the solution 2 or 3 screens into a comment thread, which described a setting buried deep in the qjackctl Setup configuration dialog, on the Advanced tab (see images). The Scarlett 2i2 shows up in the drop-down menu on the “Output Device” and “Input Device” settings.
Using it with Ardour
Next I had to connect the Scarlett 2i2 input to the Ardour track I wanted to record into. This took a bit of poking around and consulting the Ardour reference manual, but it turned out to be as simple as choosing “Inputs…” from the menu which appears when you right-click on the track name. This opens a new window, where you simply choose the “capture_1” input, as shown in the screen shot.
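The same patching can also be done from the terminal with the example clients that ship with JACK, which is a handy way to confirm the capture port is actually there. The Ardour port name below is a guess; run jack_lsp first to see the exact names in your session:

```shell
# Show every port JACK currently knows about
jack_lsp

# Connect the Scarlett's first capture channel to an Ardour track input
# (the exact Ardour port name depends on your track names)
jack_connect system:capture_1 "ardour:Audio 1/audio_in 1"
```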
Quirks and Other Observations
While I had the Scarlett 2i2 designated as the input and output device in JACK and listened with my headphones plugged into the monitor output on the Scarlett it worked beautifully, but if I set the output device as the computer’s soundcard then I had intolerable latency. I don’t have external speakers on my computer (an all-in-one Dell Inspiron), so if I want to hear the song through the computer speakers I have to reconfigure JACK. This is easy to do by saving a configuration preset in qjackctl. Maybe I will invest in a set of external monitor speakers to plug into the Scarlett 2i2 so that I don’t have to dig through the settings every time I want to record something.
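Incidentally, qjackctl can load a saved preset from the command line, which makes flipping between the Scarlett and the built-in soundcard a bit less painful. A sketch, assuming you have already saved presets named “scarlett” and “builtin” in the Setup dialog:

```shell
# Launch qjackctl with a named preset and start the JACK server immediately
qjackctl --preset=scarlett --start

# ...and later, to switch back to the built-in soundcard:
qjackctl --preset=builtin --start
```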
The overall verdict: The Focusrite Scarlett 2i2 is an awesome and affordable device which works well in a Linux environment once you learn how to configure JACK. It is now an indispensable part of my home studio, and I can’t imagine how I ever recorded without it.