After my recent post about authoring guitar tablature on Linux, my son asked me for the tablature for a few other songs because he is taking a guitar class this semester in school. Here are the scores, all authored with MuseScore:
While working on “Something Wrong” I had a bit of a flashback about the circumstances that inspired me to write the song back in 1997-98, so I wrote the story and posted it on my Latter-day Doctor blog. Enjoy!
I am a self-taught guitarist, learning to play mostly by ear. When I was 10 years old my dad gave me a chord chart and a few John Denver songs to learn, and I was off to the races. As long as I had the chords and knew the tune, I could play any song. When I was 15 years old my friend showed me a guitar magazine that had the music for a song we both liked, and it was written in tablature. “What’s this?” I asked.
“Tablature,” he explained. “The lines are the strings, and the numbers are the frets.” I stared at it for a few minutes, and then tried to play a few bars. My friend let me take the magazine home, and I learned how to play the whole song that day. Reading standard music notation has never been easy for me, but tablature is simple to understand because I think of guitar music in terms of where I put my fingers on the fretboard, not in terms of the names of the notes I am playing. Learning about tablature opened up a whole new world of guitar music and playing technique for me. When my garage band broke up I spent a lot of time writing down all of our songs in tablature so that I wouldn’t forget how to play them.
There are two Free software tools that I use for writing guitar tablature on Linux, and I will review both here.
I have been using Tux Guitar since at least 2012, when I arranged some music for a Christmas guitar duet with my friend Erik Aagard (who is ten times a better guitarist than I am). It was a simple arrangement, and I needed a simple tool to write it down. Tux Guitar was just what I needed to get the job done, and that splash screen of Tux holding a Les Paul is pretty awesome.
It wasn’t easy to set up, though. Tux Guitar is really just a notation editor and sequencer, and does not include a synthesizer. When you install Tux Guitar you also need a software synthesizer in order to hear any sound output. I connect it to Qsynth, which is a GUI front-end for Fluidsynth. The connection can be made through JACK or directly between the two programs. When I start a Tux Guitar session I have to start Qsynth first, and then make sure the two programs are talking to each other before I can proceed with the notation project I have in mind. If this sounds complicated, that’s because it can be. On one of my computers I have never been able to get the two programs to successfully talk to one another, and for the life of me I can’t figure out why. It can be pretty frustrating to struggle with the Qsynth and Tux Guitar settings when really what I want to do is fire up a piece of software and get my creativity going. Tux Guitar will still work just fine without a synthesizer attached; you just won’t be able to hear any sound output (which kind of defeats the purpose). Note that I have only ever used Tux Guitar on Linux. I have no idea whether the Windows or Mac OS versions have the same issue.
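For readers who would rather make the connection from the command line than wrestle with the GUIs, here is a rough sketch of the ALSA-sequencer route. The soundfont path and the client names are examples from my system and will almost certainly differ on yours:

```shell
# Start FluidSynth as a standalone server (no interactive shell),
# using ALSA for audio and the ALSA sequencer for MIDI input.
# The soundfont path is the Debian/Ubuntu default; adjust for your distro.
fluidsynth -is -a alsa -m alsa_seq /usr/share/sounds/sf2/FluidR3_GM.sf2 &

# List the available ALSA sequencer ports to find the client names.
aconnect -l

# Wire Tux Guitar's MIDI output to FluidSynth's input.
# "TuxGuitar" and "FLUID Synth" are the client names as they appear
# for me; copy the exact names from your own aconnect -l output.
aconnect 'TuxGuitar:0' 'FLUID Synth:0'
```

Qsynth does essentially the same thing with sliders and buttons, but when the GUI connection mysteriously refuses to work, checking `aconnect -l` at least tells you whether both programs have registered their MIDI ports.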
Once you get Tux Guitar up and running it is a very capable program. It can do polyphony up to four voices in a single track, and also has multitrack features. The tablature notation is very rich, and includes all of the standard fretboard techniques such as hammer-on/pull-off, pitch bending, vibrato, muting, etc. If you can’t figure out how to do something using the GUI, the local help files and online documentation are very good.
For most of the last 5 years I have been using Tux Guitar exclusively for my tablature notation projects, most of which are solo fingerstyle guitar pieces. As an example, here is a score for the solo fingerstyle guitar version of Lullabye: [Tux Guitar file | PDF]. As you can see, Tux Guitar outputs a clean but not especially beautiful score, which looks like it was made by a computer. (For the record, I am using Tux Guitar 1.3.1.)
I have been overall content to use Tux Guitar, aside from the synthesizer annoyance described above, but I recently discovered a few other limitations. This post was originally conceived as an introduction to Tux Guitar, sort of like my previous post about LMMS, and I set to work on the tablature score for a song I recently recorded called “My Abode,” which was going to be the centerpiece of this blog post. But I found that Tux Guitar’s multitrack tools don’t scale very well, and the interface becomes quite clunky when you have more than one track in a song. It also didn’t handle the vocal track very well, and required me to program it using tablature instead of standard music notation. The interface for copying and pasting in Tux Guitar is horrible, and it is almost easier to just type in all of the notes a second time rather than copying and pasting. These limitations were all potentially tolerable, but the show-stopper came when I tried to export a complete multitrack score, which apparently Tux Guitar cannot do. It also would freeze every time I tried to print a score to file, and Tux Guitar does not have native tools to export to PDF. Here is the Tux Guitar version of “My Abode,” which I didn’t complete: [Tux Guitar file ].
Enter MuseScore, a cross-platform music notation editor. I have been using this program since 2016, when I became the choir director at my church. (No, I don’t conduct the Mormon Tabernacle Choir, just my local congregation’s choir.) It is a fantastic program for music notation, and beats the pants off of Denemo in terms of usability and the visual quality of the scores it generates. Until recently I have been using it only for choral church hymn arrangements, but when I ran into the problems with Tux Guitar I described above, I decided to test out MuseScore’s tablature capabilities. (For the record, I am using MuseScore 2.0.2.)
First of all, unlike Tux Guitar, MuseScore has a built-in synthesizer. It is so nice to just fire up your software, and it’s ready to go with no fuss involved. Also, I already knew that MuseScore is a powerful notation editor with great multitrack capabilities and an easy, intuitive copy/paste interface, and that it produces the most beautiful musical scores you can find in the Free software world, so the Tux Guitar limitations I described above don’t apply to MuseScore.
I was pleasantly surprised at how well MuseScore does with tablature, although its feature set in this arena is not quite as rich as Tux Guitar’s. I couldn’t figure out a way to do pitch bending, hammer-on/pull-off, or slides, and the online documentation pulled up no results when I searched for these terms. The best I could do was put a slur between notes. Fortunately “My Abode” only has a few hammer-ons and pull-offs, so I was able to fudge it with slurs. The mandolin solo has a few slides, which I just glossed over. I’m glad there wasn’t any pitch bending in the song.
But the overall strength and ease of use, not to mention the aesthetics of the final product, more than overcame these minor limitations. Here is the final version of “My Abode” as programmed in MuseScore: [MuseScore file | PDF].
Linux has capable tools for guitar tablature. Tux Guitar is a specialized tablature editor with a rich palette for this type of notation. MuseScore is a general purpose musical notation editor with a capable though somewhat limited tablature tool set. At this point in time I think Tux Guitar is the better tool for notating complex guitar work, especially for solo guitar. But MuseScore clearly shines when you are trying to write longer multitrack scores, with mixed instruments. If MuseScore would fill in some of the blanks in its tablature notation palette then it would be the clear winner.
I just finished a new recording for the Lost and Found album, a song called “My Abode.” (Download the mp3)
words and music by Alan Sanderson
The road is my abode
It will love me — it will kiss me
The road is my abode
It will hold me — hold me
I turned to you when I was lonely
And you sent me on my way —
The street is my retreat
And I go there when there’s nowhere else to go
And I know
I belong — I belong
I turned to you when I was lonely
And you sent me on my way —
And I don’t know which way I’m going
And I don’t know which way I’ve been
And I don’t know where you want me to be
But I know I’m far from home
I turned to you when I was lonely
And you sent me on my way
You said my way was your way
But your way —
Your way is my way home
(Dedicated to the memory of Meggan Mackey, 1974-1999)
About the Song
This song was on the original track list for the Lost and Found album, and I feel that it is one of its most important songs. I wrote it when I was 18 years old, and it captures the emotions and thoughts I had during an important transition in my life. Over the previous year or so I had suffered the loss of some important friendships, including a girlfriend who broke up with me. The first verse is about the loneliness and bitterness I felt about this, and it was intended to be a little melodramatic.
After I graduated from high school I started cycling 50-100 miles a week, which I found to be very therapeutic for my loneliness. (I actually composed the song during my 7-mile ride home from work one day.) The second verse is about cycling, and the chorus is meant literally in that context.
The third verse is about the spiritual changes that were starting to happen in me as I studied the scriptures every day and prepared to serve a mission. It was becoming clear to me that the Lord wanted me to give up my pride and turn to him. I had been spiritually lost, but the Lord had found me and shown me the way home. This was the first song I ever wrote on a religious theme, and at the time I found it to be an uncomfortable subject to write about plainly, hence the somewhat obscure language.
On the day when I wrote the lyrics I went to visit my cousin’s apartment and worked on the fingering for the song on her roommate Meggan’s guitar. Meggan died of cancer about a year later while I was serving my mission, and so I have always connected her memory with this song.
About the Recording
I made a rough analog multitrack recording of this song in early 1998, which had a faster tempo and more raw guitar sounds. (My nose was a bit stuffy from a head cold that day.)
My Abode – 1998 recording
In about 2005 when I was teaching myself fingerstyle guitar I reworked the fingering and slowed down the tempo, which turned the song into more of a ballad. The new arrangement sounded like it needed a mandolin part, so I bought one and learned to play it for the new recording. (I have been wanting to buy a mandolin for over a decade.) I think you will agree that the new recording beats the old one by a fair distance.
This is an American song written in 1868, with the beautiful text written by Phillips Brooks (1825-1893), an Episcopal priest. The original tune was composed by Lewis Redner (1831-1908), who was organist at Brooks’ church. In England the song is more commonly sung to an English folk melody arranged by Ralph Vaughan Williams (1872-1958) in about 1903. Our recording follows the English tune.
We hope you enjoy our song. Merry Christmas from our family to yours!
Andrew Vavrek is a major proponent of the Free Music movement, and this song was released under a Creative Commons license which specifically allows redistribution and even derivative works. One of the rules of this license is that derivative works also use the same or equivalent license, and so my recording is licensed using the same. Feel free to share, redistribute, and make derivative works, as long as you give appropriate attribution.
My idea to record this song dates back to about 2007-2008, when I was reflecting on the healing power of forgiveness because of a few personal experiences. I took the liberty of altering the song’s lyrics to reflect this. (For more info, read my story about Dr. Stang.)
This song was next on the list for recording in 2008, but my music hobby was derailed and all but extinguished by my busy schedule that year (and for the next several years). I did program the drum part in 2008 using Hydrogen, and when I decided to recommence work on the recording in 2017 I found the old Hydrogen file in my archive, dusted it off, and used it with only minor changes. This was my first recording which used Ardour from start to finish, and I learned a lot about the software during the recording. The more I use Ardour, the more I like it.
About the Album
While working on this recording I also struggled with a decision about the album, which had the working title of “Moldy Oldies.” This is not the most attractive name, so I toyed with some other options. Eventually it dawned on me that I could simply re-open work on the Lost and Found album, and finish the project I gave up on so long ago.
I reorganized the website to merge “Moldy Oldies” with “Lost and Found” and I have updated the mp3 tags for Alpha, Lullabye, and Omega to reflect this. The track list is currently in flux, but is starting to take shape. Right now it looks something like this:
Standing On High
Check back here for updates or follow the blog to hear new songs as they are finished!
In 2012 I found a keyboard midi controller at a yard sale for $10, and I couldn’t pass it up even though I’m not much of a keyboardist. Once I brought it home I had to find a way to use it, and that search led me to LMMS. There are many good tutorials and other documentation which cover every aspect of installing, configuring, and using LMMS, and I’m not trying to duplicate any of those efforts. This article is meant more as a review and a memoir than as a how-to guide.
LMMS is an obsolete acronym for “Linux Multimedia Studio,” which made for an awkward name when it became a cross-platform application. The website currently says “Let’s Make Music” in the top banner, which would work for the acronym if we could think of a word that starts with “S” to add to it. (Any suggestions? How about: “Let’s Make Music, Sonny?” Yeah, nevermind.)
Actually, I did a bit of reading before I settled on LMMS. The other option was to use JACK to connect Rosegarden or some other midi sequencer to a software synthesizer. But I didn’t want to mess around with different applications held together with duct tape and chewing gum in some MacGyver-ish Linux audio setup; I just wanted to plug in my new toy and play with it. So I opted for the all-in-one approach of a single application which acts as a sequencer and a synthesizer, which is LMMS.
As I have said before, this approach differs somewhat from the traditional Unix philosophy of connecting together small, modular tools. But the Unix philosophy applies to the design of a software application, not necessarily to the preferred behavior of its end users. The web browser you are using to read this article may or may not have a modular design under the hood, but you probably use it as an all-in-one solution and its modularity is transparent to you as the user. Would you rather open a terminal and use wget to retrieve the html document, then pipe it to some html rendering engine? Yeah, I didn’t think so.
Anyway, as I was saying, LMMS is a really nice environment for composing electronic music. I was impressed with it from the first time I used it, and I am still pleased at what a capable synthesizer it is. I am not an LMMS guru, or a sound engineer, or even a music theory expert, but let me give you a quick description of its tools and how I used them to make a few songs. When you first launch LMMS you are greeted with an empty project which has one each of the four possible track types: instrument, sample, beat+bassline, and automation. In the following sections we will review three of these four; I haven’t used sample tracks in a song yet, so I’ll revisit that topic in a future post.
WARNING: This article is for geeks only. You may have noticed that already. Proceed at your own risk. I do recommend that you download the LMMS song project files from the links below and open them in the program as you read my descriptions.
This was the first song I sketched out on LMMS, cutting my teeth on how to organize a project, edit the piano roll, make a drumbeat, and shape the waveform and envelope of the sounds. The tune was a fingerstyle guitar piece that really lent itself to decomposition into melody and arpeggio parts, and I felt like a kid in a candy store designing the sounds to use in the verse and chorus for those two parts. The Triple Oscillator synthesizer in LMMS is based on the manual controls of an old analog synthesizer (like the Mini Moog), and it is easy to recreate classic old-school sounds. I am an old fan of Kraftwerk and Switched-On Bach, so these old sounds take me to a happy place. Every sound in this song, including the percussion sounds, was produced with the Triple Oscillator.
I settled on a percussive buzzy sound for the arpeggios and a softer, more sustained sound for the melodies. My favorite sound was the verse melody, which used a low frequency oscillator to produce a delayed vibrato effect. The most versatile sound is the chorus arpeggio, which is also used to make the echo chords during the intro and adds to the bass texture during the song’s interlude. A fifth sound was added for the bass, and once I had the five voices I arranged the piece as a sort of instrumental folk song, with every voice taking a solo on the different melodies.
I have two quick tips for the beginning electronic composer, which are both illustrated in this song. They are both subtle things but they make a huge difference to the listener. First, separate your sounds in stereo space. Notice how the two arpeggio voices are sonically similar, but separating them into the left and right speakers makes them more discernible. Second, adjust the volume of individual notes to make the phrasing more expressive. I can’t overstate how much this helps the listener connect with your song. This phrasing is done naturally by trained musicians on an instrument, but must be done manually and intentionally when you are programming events in a piano roll editor. This tip applies equally to percussion sounds.
And speaking of percussion, I had a lot of fun shaping the drum sounds on this song. Two in particular are worth mentioning: The Rattle sound used on the backbeat was made using a low frequency oscillator, and I thought it sounded like a guiro or washboard sound. I stumbled upon the Ride sound by playing with different ways to combine the oscillators, and then I ran it through a HiPass filter to remove the lower frequencies from the sound. It sounds a bit like a tambourine to me.
Here is where we talk about the Beat+Bassline editor, which is the place to program your drum patterns. Notice that all of the drum sounds are here, and not in the Song Editor window. Each pattern you create in the Beat+Bassline Editor appears as a separate track in the Song Editor window, where you simply control where each Beat+Bassline pattern appears in the composition, as you can see in the Song Editor window screenshot above. Sounds can be copied from the Song Editor window to the Beat+Bassline Editor window by holding the Ctrl key and dragging the handle on the far left of the track, so if you start making a sound in the wrong place you can move it later.
Once I felt a bit comfortable with using LMMS I had the idea to revisit this old song of mine that was never recorded very well in the past. The bass, organ, and guitar are very similar to the old recordings, but I added two sounds to this arrangement which I think added a lot of interesting texture and I will describe how I made them here.
The Zap sound was inspired by an echo effect used by Jason Hissong on the song “Perfect Machine.” With just a few notes you can fill the audio canvas with sound. The Triple Oscillator uses a square wave, and I used the Feedback Delay Line effect.
The bassvibe sound demonstrates a subtle stereo panning effect where the pitch of the note determines the stereo position of the note. This is the kind of ear candy that is easier to do in electronic music than in other types of recordings, and which acts as a little love note that only your audiophile listeners will appreciate. (Sometimes when I hear the opening sequence of Kraftwerk’s “Electric Cafe” on nice headphones I say, with tears in my eyes, “I love you too, Florian!”)
A final point about Alpha is the obvious fact that there is an organic instrument mixed in with the synthesizers. I did not record the guitar with LMMS, and as far as I can tell there is no way of doing that (although this song by Jens Hochapfel is an interesting approach to work around the same limitation). So I took a mixdown of the song from LMMS and imported it into Ardour to record the guitar and do the final mixdown. There was a bit of back and forth because recording the guitar part led to some changes in the overall composition and I went back to LMMS several times to revise my work. This made for an awkward workflow at times, especially as I found that LMMS didn’t play very well with JACK on my system, so I may try a different approach the next time I do a composition with mixed electronic and organic instruments.
This song came together quickly because the composition needed less work than the others, and because I knew my way around LMMS better by the time I started working on this. There are a few techniques I want to point out in this song project.
First, the Beat+Bassline function can be really useful for any sound sequence which repeats unchanged. I used it here for the drum pattern, and also for the Duck and Reverse sounds. I could have used it for the bass and flute sounds, which also repeat themselves, but it turned out to be easier to just copy/paste a pattern in the song editor in that case because they repeat with changes. The bass track on “Alpha” was a bit of a mess for that reason, which is why I took this approach on “Omega.”
Second, the Flute sound was produced by softening the attack on the sound envelope. A similar but more extreme technique produced the fade-in of the Reverse sound. When you are shaping sounds in your synthesizer, make sure you play with the envelope to see what it does to your sound, and you will be pleasantly surprised with how much you can do.
I also played with some different synthesizers on this project, branching out a bit from the Triple Oscillator. The Duck sound was made using the BitInvader plugin, which allows you to graphically draw the waveform you want with the mouse pointer. I actually used the “pluck.xpf” preset, and added the C* AutoWah effect. (Just for the record, I wasn’t trying to make this track sound like a duck. I named it the duck sound only after my wife pointed out the resemblance.) The Harp sound was made with the Opulenz plugin, which produces very clean and pleasing sounds. I will surely revisit that plugin in other projects.
Finally, I used automation to produce a fadeout at the end of the song. To do this I created an automation track, then clicked in that track to define the desired region. Then I found the control slider for the master volume and held the Ctrl key while click-dragging that control to the region in the automation track. Then I double-clicked on the region to draw the volume envelope I wanted for the fadeout. You can use discrete, linear, or cubic Hermite (curved) progression. A volume fadeout is a trivial use of automation, but it can be used to produce any dynamic change to any control in LMMS. (As Zombo.com says, “The only limit is yourself.”) Also, different regions in an automation track can be attached to different controls, and you can have as many regions as you want in a single automation track.
LMMS is a great tool for shaping electronic sounds, and allows you to get as granular and geeky as you want. There are a lot of preset sounds, and the library gets better with every release, but you are by no means limited to using the presets. The interface is easy enough for kids to use, although I will admit that my kids are kind of geeky. I have never had so much fun with sound engineering as I have since I started working with LMMS. It is not a general-purpose tool, and there are some important things that it can’t do, but it is very good at what it does do.
The overall verdict: Two thumbs up for LMMS, which has become a permanent fixture in my Linux home studio. I offer a big congratulations and thanks to the developers for making such a fantastic tool.
P.S. Please feel free to play with my songs, remix them, rewrite them, or whatever else you want to do. Please post a comment below with a link to your derivative work!
Last year in late November my family sang Away in a Manger together, and my wife commented on how good we sounded with the kids making up their own harmonies. She suggested that we should record it for our Christmas card and email it out to our friends and family.
I said, “Then we need a Focusrite Scarlett 2i2 USB audio interface!” She was taken aback at my sudden and specific declaration. So I showed her the website on my phone, which I happened to have open. “See? It’s The Best Selling USB Audio Interface in the World!”
“It sounds like you’ve been looking at this for a while,” she deduced, correctly.
“But clearly we really need one,” I insisted.
So it was settled, and I ordered it within the next week. It cost $151.19 from the Focusrite website, which I thought was a reasonable price. When it arrived we recorded our song and sent it out to our family and friends. The Scarlett 2i2 performed like a champ, and I was pleased with the sound quality of its input. Here is our recording:
A bit of back story: Through the early 2000’s I used my computer’s sound card as my audio input, with variable results. “Lint in my Pocket” and “Aurelia Aurita” were probably the worst results, as the dying old sound card produced a lot of crackling and popping on the recording. The other New Folder recordings were better because I had a new computer with a healthier sound card, and Lost and Found was better still because I started paying attention to the noise level and applying a graphic equalizer to filter out the worst of it. But right when I felt like my recording technique was starting to mature, three things happened to derail me for a decade: 1) Medical education took over my life like a cancer, sucking up all of my free time and strangling all of my hobbies, 2) my wife and I had a bunch of kids, and 3) I migrated to Linux.
The first two of these were certainly more important, but the third one was not inconsequential. I have been tinkering with Linux since 2000, and in about 2006 I made it my primary platform for home computing. I was a poor student, and Free Software was also generally free software. But audio on desktop Linux can be a bit temperamental, and I never could get my audio inputs working the way they worked on Windows or OS X. I did manage to use Linux applications to make music (the drum part from “Rising Sun” was programmed on Hydrogen, and I started playing with LMMS in about 2012), but these don’t require audio input to work. What I really wanted was to record guitar on Linux, and I didn’t have a good way to do that.
Enter the Scarlett 2i2. From what I read online, it was a low-latency USB audio interface with good Linux support. Several people on Linux audio discussion forums reported that the device worked for them, including a few using a software environment similar to mine (for the record: Linux Mint 17.1, Linux kernel 3.13.0-37-generic, Audacity 2.0.5, JACK 5, QjackCtl 0.4.5, Ardour 5.11.4).
I should add a disclaimer here that I am neither a sound engineer nor a software engineer. I am a mere hobbyist in both realms, knowing only enough to be dangerous. But what my story lacks in authority it makes up for in authenticity, and hopefully some other mere mortal out there will be encouraged to learn that a knucklehead like me was able to figure this out. I will assume that you know the basics about JACK, and that you can install software packages on your system.
Using the Scarlett with Audacity
When the box arrived in the mail I eagerly opened it and plugged the Scarlett 2i2 in, but it took me a few minutes to figure out how to use it. For some reason I was expecting it to appear as an input/output device in JACK, and I was confused when nothing appeared in the qjackctl connections window. After a few minutes of poking around I stopped the JACK server and opened Audacity, which is my fall-back standalone audio application in Linux. JACK is nice, and is very powerful, but I admit that its full capabilities are a bit beyond me and I am really happy when I can just get it to work at all. But Audacity I can understand. All of its functionality is in one application, and no supporting software is required. I realize that the all-in-one approach is not very Unix-y but I am more of a pragmatist than a purist when it comes to software.
Audacity has its audio I/O configuration right on the menu bar, and the Scarlett 2i2 showed up in the drop-down menu automagically! I don’t like to use Audacity for big projects, but I used it for “Away in a Manger” because it worked and because I was in a hurry to get this song recorded before Christmas.
Using it with JACK
When I recorded the guitar parts for “Alpha” I wanted to use Ardour, which is a more serious tool. This is where I had to buckle down and figure out how to get the Scarlett 2i2 working on JACK. After reading several forums I stumbled upon the solution 2 or 3 screens into a comment thread, which described a setting buried deep in the qjackctl Setup configuration dialog, on the Advanced tab (see images). The Scarlett 2i2 shows up in the drop-down menu on the “Output Device” and “Input Device” settings.
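The qjackctl setting described above amounts to telling the JACK daemon which ALSA device to open. For the command-line curious, the equivalent invocation looks roughly like this; the `hw:USB` device name is an example from my machine, so check `aplay -l` to see what your Scarlett is called:

```shell
# Find the ALSA name of the Scarlett 2i2 (often "USB" or similar).
aplay -l

# Start JACK on the Scarlett directly, instead of the default soundcard.
# -d alsa        use the ALSA backend
# -d hw:USB      the capture/playback device (name varies by system)
# -r 44100       sample rate
# -p 256 -n 3    period size and periods per buffer (latency tuning)
jackd -d alsa -d hw:USB -r 44100 -p 256 -n 3
```

qjackctl is just filling in these same parameters for you; the “Output Device” and “Input Device” drop-downs on the Advanced tab correspond to the device argument of the ALSA backend.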
Using it with Ardour
Next I had to connect the Scarlett 2i2 input to the Ardour track I wanted to record into. This took a bit of poking around and consulting the Ardour reference manual, but it turned out to be as simple as choosing “Inputs…” from the menu which appears when you right-click on the track name. This opens a new window, where you simply choose the “capture_1” input, as shown in the screen shot.
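The same routing can also be done outside Ardour with the JACK command-line tools, which is a handy sanity check that the Scarlett’s capture port is actually visible to JACK. The Ardour port name below is only an illustration, since it depends on the track name in your session; `jack_lsp` shows the real names:

```shell
# List every port JACK currently knows about, including Ardour's.
jack_lsp

# Connect the Scarlett's first input channel to an Ardour track input.
# "system:capture_1" is the standard name for the first hardware input;
# copy the exact Ardour port name from the jack_lsp output.
jack_connect system:capture_1 "ardour:Audio 1/audio_in 1"
```

Choosing “Inputs…” from the track’s right-click menu in Ardour makes exactly this kind of connection, just with checkboxes instead of port names.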
Quirks and Other Observations
With the Scarlett 2i2 designated as the input and output device in JACK, and my headphones plugged into the monitor output on the Scarlett, it worked beautifully; but if I set the output device to the computer’s soundcard, the latency was intolerable. I don’t have external speakers on my computer (an all-in-one Dell Inspiron), so if I want to hear the song through the computer speakers I have to reconfigure JACK. This is easy to do by saving a configuration preset in qjackctl. Maybe I will invest in a set of external monitor speakers to plug into the Scarlett 2i2 so that I don’t have to dig through the settings every time I want to record something.
The overall verdict: The Focusrite Scarlett 2i2 is an awesome and affordable device which works well in a Linux environment once you learn how to configure JACK. It is now an indispensable part of my home studio, and I can’t imagine how I ever recorded without it.