Advancing in the Bash Shell

If you’ve ever used GNU/Linux, chances are good that you’ve used bash. Some people hold the belief that using a GUI is faster than using a CLI. These people have obviously never seen someone who uses a shell proficiently. In this tutorial, I hope to show you just a few of the amazing features bash provides that will increase your productivity in the shell.

Bang Bang and history

Everyone knows about bash history, right? You’d be surprised. Most modern distributions come with bash history enabled and working. If you’ve never done so before, try using the up and down arrow keys to scroll through your command history. The up arrow will cycle through your command history from newest to oldest, and the down arrow does, well, the opposite.

As luck would have it, different terminals handle arrow keys differently, so the brilliant minds behind bash came up with additional methods for accessing and making use of the command history. We’ll start with history. This command simply gives you a numbered list of the commands you’ve entered, with the oldest command having the smallest number. Simple, right?

Here’s an example of history output:

  190  ps -axu | grep htt
  191  /www/bin/apachectl start
  192  vi /usr/local/lib/php.ini
  193  cat /www/logs/error_log
  194  ps -auxw | grep http
  195  pwd

This brings us to bang-bang, or !!. !! tells bash "repeat the last command I entered." But the magic doesn’t stop there: if you order now, you’ll also receive !xyz. !xyz will allow you to run the last command beginning with xyz that you typed. Be sure to add enough to the abbreviation to make it unique or you could run into problems. For instance, in the example above, ps was run twice, then pwd. If you typed !p you’d get the output of pwd. Typing !ps is just enough to be unique and will execute the ps -auxw | grep http entry in history. Typing just enough to make your history request unique gives you a much better chance of hitting your targeted command.
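
For instance, using the history listing above, !ps would re-run the last ps command (bash echoes the expanded command before running it):

  $ !ps
  ps -auxw | grep http
  [output of ps]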

:p isn’t just an emoticon

If you need to be very sure of the command you’re targeting, :p can be a huge help. !xyz:p will print the command that would be executed rather than executing it. :p is also clever enough to add the printed command to your history list as the last command executed (even though it didn’t execute it) so that, if you decide that you like what was printed, a !! is all you need to make it happen, cap’n.
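
Using the same history listing, here is a rough sketch of how that plays out:

  $ !ps:p
  ps -auxw | grep http
  $ !!
  ps -auxw | grep http
  [output of ps]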

Bash provides a couple of methods for searching the command history. Both are useful in different situations. The first method is to simply type history, find the number of the command you want and then type !N where "N" is the number of the command you’d like to execute. (:p works here too.) The other method is a tad more complex but also adds flexibility. ^r (ctrl-r) followed by whatever you type will search the command history for that string. The bonus here is that you’re able to edit the command line you’ve searched for before you send it down the line. While the second method is more powerful, when doing some redundant task, it’s much easier to remember !22 than it is to muck with ctrl-r type searches or even the arrow keys.
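
For example, using the numbered history listing from earlier (the ctrl-r prompt is shown roughly as readline displays it):

  $ !192
  vi /usr/local/lib/php.ini
  [vi opens php.ini]

  (reverse-i-search)`apache': /www/bin/apachectl start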

Bang dollar-sign

!$ is the "end" of the previous command. Consider the following example. We start by looking for a word in a file:

  $ grep -i joe /some/long/directory/structure/user-lists/list-15

If joe is in that userlist, we want to remove him from it. We can either fire up vi with that long directory tree as the argument, or do it as simply as:

  $ vi !$

Which bash expands to:

  $ vi /some/long/directory/structure/user-lists/list-15

A word of caution: !$ expands to the end word of the previous command. What’s a word? The bash man page calls a word "A sequence of characters considered as a single unit by the shell." If you haven’t changed anything, chances are good that a word is a quoted string or a white-space delimited group of characters. What is a white-space delimited group of characters? It’s a group of characters that are separated from other characters by some form of white-space (which could be a tab, space, etc.) If you’re in doubt, :p works here too.
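
For instance (a quick illustration), a quoted string counts as a single word, so !$ picks up the whole thing:

  $ touch "file with spaces"
  $ ls -l !$
  ls -l "file with spaces"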

Another thing to keep in mind when using !$ is that if the previous command had no arguments, !$ will expand to the command itself rather than an argument. This can be handy if, for example, you forget to type vi and you just type the filename. A simple vi !$ and you’re in.
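
Something like this, for example (the exact error message will vary):

  $ /etc/hosts
  bash: /etc/hosts: Permission denied
  $ vi !$
  vi /etc/hosts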

Similar to !$ is !*. !* is all of the arguments to the previous command rather than just the last one. As usual, this is useful in many situations. Here’s a simple example:

  $ vi cd /stuff #(oops!)
  [exit vi twice]
  $ !*

Which bash expands to:

  $ cd /stuff

Circumflex hats

Have you ever typed a command, hit return, and a micro-second later realized that you made a typo? Back when I still used the more pager, I was always typing:

  $ mroe filename

Luckily, the folks who wrote bash weren’t the greatest typists either. In bash, you can fix typos in the previous command with a circumflex (^) or "hat." Consider the following:

  $ vi /etc/Somefile.conf #(oops!)
  $ ^f^F

Which bash turns into:

  $ vi /etc/SomeFile.conf

What happened there? The name of the file that I was trying to edit was /etc/SomeFile.conf (note the capital "F.") I typed a lower-case "f" and vi saw my error as a request for a new file. Once I closed out of vi I was able to fix my mistake with the following formula: ^error^correction. Also notice that it only changed the first instance of "f" and not the second. If you need a global replacement, you’ll need to use a different kind of history modifier that’s discussed in the Word Modifiers section below.

Hats needn’t be used only for errors. Let’s say you have a few similar commands that can’t be handled with a wildcard; hats will work great for you. For example:

  $ dd if=kern.flp of=/dev/fd0
  $ ^kern^mfsroot

Which bash turns into:

  $ dd if=mfsroot.flp of=/dev/fd0

Aah, the good old days.

A few handy movement commands

Sometimes a mistake is noticed before the enter key is pressed. We’ve already talked about terminals that don’t translate cursor-keys properly, so how do you fix a mistake? To make matters worse, sometimes the backspace key gets mapped to ^H or even worse something like ^[[~. Now how do you fix your mistake before hitting the enter key?

Once again, bash comes through for us. Here are some of the movement keystrokes that I use most often:

  • ^w erase the word before the cursor
  • ^u erase from the cursor to the beginning of the line (I use this ALL the time.)
  • ^a move the cursor to the beginning of the line
  • ^e move the cursor to the end of the line

There are more, of course, but those are the ones you simply can’t live without. For those who don’t know, the ^N notation means ctrl+N; don’t confuse it with the hats mentioned above.

tab-tab

One of my favorite features of bash is tab-completion. Tab-completion works in a couple of ways: it can complete the names of files in the current directory, or the names of commands found in your $PATH. Like the !commands above, you just need to give bash enough of the filename to make it unique and hit the tab key; bash will do the rest for you. Let’s say you have a file in your home directory called ransom.note, and consider the following:

  $ mor[tab] ran[tab]

Will expand to

  $ more ransom.note

Let’s say you also have a file named random in your home directory. ran above is no longer enough to be unique, but you’re in luck. If you hit tab twice, bash will print the list of matching files to the screen so that you can see what you need to add to make your shortcut unique.
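
A rough sketch of what that looks like, assuming those two files are the only matches:

  $ more ran[tab][tab]
  random  ransom.note
  $ more ranso[tab]      (completes to: more ransom.note)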

Aliases

Using aliases is sort of like creating your own commands. You decide what you want to type and what happens when you type it. Aliases can live in a few different places: ~/.bashrc, ~/.bash_profile, ~/.profile, and ~/.aliases are some, but not all. In fact, you’re not really limited to keeping them all in one place. Those different files behave differently based upon what kind of shell you’re running, but that’s beyond the scope of this document. For the purposes of this discussion, we’ll settle on ~/.bash_profile (used for login shells.)

In that file, usually at the bottom, I assemble my aliases. Here’s some examples:

  • alias ud='aptitude update && aptitude dist-upgrade'
  • alias ls='ls --color=auto'
  • alias mroe='less'
  • alias H='kill -HUP'
  • alias ssh='ssh -AX'
  • alias webshare='python -c "import SimpleHTTPServer;SimpleHTTPServer.test()"'

The bottom one will probably wrap on your screen, but it’s a good example of why aliases are so useful. A whole string of commands has been reduced to something short and easy to remember.
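
If you forget what one of your aliases does, the type built-in will remind you (a quick sketch, assuming the ud alias above is defined):

  $ type ud
  ud is aliased to `aptitude update && aptitude dist-upgrade'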

Brace Expansion

Everyone has done one of the following to make a quick backup of a file:

  $ cp filename filename-old
  $ cp filename-old filename

These seem fairly straightforward, what could possibly make them more efficient? Let’s look at an example:

  $ cp filename{,-old}
  $ cp filename{-old,}
  $ cp filename{-v1,-v2}

In the first two examples, I’m doing exactly the same thing as I did in the previous set of examples, but with far less typing. The first example takes a file named filename and copies it to filename-old. The second example takes a file named filename-old and copies it to simply filename.

The third example might give us a clearer picture of what’s actually occurring in the first two. In the third example, I’m copying a file called filename-v1 to a file called filename-v2. The curly brace ({), in this context, tells bash that "brace expansion" is taking place. The preamble (in our case filename) is prepended to each of the strings in the comma-separated list found within the curly braces, creating a new word for each string. So the third example above expands to:

  $ cp filename-v1 filename-v2

Brace expansion can take place anywhere in your command string, can occur multiple times in a line and even be nested. Brace expansion expressions are evaluated left to right. Some examples:

  $ touch a{1,2,3}b
  $ touch {p2,pv,av,}p
  $ ls /usr/{,local/}{,s}bin/jojo

The first example will create three files called a1b, a2b and a3b. In this case, the preamble is prepended and the postscript is appended to each string within the curly braces. The second example contains no preamble, so the postscript is appended to each string as before, creating p2p, pvp, avp and simply p. The last string in the second example is empty, so p is appended to nothing and becomes just p. The third example shows multiple brace expansions on the same line and expands to this:

  $ ls /usr/bin/jojo /usr/sbin/jojo /usr/local/bin/jojo /usr/local/sbin/jojo
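
If you’re ever unsure what an expansion will produce, echo is a harmless way to preview it before handing it to a destructive command:

  $ echo a{1,2,3}b
  a1b a2b a3b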

The following is an example of nested brace expansion.

  $ apt-get remove --purge ppp{,config,oe{,conf}}

The shell will expand it to:

  $ apt-get remove --purge ppp pppconfig pppoe pppoeconf

The preamble "ppp" is prepended, left to right, to each item in the outer list: first to nothing ({,), then to config, and then to each result of the nested expansion, in which the inner preamble "oe" is prepended first to nothing ({,) and then to conf.

For more on brace expansion, including examples of nesting, read the bash man page.

Word Modifiers

Earlier in this tutorial, we learned about :p, which is used to print a command but not execute it. :p is an example of a "word modifier" and it has several siblings. Here’s a shortened list from the bash man page:

h
Remove a trailing file name component, leaving only the head.

t
Remove all leading file name components, leaving the tail.

r
Remove a trailing suffix of the form .xxx, leaving the basename.

e
Remove all but the trailing suffix.
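
As a rough illustration (using a made-up path), here is what each modifier leaves behind for the word /usr/local/src/jubby.tar.gz:

  /usr/local/src/jubby.tar.gz
    :h  ->  /usr/local/src
    :t  ->  jubby.tar.gz
    :r  ->  /usr/local/src/jubby.tar
    :e  ->  .gz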

Let’s say I’m reading a file nested deeply in a directory structure. When I finish with the file, I realize that there are some other operations I want to do in that directory, and that they would be more easily accomplished if I were in that directory. I can use :h to help get me there.

  $ links /usr/local/share/doc/3dm/3DM_help.html
  $ cd !$:h
  $ links !-2$:t

Our old friend !$ is back and is being modified by :h. The second command tells bash to cd to !$ or the last argument of the previous command, modifying it with :h which trims off the file name portion of the string, leaving just the directory.

The third command looks pretty crazy, but it is actually quite simple. !-2 means the command N (in this case 2) commands ago. $ means the last argument of that command, and :t means modify that argument to remove the path from it. So, all told: run links using the last argument of the command preceding the most recent one, trimming the path from that argument, or links 3DM_help.html. No big deal, right?

In our next example, we’ve downloaded a tar ball from the Internet. We check to see if it is going to create a directory for its files and find out that it will not. Rather than clutter up the current directory, we’ll make a directory for it.

  $ tar tzvf jubby.tgz
  [output]
  $ mkdir !$:r

The third command will create a directory called ‘jubby’.

Word modifiers can be stacked as well. In the next example, we’ll download a tar ball (from a hypothetical URL, shown here for illustration) to /tmp, and then create a directory for the contents of that tar file in /usr/local/src.

  $ cd /tmp
  $ wget http://www.example.com/KickassApplicationSuite.tar.gz
  $ cd /usr/local/src/
  $ mkdir !-2$:t:r:r
  [creates directory called 'KickassApplicationSuite']
  $ cd !$
  $ tar xvzf /tmp/!-4$:t

The first three commands are fairly common and use no substitution. The fourth command, however, seems like gibberish. We know !-2 means the command prior to the most recent one and that $ indicates the last argument of that command. We also know that :t will strip off the path portion of that argument (in this case, even the "http://".) Finally, :r will remove the file extension from that argument, but here we call it twice, because there are two extensions (.gz is removed by the first :r and .tar is removed by the second.) We then cd into that directory (!$, again, is the last argument of the previous command, in this case the argument to mkdir, which is 'KickassApplicationSuite'.) We then untar the file. !-4$ is the last argument to the command four commands ago, which is then modified by :t to remove the path, because we supply the path ourselves as /tmp/. So the last command becomes tar xvzf /tmp/KickassApplicationSuite.tar.gz.

There’s even a word modifier for substitution. :s can be used similarly to circumflex hats to do simple line substitution.

  $ vi /etc/X11/XF86config
  $ !!:s/config/Config-4/

We know that !! means the previous command string. :s modifies the previous command, substituting the first argument to :s with the second argument to :s. My example used / to delimit the two arguments, but any non-whitespace character can be used. It’s also important to note that, just like circumflex hat substitution, the substitution will only take place on the first instance of the string to be substituted. If you want to affect every instance of the substitution string, you must use the :g word modifier along with :s.

  $ mroe file1 ; mroe file2
  $ !!:gs/mroe/more

The second command substitutes (:s) more for all (:g) instances of mroe. Hint: :g can be used with circumflex hats too!

The final word modifier we’ll look at in this tutorial is &. In a substitution, & stands for the text that was matched, so you don’t have to retype it. Let’s say we’re examining file attributes with the ls command.

  $ ls -lh myfile otherfile anotherfile
  $ !!:s/myfile/myfile.old/

Seems simple enough. :s steps in and changes myfile to myfile.old, so we end up with ls -lh myfile.old otherfile anotherfile. & is just a shortcut that we can use to represent the first argument to :s. The following example is equivalent to the example above:

  $ ls -lh myfile otherfile anotherfile
  $ !!:s/myfile/&.old/

& is a bit of a tricky one, as it has different contexts in the shell. Remember that this use of & is as a word modifier.

Bash Functions

Earlier, we learned a bit about aliases. Aliases are simple, static substitutions. This isn’t to say that one can’t have a very advanced and complex alias, but rather to say that no matter how complex the alias, the shell is simply substituting one fixed string for another. Shell functions are like aliases, but they have the ability to contain logic and positional arguments, making them quite powerful.

What is a positional argument? I’m glad you asked. A positional argument is an argument whose position is important. For example, in the following function the directory containing the data to be copied must come first and the destination directory must come second.

  function treecp { tar cf - "${1}" | (cd "${2}" ; tar xpf -) ; };

It’s certainly possible (and easy) to write functions that can accept their arguments in any order, but in many cases, it just doesn’t make sense to do so. Imagine if cp could take its arguments in any order and you had to use switches to designate which file was which!

Let’s look at the example function above. To let bash know that you’re declaring a function, you start your function with the word function. The first argument to function is the name of the function you want to declare, in this case treecp. The next character, {, tells the shell that a list of commands follows. After the curly brace, the logic of the function is defined until the function is closed with a semi-colon followed by a closing curly brace (}.)

The logic of this function is fairly simple, once you understand the two variables that it is using. "${1}" is the first argument given to the function (or command). "${2}" is the second, and so on. These are positional arguments; their number indicates their position. You might think that "${0}" is the name of the command itself, but it’s actually the name of the current "environment". In a shell script, it will be the name of the shell script. In your interactive shell, it’ll be the name of the shell itself. If you want the name of the function you’re in, you can use ${FUNCNAME}.
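
A tiny sketch to illustrate (the function name whereami is made up, and the exact value of "${0}" depends on how your shell was started):

  $ function whereami { echo "shell: ${0}  function: ${FUNCNAME}" ; };
  $ whereami
  shell: -bash  function: whereami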

So, in order to use our treecp function, we must supply it with two arguments, the source tree and the destination tree:

  $ treecp dmr ~/public_html

dmr becomes "${1}", and ~/public_html is expanded to /home/whomever/public_html which then becomes "${2}".

What happens if the user forgets to add either or both arguments? How can the function know that it shouldn’t continue? The function, as above, doesn’t. It’ll just continue on its merry way no matter how few arguments it receives. Let’s add some logic to make sure things are as we expect them before proceeding.

Before we can do that, we need to learn about another variable that is set (like "${1}") when a command is run. The "${#}" variable is equal to the number of arguments given to a command. For example:

  $ function myfunc { echo "${#}" ; } ;
  $ myfunc foo bar taco jojo
  [output is '4']
  $ myfunc *
  [output is the same as 'ls | wc -l']
  $ myfunc
  [output is '0']

So now that we can discover how many arguments were passed to our command (in this case, a function), we can determine whether we’ve received the two arguments necessary to make our command work. There’s still a chance that these arguments are garbage, containing typos or directories that don’t exist, but unfortunately the function can’t think for you. :)

  function treecp {
    if [ "${#}" != 2 ] ; then
      echo "Usage: treecp source destination";
      return 1;
    else
      tar cf - "${1}" | (cd "${2}" ; tar xpf -) ;
    fi ;
  };

I’ve made use of the [ (aka test) application to see if the number of arguments is other than the expected two. If there are more or fewer than two arguments, the function will echo a usage statement and set the value of "${?}" to 1. "${?}" is called a return code. I’ll discuss return codes in a little bit. If there are two arguments, the command runs using the first argument as an argument to tar cf - and the second argument as an argument to cd. For more information on [, read its man page (man [.)
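
Here’s roughly what that check looks like in use (the argument is made up):

  $ treecp /only/one/arg
  Usage: treecp source destination
  $ echo "${?}"
  1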

Ok, so positional parameters are fun, but what if I don’t care about placement and I need to pass all arguments to a command within my function? "${*}" is just what you’re looking for.

  $ function n { echo "${*}" >> ~/notes; };
  $ n do the dumb things I gotta do, touch the puppet head.

No matter how many words are passed to n they’ll all end up concatenated to the end of notes in my home directory. Be careful to avoid shell-special characters when entering notes in this manner!

Above, we designated 1 as a return code for an error state. There are no rules about what number should be returned in what case, but there are some commonly used return codes that you may want to use or at least be aware of. 0 (zero) is commonly used to denote successful completion of a task. 1 (one), or any non-zero number, is commonly used to denote an error state.
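
You can see this convention with the true and false built-ins:

  $ true ; echo "${?}"
  0
  $ false ; echo "${?}"
  1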

If a function or shell script is quite complex, the author may choose to use any number of error codes to mean different things went wrong. For example, return code 28 might mean your script was unable to create a file in a certain directory, whereas return code 29 might mean that the script received an error code from wget when it tried to download a file. Return codes are more helpful to logic than to people. Don’t forget to include good error messages for the humans trying to figure out what’s going wrong.

The following is an example of checking a return code:

  function err {
    grep "${*}" /usr/include/*/errno.h;
    if [ "${?}" != 0 ] ; then
      echo "Not found."
    fi
  };

grep will return non-zero if no match was found. We then call test again (as [) to see if the return code from grep was other than zero. If ['s expression evaluates to true, in this case if a non-zero number was returned, the command after then will be run. If grep returns 0, it will output the files/lines that match the expression passed to it, ['s expression will evaluate false, and the command after then will not run.
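
In use, it looks something like this (what, if anything, matches depends on your system's header files):

  $ err EPERM
  [any matching #define lines from errno.h are printed]
  $ err NOSUCHERROR
  Not found.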

If you’re interested in learning more about the programming aspects of Bash, don’t miss Mike G’s BASH Programming – Introduction HOW-TO. Greg’s Bash Wiki is also an excellent resource.

I hope this tutorial has been useful to you. The most difficult hurdle here is not the learning curve, but simply becoming accustomed to using these built-ins. Just like learning vi, once you get good with these, you’ll be amazed you ever lived without them.

This is just the tip of the bash iceberg. If you enjoyed this, you might want to look around the Net for more bash information, or even buy a book!

Keep on bashin’!