Interesting commands for the Linux shell | Hacker News

Original source (news.ycombinator.com)
Tags: shell linux command-line shell-scripting news.ycombinator.com
Clipped on: 2017-09-16

The ones I get a lot of use out of are:

curly brace substitution:

  $ mkdir -p new_project/{img,js,css}
  mkdir -p new_project/img new_project/js new_project/css
  $ mv some_file.txt{,.old}
  mv some_file.txt some_file.txt.old
Caret substitution:

  # systemctl status mysql.service
  -- snip output --
  # ^status^restart
  systemctl restart mysql.service
Global substitution (and history shortcut):

  $ echo "We're all mad here. I'm mad. You're mad."
  We're all mad here. I'm mad. You're mad.
  $ !!:gs/mad/HN/
  We're all HN here. I'm HN. You're HN.
I have a (WIP) ebook with more such tricks on GitHub if anyone is interested: https://tenebrousedge.github.io/shell_guide/

Curly brace expansion is one of the most useful and overlooked topics in these kinds of posts. And I'm always surprised it isn't mentioned in the context of compiling.

g++ -o foo{,.cpp}


Worth noting that for the particular case you've cited, you can just use make, even without a makefile:

  ~ cat test.cpp 
  #include <iostream>
  
  using namespace std;
  
  int main()
  {
      cout << "hi\n";
      return 0;
  }
  ~ stat Makefile
  stat: cannot stat 'Makefile': No such file or directory
  ~ 1 make test
  g++     test.cpp   -o test
  ~ ./test
  hi

Well you learn something every day. That's pretty awesome! Is there an easy way to add options? Like -O2

Yeah, reading the make man page, and also http://nullprogram.com/blog/2017/08/20/
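For example (a sketch; CXXFLAGS is the variable make's built-in C++ rules pick up, CFLAGS for C, and the exact spacing in the echoed command may differ):

  $ make test CXXFLAGS='-O2 -Wall'
  g++ -O2 -Wall    test.cpp   -o test

  # or via the environment
  $ CXXFLAGS=-O2 make test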

Also useful to know: zsh supports tab-completion for curly brace expansion.

I should use history substitution a lot more. Thanks

Everyone should probably have

  $ sudo !!
ingrained in muscle memory. The other even more useful one is !$ to get the last word of the previous line. It's probably the terminal feature I use most.

Don't blindly retry with sudo. I cannot imagine what kind of environment you would use 'sudo !!' in so often that it becomes muscle memory.

That sounds too much like my colleagues' favorite way of ruining their windows systems: 'It did not work so I ran it again as administrator'.


Well, I tend to assume that I'm not the only one who continually forgets to add 'sudo' in front of 'apt-get', or similarly that journalctl usually requires elevated privileges, or that I don't own the files I'm trying to chmod. I suspect that people who do recall such things perfectly are rare. As to muscle memory, well, probably my assumptions there are off. I've been using Linux exclusively both professionally and at home for more than ten years now, and I have a bad habit of coding at odd hours of the night. Other people may not have quite the same exposure or error rate.

I certainly wouldn't advocate blindly retrying; I only use 'sudo !!' when my shell tells me to. That unfortunately is pretty often :(


!$ is not merely the last word, but the last argument - particularly handy when it's a long path. Also love the way zsh expands these with a single tab press - great for clarity or further editing.

That's technically incorrect. !$ is a shortcut for !!:$, which is the last word of the previous command in history. A filename is treated as a single word, as you say. This is however distinct from the last argument, which is accessible with $_ . As an example:

  $ echo 'example' > file.txt
Here !$ would return the filename, and $_ would return 'example', which was the last argument to echo.
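A quick illustrative session (bash echoes the history-expanded line before running it):

  $ echo 'example' > file.txt
  $ cat !$
  cat file.txt
  example
  $ echo 'example' > file.txt
  $ echo "$_"
  example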

wow ok, neat... gonna have to spend some time with your book, Thanks!

Oh, I use !! but for some reason ^old^new never stuck. Maybe I just don't like ^. And instead of !$ I use ALT + . (yank-last-arg).

maybe it will stick this time


Disappointingly, this list treats "yes" like a toy that just prints things over and over, and doesn't mention actual useful uses for "yes", like accepting all defaults without having to press enter.

Practical example: when you are doing "make oldconfig" on the kernel, and you don't care about all those questions:

yes "" | make oldconfig

Or, if you prefer answering no instead:

yes "n" | yourcommand

Also, the author refers to watch as a "supervisor" ("supervise command" - his words). That is bad terminology. Process supervision has well defined meaning in this context, and watch isn't even close to doing it.

Examples of actual supervisors are supervisord, runit, monit, s6, and of course systemd (which also does service management and, er, too much other stuff, honestly).


The kernel also recently (4.8-ish?) introduced an `olddefconfig` make target, so you don't have to mess around with yes.

"...("supervise command" - his words). That is bad terminology."

"Examples of actual supervisors are..."

1999-2001

http://cr.yp.to/daemontools/supervise.html

runit and s6 are copies of daemontools.

Otherwise I liked your comment about use of yes.


Unfortunately it doesn't work when trying to connect via ssh and the warning about an unknown host key pops up.

That is intentional behavior on the part of `ssh`. You can still disable that prompt within `ssh` (though it's not recommended):

    ssh -oStrictHostKeyChecking=no 192.168.0.100
...or by adding rules in /etc/ssh/ssh_config or ~/.ssh/config

    Host 192.168.0.*
        StrictHostKeyChecking no

Some handy tips there but I would recommend changing some of the `find` examples:

    find . -exec echo {} \;      # One file by line
You don't need to execute echo to do that as find will output by default anyway. There is also a `-print` flag if you wish to force `find` to output.

    find . -exec echo {} \+      # All in the same line
This, I think, is a dangerous example because any file with a space in its name will look like two files instead of one (a safer, NUL-delimited form is sketched a little further down).

Lastly, in both the above examples you're returning files and directories rather than just files. If you wanted to exclude directories then use the `-type f` flag to specify files:

    find . -type f ...
(equally you could specify only directories with `-type d`)
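On the spaces issue above, if you do need to feed the results to another command unambiguously, NUL-delimited output is the usual safe form (a sketch; `wc -c` is just a stand-in for whatever you actually want to run):

    find . -type f -print0 | xargs -0 wc -c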

Other `find` tips I've found useful that might be worth adding:

* You don't need to specify the source directory in GNU find (you do on FreeBSD et al) so if you're lazy then `find` will default to your working directory:

    find -type f -name "*.txt"
* You can do case insensitive named matches with `-iname` (this is also GNU specific):

    find -type f -iname "invoice*"

> * You can do case insensitive named matches with `-iname` (this is also GNU specific):
>
>     find -type f -iname "invoice*"

This was added to OpenBSD 17 years ago. Other BSDs soon followed. Solaris & IllumOS support it too.

http://cvsweb.openbsd.org/cgi-bin/cvsweb/src/usr.bin/find/op...


For some reason I recalled -iname failing on FreeBSD, but I've just logged onto some dev boxes (not that I didn't trust your post!) and it seems you're right, as the option is there in the man pages.

Apologies for this. It just goes to show how fallible the human memory is. :-/


Another option instead of 'find' in Bash is the globstar option. If set, the ** pattern in pathname expansion will match zero or more subdirectories:

  ls **/*.c
Results in something like:

  array.c                   helpers/gendec.c         msg.c
  awkgram.c                 helpers/mb_cur_max.c     node.c
  awklib/eg/lib/grcat.c     helpers/scanfmt.c        old-extension/bindarr.c
Turn on with:

  shopt -s globstar

The only forbidden characters in a Unix filename are '/' and '\0'.

Want to mess with such a script?

$ touch "$(echo -e -n 'lol\nposix')"
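To see what that does to a naive loop, and one way around it (output shown assumes the directory contains only that one file):

  $ for f in $(find . -type f); do echo "[$f]"; done
  [./lol]
  [posix]
  $ find . -type f -exec sh -c 'printf "[%s]\n" "$1"' _ {} \;
  [./lol
  posix]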


Indeed. This is one of the reasons why I wrote a shell that handles file names as JSON strings.

However, for normal day-to-day usage, file names with \n are rare while files with spaces in their names are common. So returning an array of space-delimited file names is a potentially dangerous practice in common scenarios, whereas find's default behaviour is only dangerous for weird and uncommon edge cases. (And if you think those are a likely issue then you probably shouldn't be doing your file handling inside a POSIX shell in the first place.)


One of the cruelest things you can do is a filename that consists only of a combining diacritic (without a glyph that it could combine with). Will break outputs of various programs (starting with ls) in sometimes hilarious ways.

If you're trying it out now and cannot figure out how to delete it: "ls -li" to find the file's inode number, then `find -inum $INODE_NUMBER -delete`.


Wow, that's really horrible. I have a file sitting around with a couple of newlines in the name just so I can see how many programs don't cope with it, but I hadn't thought of using a lone combining diacritic.

If anyone wants a command to make one, try

    touch $'\U035F'
(using U+035F COMBINING DOUBLE MACRON BELOW for no particular reason, see [1] for more)

[1]: https://en.wikipedia.org/wiki/Combining_Diacritical_Marks


   grep -P "\t"
Not criticising the author here, as grep -P is good, but you might also not know that you can enter tabs and other control characters in bash by pressing ctrl-v first. So you could also type:

   grep "[ctrl-v]TAB"

I've been using Linux since 1992 (as my main OS since 2009) and I did not know that. Thank you very much.

The first alternative is still preferable, since it's actually readable.

You can also do $'\t' in at least Bash (and probably Zsh).


Oh, agreed. But the ctrl-v trick is useful in general anywhere where you'd like to put a special character in an input or command.

Under the presumption that he means `bash` when he says "the Linux shell", does the Ctrl-v trick work in both emacs and vi modes?

Yes; it's a readline command called quoted-insert and is by default bound to ctrl-V in both modes.

    $ set -o vi; bind -q quoted-insert
    quoted-insert can be invoked via "\C-v".

this works in vim also.

And Ctrl-V does not work in GVim on Windows, but Ctrl-Q does instead. E.g. if you have a text file with mixed Unix and DOS/Windows line endings, like Ctrl-J (line feed == LF == ASCII 10) and Ctrl-M (carriage return == CR == ASCII 13) + Ctrl-J, you can search for the Ctrl-Ms with /Ctrl-Q[Enter] and for tabs with /Ctrl-Q[Tab]. When you type the Ctrl-Q, it does not show as Ctrl-Q, but as a caret (^). And you can also use the same quoted Ctrl-M (Enter) pattern in a search and replace operation to remove them globally, by replacing them with a null pattern:

:%s/Ctrl-Q[Enter]//g[RealEnter]

or replace each tab with 4 spaces:

:%s/Ctrl-Q[Tab]/[4 spaces]/g[RealEnter]

where [RealEnter] means an unquoted Enter.

This is mainly useful for making such changes in a single file (the one being edited) at a time. For batch operations of this kind, there are many other ways to do it, such as dos2unix (and unix2dos), which come built in on many Unixen, or tr, sed and awk, or a simple custom command-line utility, which can easily be written in C, Perl, Python, Ruby or any other language that supports command-line arguments, reading from standard input or from named files, pipes, etc.
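For the batch case, a minimal sketch with tr, assuming the files are straight CRLF text and you just want the carriage returns stripped:

  tr -d '\r' < dosfile.txt > unixfile.txt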


Doesn't matter how long you've been using a GNU/Linux shell, you'll always learn something new every day. Thanks for this.

Funnily enough, I was saying this to a work colleague yesterday when I was looking at a shell script and scratching my head wondering how it was working when a variable named $RANDOM was never assigned a value. After a little investigation it turned out Bash (and possibly other shells?) returns a new random number each time $RANDOM is referenced. Handy little trick.
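For example (values illustrative):

  $ echo $RANDOM $RANDOM $RANDOM
  21039 1285 30553
  $ echo $((RANDOM % 6 + 1))    # crude die roll; % introduces a tiny bias
  4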

There is also the built-in variable $SECONDS, which holds the number of seconds since the bash instance was started. Makes it really easy to do 'time elapsed' at the end of a bash script.

    $ echo $SECONDS
    83783
    $ echo $SECONDS
    83784
    $ echo $SECONDS
    83787

That's nifty.

    $ echo "$SECONDS / 60 / 60" | bc
    362.38500000000000000000

"bc", without the "-l" option, will give you integers only. So the response of the command above will be "362", and not "362.38500000000000000000"

echo "$(( SECONDS / 60 / 60 ))"

echo "scale=2; $SECONDS / 60 / 60" | bc

# will give 2 digits after the decimal point, instead of bc -l which gives more digits of precision after the decimal point, than we may usually want for this particular calculation.


Recently I needed results that included nothing after the decimal point, so resorted to awk to do the math:

   awk "BEGIN { print int($SECONDS /60 /60)}"

Interesting. I suppose you had to type Ctrl-D or Ctrl-Z to signal no input to awk, since it expects some in the above command? Never tried running awk without either a filename as input or stdin coming from a file or a pipe. Will check it and see what happens.

Modifying my example from above:

echo "scale=0; $SECONDS / 60 / 60" | bc

might also work, need to check.


Yeah, SECONDS is great. I just discovered this a few days ago to get a general idea of execution time for each process a script handles:

  # At top of script
  SECONDS=0

  # script commands
  ...

  # At bottom of script
  duration=$SECONDS
  echo "$(($duration / 60)) minutes and $(($duration % 60)) seconds elapsed."


I prefer to use time when I want to know how much time some script took:

  $ time sleep 2

  real    0m2.003s
  user    0m0.002s
  sys     0m0.001s


Indeed! Also if $RANDOM is too small for your needs, you can use $RANDOM$RANDOM (and it will have different values)

$RANDOM$RANDOM doesn't have uniform distribution.

Use $((RANDOM + (RANDOM << 15))) instead.
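($RANDOM is only 15 bits, 0-32767, so the shifted sum gives 30 uniformly distributed bits; the value below is illustrative.)

  $ echo $(( RANDOM + (RANDOM << 15) ))
  575306437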


Indeed, I was about to say this. I clicked thinking "No way I'll learn something here". Turns out there's 2 or 3 commands that I didn't know and might prove useful!

  > Doesn't matter how long you've been using a GNU/Linux shell
Or a Unix shell, even.

> 6. List contents of tar.gz and extract only one file

> tar -ztvf file.tgz

> tar -zxvf file.tgz filename

Tar no longer requires the modifier for the compression when extracting an archive. So no matter if it's a .tar.gz, .tar.Z, .tar.bz2, etc you can just use "tar xvf"


And the same for compression..

    tar caf foo.tar.xz foo/
The extension of the file after the `f` switch tells tar what compression to use.

c - create

a - auto (tar picks the compression from the archive name's extension)


As long as we are on the topic of Linux shell commands, I would like to share a tip which has helped me: whenever you are using wildcards with rm, you can first test them with ls to see the files that will be deleted, and then replace the ls with rm. This, along with the -i and -I flags, makes rm just a tad less scary for me. Kinda basic but hopefully somebody finds it helpful :)

As mbrock says, you can also use echo instead of ls for that. And "echo *" serves, in a pinch, as a rudimentary ls, when the ls command has been deleted and you are trying to do some recovery of a Unix system. Similarly dd can be used to cat a file if cat is deleted: dd < file

And in fact whenever you want to do some command like:

cmd some_flags_and_args_and_metacharacters ...

you can just replace cmd with echo first to see what that full command will expand to, before you actually run it, so you know you will get the result you want (or not). This works because the $foo and its many variants and other metacharacters are all expanded by the shell, not by the individual commands. However watch out for > file and >> file which will overwrite or append to the given file with the output of the echo command.
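For instance, with some hypothetical log files lying around:

  $ echo rm -- *.log
  rm -- app.log debug.log error.log
  $ rm -- *.log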


You can also use M-? to list all the files.

So typing 'rm foo* ' and pressing Alt-? (on non-OSX computers) will give you a list of what your wildcard would expand to.

M-* does the same thing but actually expands the wildcards, instead of just showing what that expansion would be.


Still works with ESC on OSX

Or echo instead of ls.

Using find to do that is a much better idea. The author shows this as the first example in #9.

I probably should be embarrassed to admit I didn't know about much of the stuff on this page about parameter expansion:

http://wiki.bash-hackers.org/syntax/pe

Bash is insane.


It is a horrible language design, which was impressed upon me after implementing it for my shell Oil. Here are some pathological cases I pointed out:

${####}: http://www.oilshell.org/blog/2016/10/28.html

Each # means a different thing.

${foo//z//}: http://www.oilshell.org/blog/2016/10/29.html

There are three different meanings of / there.

"${a[@]}" -- http://www.oilshell.org/blog/2016/11/06.html

You need 8 punctuation chars in addition to the array name to correctly interpolate it. Every other way gives you word splitting issues.
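A small bash illustration of the difference:

  $ a=("one two" "three")
  $ printf '[%s]\n' "${a[@]}"    # quoted: elements preserved
  [one two]
  [three]
  $ printf '[%s]\n' ${a[@]}      # unquoted: re-split on whitespace
  [one]
  [two]
  [three]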


mmv is the most interesting one I've found recently: https://ss64.com/bash/mmv.html

For example, I can do the following:

   mmv "flight.*" "flight-new.#1"
and this will rename all of my files that start with flight. to flight-new. preserving the file extension. So useful, when you've got a bunch of different files with the same name but with different extensions such as html, txt and sms.

Check out qmv from renameutils.[1] Using it you can use your $EDITOR to batch-rename files. It does sanity checking like making sure you don't rename two files to the same name and lets you fix such mistakes before trying again.

[1] - http://www.nongnu.org/renameutils/


I use this `rename` (available from Homebrew as `rename`): http://plasmasturm.org/code/rename/
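With that (Perl-based) rename, the first argument is a regex, so a typical call looks something like this (it also takes -n for a dry run, if I remember right):

  rename -n 's/\.jpeg$/.jpg/' *.jpeg   # preview
  rename 's/\.jpeg$/.jpg/' *.jpeg      # actually rename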

I guess I'm just overly sensitive (and maybe English is not the poster's first language), but I cringe at "the Linux shell"... I have used ksh, bash, (a little) csh, and zsh over the years, and love the architecture that makes "the shell" just another non-special binary that's not unique to the OS.

And yes, a lot of the things mentioned work in a lot of shells, but some don't, or act differently.


I would mind it a little less if it were called the GNU or the GNU/Linux default shell.

Here are some that I use in Pet:

Description: Remove executable recursively for all files

    Command: chmod -x $(find . -type f)
Description: List files with permissions number

    Command: ls -l | awk '{k=0;for(i=0;i<=8;i++)k+=((substr($1,i+2,1)~/[rwx]/) *2^(8-i));if(k)printf("%0o ",k);print}'
Description: Output primary terminal colors

    Command: for i in {0..16}; do echo -e "\e[38;05;${i}m\\\e[38;05;${i}m"; done | column -c 80 -s '  '; echo -e "\e[m"
Description: NPM list with top-level only

    Command: npm list --depth=0 2>/dev/null
Description: Show active connections

    Command: netstat -tn 2>/dev/null | grep :80 | awk '{print $5}' | sed -e 's/::ffff://' |cut -d: -f1 | sort | uniq -c | sort -rn | head

While we're here sharing random cli-fu, you might like `find`'s native ability to spawn processes. Check out the `-exec` flag:

    find . -type f -exec chmod -x {} \+

This is more correct than the previous poster's approach, because it will work with files that contain space characters, while the previous poster's version breaks in this situation.

Another approach that works is

  find . -type f -print0 | xargs -0 chmod -x
although find's built-in -exec can be easier to use than xargs for constructing some kinds of command lines.

You probably want a -r with that xargs. -r makes it exit without executing the command line if stdin is empty.

I totally forgot about the case where find gives no output.

Nice, thanks for that revision

Nice! Point 23 is a bit misleading though: comm only works on sorted input files. Also, "disown -h" in Point 21 works on bash but not on zsh. Also, in Point 22, "timeout" only works if the command does not trap SIGTERM (or you have to use -k).

> If a program eats too much memory, the swap can get filled with the rest of the memory and when you go back to normal, everything is slow. Just restart the swap partition to fix it: sudo swapoff -a; sudo swapon -a

Is this true? Not impossible, but I am surprised. If true, what is this fixing? In my naive view (never studied swap management), if at the current time a page is swapped out (and by now we have more memory -- we can kill swap completely and do fine), it should get swapped in when needed next time. As there is more memory now it should not, in general, be swapped out again.

If true we are exchanging a number of short delays later for a longer delay now, which to me hardly looks like a win.


If you run a very memory-hungry program (like some specific scientific program), it can happen that 3GB of your memory from the browser, text editor, etc. is moved to swap. Then, when you kill the program and want to go back to normal, just changing the tab in your browser can take a long time.

By flushing the swap, you wait some time first but then it all runs smoothly. When using a hard drive and not SSD the difference is even bigger.


I've hit the same kind of problem before. Big task runs & eats all the memory. Later on, a login to the machine or existing shells are horribly slow until they get swapped back in.

However, swapoff/swapon only solves part of the problem - you still have binaries, libraries and other file-backed memory that were thrown out under the memory pressure and they won't be reloaded with the swapoff/swapon. Does anyone know how to force these kinds of things to be re-loaded before they are needed again?


It is useful in the case that you want a predictable and expected delay immediately, rather than unpredictable delays for an unknown length of time to come.

I get that part. I was wondering if by doing it all at once you somehow gain significantly over doing it normally. I doubt it, but I'm prepared to learn otherwise. If it is much quicker to swap everything in, maybe it is worthwhile to expose this functionality directly instead of doing hacks like swapoff/swapon.

It depends on when you next need to use the system. If it is immediately after the memory hogging code exits, there's probably not much of a win.

But if you run the memory hogging program, then go to lunch, if the swapoff/swapon is triggered before you get back, you will be avoiding the delays entirely.


In my experience, it makes a big difference. My swap is on a hard drive, which can handle the sequential reading of the swap quite quickly. Whereas the random accessing of it in normal use is much slower per byte.

I haven't used a swap in the past 10 years and have not noticed any problems. Is it relevant anymore?

If you have a memory-constrained system, yes. There's a surprising amount of stuff that sits untouched in RAM for very long periods of time. I generally run fairly low-end systems, so it's not unusual for me to have a few GB in swap. (Though I just found out last week that I could buy another 4GB for $26 and free shipping, and I have to admit that was a worthwhile investment.)

Yes, if you have 8GB of RAM which is enough for me 99% of the time, but you'd rather your computer slow down 1% of the time rather than crash and have to start over.

Yes. You need swap to hibernate a laptop.

But is hibernate relevant? I've found suspend to be much more reliable, and good enough in terms of energy consumption (thanks to modern low-power states in CPUs).

Duh, yes, that is needed. I don't have hibernate enabled on my laptop though, nor more importantly, my server.

Never knew about readlink for files, but I use

  pwd -P 
all the time to get the real full path (no symlinks) of the current directory. Really easy to remember as well, Print Working Directory.

That is not the same use case.

You could use this to get the current location of the script being run (not the location where it runs).

   readlink -f $0

nice collection! upvoted for "15. `namei -l`" alone, which is far too little known.

a better "20. Randomize lines in file":

  shuf file.txt
instead of

  cat file.txt | sort -R
(sort -R sorts by hash, which is not really randomisation.)

> (sort -R sorts by hash, which is not really randomisation.)

I looked at the source code for GNU sort and what they're doing is reading 16 bytes from the system CSPRNG and then initializing an MD5 digest object with those 16 bytes of input. Then the input lines are sorted according to the hash of each line with the 16 bytes prepended.

Although they should no longer use MD5 for this, I don't think we know anything about the structure of MD5 that would even allow an adversary to have any advantage above chance in distinguishing between output created this way and an output created via a different randomization method. (Edit: or distinguishing between the distribution of output created this way and the distribution of output created via another method!)

The output of sort -R is different on each run and ordinarily covers the whole range of possible permutations.

  $ for i in $(seq 10000); do seq 6 | sort -R | sha256sum; done | sort -u | wc -l
  720

> namei -l /path/to/file

This is fantastic, a game changer for me.. I often give people a command like ls -lad /path /path/to /path/to/file

Thanks!


And prefer redirection to cat.

    whatever < file.txt 
not

    cat file.txt | whatever

Also, there is no need for - or z on GNU tar in t or x modes:

    tar tf whatever.tgz
    tar tf whatever.tar.bz2
    tar tf whatever.txz
    ...

    tar xf whatever.tar.gz
    tar xf whatever.tar.Z
    ...
all work just fine

I know whatever < file.txt is slightly more efficient, but there is value in keeping things going left to right with pipes in between. It makes it easy to insert a grep or sort, or swap out a part.

    <file.txt grep foo | sort

Ok, if that actually works, that's amazing. Wow.

It works because of the general principle that I mentioned here:

https://news.ycombinator.com/item?id=15249370

The shell (not the command) is the one expanding those metacharacters, so (within limits), in:

cmd < file

or

< file cmd

where you put that piece (< file) on the line does not matter, because the actual command cmd never sees the redirection at all - it is done [1] by the time the command starts. The cmd command will only see the command line arguments (other than redirections) and options (arguments that start with a dash). Even expansion of $ and other sigils (unless quoted) happens before the command starts.

[1] I.e. the shell and kernel set up the standard input of the command to be connected to the file opened for reading (in the case of < file).


Oh cool. Didn't know this was possible.

I prefer cat because > vs < is an easy typo to make but one clobbers the file. Happy to pay the performance penalty as insurance against that.

You can use the bash setting called noclobber to prevent such accidental deletion.

https://en.wikipedia.org/wiki/Clobbering
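A minimal sketch in bash (assuming existing.txt already exists; >| forces the overwrite when you really mean it):

    $ set -o noclobber
    $ echo hi > existing.txt
    bash: existing.txt: cannot overwrite existing file
    $ echo hi >| existing.txt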


Nice tip! I updated the article.

Caching problems? :P

Not really a single command, it's a one-liner - a pipeline, but might be interesting, not only for the one-liner but for the comments about Unix processes that ensued on my blog:

UNIX one-liner to kill a hanging Firefox process:

https://jugad2.blogspot.in/2008/09/unix-one-liner-to-kill-ha...


I really like the "ag" command. Very convenient to grep in a bunch of files filtered by type. Example:

    ag --ocaml to_string
Very fast and simple syntax.

Now try rg (ripgrep). It has a much more robust file ignore handling than ag in my experience.

When I last tried, ag couldn't handle this in an .ignore file:

    !foo.1
    foo.*
That would ignore all foo.* files except foo.1 in rg.

Also rg is a bit faster than ag for my use cases.


Is it possible to register new file formats in ripgrep though (say for example slim files so that I could do rg -tslim PATTERN)? Last time I checked it was somehow possible but way too complicated for so common a task.

From its README:

    rg --type-add 'foo:*.{foo,foobar}' -tfoo bar
---

You simply make the below into an alias:

    rg --type-add 'foo:*.{foo,foobar}'
Or, if the type you need is generic enough that other users could also use it, submit a PR. The developer was kind enough to accept mine: https://github.com/BurntSushi/ripgrep/pull/107/files .. it's a trivial one-line PR.



Not a command as such, but I recommend skimming the readline man page[0] and trying some of the bindings out to see if you're missing out on anything. I went years without knowing about ctrl+R (search command history).

[0] http://man7.org/linux/man-pages/man3/readline.3.html#EDITING...


Throw this in your ~/bin as a script named math:

  #!/bin/sh
  scale=4 # results will print to the 4th decimal
  echo "scale=$scale; $@" | bc -l
Now you can do math.

  $ math '1+1'
  2
  $ math '2/3'
  .6666
This is especially useful in shell scripts with interpolated variables:

  x=10
  x=`math $x - 1`

Alternatively, you could use an alias:

    # .bashrc
    alias bc='bc --mathlib'
and a .bcrc file:

    # .bcrc
    scale = 4
Actually, this is what my .bcrc looks like:

    scale = 39

    k_c = 299792458                   # Speed of Light
    k_g = 6.67384 * 10^-11            # Gravitation
    k_atm = 100325                    # Atmospheric pressure
    k_h = 6.62606957 * 10^-34         # Planck's constant
    k_hbar = 1.054571726 * 10^-34     # H Bar
    k_mu = 1.256637061 * 10^-6        # Vacuum permeability
    k_ep = 8.854187817 * 10^-12       # Vacuum permittivity
    k_epsilon = 8.854187817 * 10^-12  # Vacuum permittivity
    k_e = 1.602176565 * 10^-19        # Elementary charge
    k_coulomb = 8.987551787 * 10^9    # Coulomb's constant
    k_me = 9.10938294 * 10^-31        # Rest mass of an electron
    k_mp = 1.672621777 * 10^-27       # Rest mass of a proton
    k_n = 6.02214129 * 10^23          # Avogadro's number
    k_b = 1.3806488 * 10^-23          # Boltzmann's constant
    k_r = 8.3144621                   # Ideal gas constant
    k_si = 5.670373 * 10^-8           # Stefan-Boltzmann constant
    k_sigma = 5.670373 * 10^-8        # Stefan-Boltzmann constant
    k_mt = 5.97219 * 10^24            # Mass of Earth (Tierra)
    k_rt = 6.371 * 10^6               # Mean radius of Earth (Tierra)

    pi = 3.1415926535897932384626433832795028841968

    # requires --mathlib
    define t(x) { return s(x)/c(x); }
    define as(x) { return 2*a(x/(1+sqrt(1-x^2))); }
    define ac(x) { return 2*a(sqrt(1-x^2)/(1+x)); }
    define at(x) { return a(x); }
    define csc(x) { return 1/s(x); }
    define sec(x) { return 1/c(x); }
    define cot(x) { return c(x)/s(x); }

It's not nearly as flexible as bc overall, but GNU units has lots more constants built-in and also trig functions.

I checked and it has all of the ones that you mentioned, sometimes under slightly different names. I was surprised that e is defined as the elementary charge rather than Euler's constant!


units(1) is my go-to calculator for everything. Really nice tool. I recommend

  alias units="units --verbose"
because the verbose output is much less ambiguous.
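For example (output roughly like this; units may also print the reciprocal form):

  $ units --verbose "2 hours" "minutes"
          2 hours = 120 minutes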

Nice, thanks for the suggestion.

That might be useful if you want floating point math, but most shell scripts can just use the integer math in shell:

    $ x=10
    $ echo $((x - 1))
    9
Though if I need to do a floating point calculation at the shell, I start python or R, which both have their own interactive shells (with the same readline interface, which I like).

On CentOS 7, `man rename` gives the synopsis: rename [options] expression replacement file...

The rename example fails with "rename: not enough arguments".


Unfortunately, there are two incompatible rename programs in the wild. :(

Debian ships this: https://metacpan.org/pod/distribution/File-Rename/rename.PL

Most(?) other distros ship the one from util-linux: http://man7.org/linux/man-pages/man1/rename.1.html


Don't read it like this: rename <expression> <replacement file> ...

Read it like this: rename <expression> <replacement> <file(s)...>

Argument 1 is a string (not a regex), argument 2 is the string to replace it with, and arguments 3 onwards are a file, a list of files, or a glob.
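So a minimal example with this (util-linux) rename, using hypothetical files:

  rename draft final report_draft.txt notes_draft.txt
  # -> report_final.txt, notes_final.txt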

The man page includes a warning about the lack of safeguards. It is unfortunate that rename doesn't have an equivalent of mv or cp's -i flag because it's so easy to overwrite files if your expressions aren't exactly correct.

On some Ubuntu systems, I've seen another version of rename that's actually a Perl script that uses regular expressions. I think that version lets you set some safeguards, which are sorely lacking in the binary version of rename that ships with CentOS.


It's available in Fedora under the name "prename". I've asked the maintainer to make it available for EPEL 7; would that work for you?

ctrl-r for fuzzy searching your history is all you really need to know.

Thanks, every once in a while these lists are useful!

You'll definitely want to check `free -m` before calling `swapoff` to make sure you really have enough memory for everything in swap, unless you want to invoke the wrath of the OOM killer.

.... or just use a modern os where all this stuff is intuitive and easy :P


