Collect exit codes of parallel background processes (sub shells) - Unix & Linux Stack Exchange

Say we have a bash script like so:
echo "x" &
echo "y" &
echo "z" &
.....
echo "Z" &
wait
Is there a way to collect the exit codes of the subshells / subprocesses? I'm looking for a way to do this and can't find anything. I need to run these subshells in parallel, otherwise yes this would be easier.
I am looking for a generic solution (I have an unknown/dynamic number of sub processes to run in parallel).
The answer by Alexander Mills which uses handleJobs gave me a great starting point, but also gave me this error:

warning: run_pending_traps: bad value in trap_list[17]: 0x461010

which may be a bash race-condition problem.

Instead I just store the PID of each child and wait on each child specifically to get its exit code. I find this cleaner when subprocesses spawn further subprocesses in functions, because it avoids the risk of waiting on a parent process where I meant to wait for a child. It is also clearer what happens, because it doesn't use the trap.
#!/usr/bin/env bash
# Note: it seems this does not work well when using echo for the function return value
# and calling it inside $() (is a subprocess spawned?)
function wait_and_get_exit_codes() {
    children=("$@")
    EXIT_CODE=0
    for job in "${children[@]}"; do
        echo "PID => ${job}"
        CODE=0
        wait ${job} || CODE=$?
        if [[ "${CODE}" != "0" ]]; then
            echo "At least one test failed with exit code => ${CODE}"
            EXIT_CODE=1
        fi
    done
}

DIRN=$(dirname "$0")

commands=(
    "{ echo 'a'; exit 1; }"
    "{ echo 'b'; exit 0; }"
    "{ echo 'c'; exit 2; }"
)

clen=`expr "${#commands[@]}" - 1` # get length of commands - 1
children_pids=()
for i in `seq 0 "$clen"`; do
    (echo "${commands[$i]}" | bash) &   # run the command via bash in a subshell
    children_pids+=("$!")
    echo "command $i has been issued as a background job"
done
# wait  # it is still valid to wait for all jobs to finish before processing any exit codes, if we wanted to
# EXIT_CODE=0  # exit code of the overall script
wait_and_get_exit_codes "${children_pids[@]}"
echo "EXIT_CODE => $EXIT_CODE"
exit "$EXIT_CODE"
# end
- cool, I think for job in "${children[@]}"; do should be for job in "${1}"; do though, for clarity – Alexander Mills Apr 11 '18 at 12:31
- the only concern I have with this script is whether children_pids+=("$!") is actually capturing the desired pid for the subshell. – Alexander Mills Apr 11 '18 at 12:40
- I tested with "${1}" and it doesn't work. I'm passing an array to the function, and apparently that needs special attention in bash. $! is the PID of the last spawned job, see tldp.org/LDP/abs/html/internalvariables.html. It seems to work correctly in my tests, and I'm now using it in the unRAID cache_dirs script, where it seems to do its job. I'm using bash 4.4.12. – arberg Apr 12 '18 at 15:51
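As a quick sanity check of the $! behaviour discussed in the comments above, here is a minimal sketch (my own illustration, not code from the answer) showing that $! holds the PID of the most recently backgrounded subshell:

( sleep 2; exit 5 ) &
pid=$!
echo "captured PID: $pid"
ps -o pid,comm -p "$pid"   # should list the backgrounded process while it is still running
wait "$pid"
echo "subshell exited with $?"   # prints 5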
-
Use wait with a PID, which will:

    Wait until the child process specified by each process ID pid or job specification jobspec exits and return the exit status of the last command waited for.
You'll need to save the PID of each process as you go:
echo "x" & X=$!
echo "y" & Y=$!
echo "z" & Z=$!
You can also enable job control in the script with set -m and use a %n jobspec, but you almost certainly don't want to - job control has a lot of other side effects.

wait will return the same code as the process finished with. You can use wait $X at any (reasonable) later point to access the final code as $?, or simply use it as true/false:
echo "x" & X=$!
echo "y" & Y=$!
...
wait $X
echo "job X returned $?"
wait will pause until the command completes if it hasn't already.

If you want to avoid stalling like that, you can set a trap on SIGCHLD, count the number of terminations, and handle all the waits at once when they've all finished. You can probably get away with using wait alone almost all the time.
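The SIGCHLD idea is only described in prose above, so here is a rough sketch of it (my own illustration, not code from this answer). Instead of counting raw signal deliveries, the handler recounts the tracked children, which keeps it harmless if the trap also fires for other children; note that the top answer on this page reports that trap-based handling can hit race conditions.

#!/usr/bin/env bash
set -m   # enable job control so the shell delivers a CHLD notification per exiting child

pids=()
remaining=0

on_child_exit() {
    # Recount which tracked children are still alive; spurious trap
    # invocations (e.g. for the sleep in the poll loop) are then harmless.
    local pid alive=0
    for pid in "${pids[@]}"; do
        kill -0 "$pid" 2>/dev/null && alive=$((alive + 1))
    done
    remaining=$alive
}
trap on_child_exit CHLD

( sleep 1; exit 0 ) & pids+=("$!")
( sleep 2; exit 7 ) & pids+=("$!")
remaining=${#pids[@]}

# The script is free to do other work here instead of blocking in wait.

while (( remaining > 0 )); do
    sleep 0.2   # cheap poll; the trap updates $remaining as children exit
done

# Every child has exited, so none of these waits will stall.
for pid in "${pids[@]}"; do
    wait "$pid"
    echo "PID $pid exited with $?"
done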
If you had a good way to identify the commands, you could print their exit code to a tmp file and then access the specific file you're interested in:
#!/bin/bash
for i in `seq 1 5`; do
    ( sleep $i ; echo $? > /tmp/cmd__${i} ) &
done

wait

for i in `seq 1 5`; do # or even /tmp/cmd__*
    echo "process $i:"
    cat /tmp/cmd__${i}
done
Don't forget to remove the tmp files.
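For example, a one-liner matching the naming scheme above (assuming nothing else in /tmp uses the cmd__ prefix):

rm -f /tmp/cmd__*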
Use a compound command - put the statement in parentheses:
( echo "x" ; echo X: $? ) &
( true ; echo TRUE: $? ) &
( false ; echo FALSE: $? ) &
will give the output
x
X: 0
TRUE: 0
FALSE: 1
A really different way to run several commands in parallel is by using GNU Parallel. Make a list of commands to run and put them in the file list:
cat > list
sleep 2 ; exit 7
sleep 3 ; exit 55
^D
Run all the commands in parallel and collect the exit codes in the file job.log:
cat list | parallel -j0 --joblog job.log
cat job.log
and the output is:
Seq Host Starttime JobRuntime Send Receive Exitval Signal Command
1 : 1486892487.325 1.976 0 0 7 0 sleep 2 ; exit 7
2 : 1486892487.326 3.003 0 0 55 0 sleep 3 ; exit 55
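If you want those codes back in the shell, the Exitval column of the joblog can be extracted with standard tools. A small sketch, assuming the tab-separated layout shown above (Exitval in column 7, Command in column 9):

# Print "command => exit code" per job; exit non-zero if any job failed
awk -F'\t' 'NR > 1 { print $9 " => " $7; if ($7 != 0) bad = 1 } END { exit bad }' job.log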
This is the generic script you're looking for. The only downside is that your commands are in quotes, which means syntax highlighting via your IDE will not really work. Otherwise, I have tried a couple of the other answers and this is the best one. This answer incorporates the idea of using wait <pid> given by @Michael, but goes a step further by using the trap command, which seems to work best.
#!/usr/bin/env bash

set -m # allow for job control
EXIT_CODE=0 # exit code of overall script

function handleJobs() {
    for job in `jobs -p`; do
        echo "PID => ${job}"
        CODE=0
        wait ${job} || CODE=$?
        if [[ "${CODE}" != "0" ]]; then
            echo "At least one test failed with exit code => ${CODE}"
            EXIT_CODE=1
        fi
    done
}

trap 'handleJobs' CHLD # the trap command is the key part

DIRN=$(dirname "$0")

commands=(
    "{ echo 'a'; exit 1; }"
    "{ echo 'b'; exit 0; }"
    "{ echo 'c'; exit 2; }"
)

clen=`expr "${#commands[@]}" - 1` # get length of commands - 1
for i in `seq 0 "$clen"`; do
    (echo "${commands[$i]}" | bash) &   # run the command via bash in a subshell
    echo "command $i has been issued as a background job"
done

wait # wait for all subshells to finish

echo "EXIT_CODE => $EXIT_CODE"
exit "$EXIT_CODE"
# end
Thanks to @michael homer for getting me on the right track, but using the trap command is the best approach AFAICT.
Another variation of @rolf's answer:
Another way to save the exit status would be something like
mkdir /tmp/status_dir
and then have each script
script_name="${0##*/}" ## strip path from script name
tmpfile="/tmp/status_dir/${script_name}.$$"
# ... do something ...
rc=$?
echo "$rc" > "$tmpfile"
This gives you a unique name for each status file, including the name of the script which created it and its process ID (in case more than one instance of the same script is running), which you can save for reference later. It also puts them all in the same place, so you can just delete the whole subdirectory when you're done.
You can even save more than one status from each script by doing something like
tmpfile="$(/bin/mktemp -q "/tmp/status_dir/${script_name}.$$.XXXXXX")"
which creates the file as before, but adds a unique random string to it.
Or, you can just append more status information to the same file.
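To collect the results afterwards, here is a small sketch (my own addition, assuming the /tmp/status_dir layout above) that walks the directory and reports each saved status:

overall=0
for f in /tmp/status_dir/*; do
    rc=$(cat "$f")
    echo "${f##*/} finished with status $rc"
    [ "$rc" -ne 0 ] && overall=1
done
rm -rf /tmp/status_dir   # clean up once everything has been read
exit "$overall"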
script3 will be executed only if script1 and script2 are successful, and script1 and script2 will be executed in parallel:
./script1 &
process1=$!
./script2 &
process2=$!
wait $process1
rc1=$?
wait $process2
rc2=$?
if [[ $rc1 -eq 0 ]] && [[ $rc2 -eq 0 ]]; then
    ./script3
fi