Scripting cheatsheet: https://devhints.io/bash
### looping through lines (from a textfile, say)
# read splits a line into columns on whitespace.
# read -a turns the line into an array. you can add -r to not interpret backslashed characters differently (safer)
while read -a line; do echo ${line[0]}${line[1]}; done < file.txt
# alternatively..
cat file.txt | while read -a line; do echo ${line[0]}${line[1]}; done
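# a tiny self-contained demo of the above (made-up two-line input fed in via a here-string):
while read -r -a line; do echo "${line[0]}-${line[1]}"; done <<< $'foo 1\nbar 2'
# prints "foo-1" and "bar-2"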
# use "read foo bar bazz" to read the first 3 words (columns) into those variables.
# the last variable always gets Everything until EOL, dump that in a '_' variable to ignore
while read -r foo bar _; do echo "${foo} -> ${bar}"; done < file.txt # only interested in the first two columns
# NB: if only the first line gets processed, it is possible that a script inside the do...done loop is accepting STDIN
# and gobbling up all the other lines. In this case, redirect its STDIN to /dev/null:
while read foo bar _; do hungryscript.sh -flags $foo $bar < /dev/null; done < file.txt
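# a concrete example of such a gobbler (hosts.txt is a made-up file with one hostname per line):
# ssh reads STDIN by default, so without the redirect it would eat the remaining hostnames.
while read -r host _; do ssh "$host" uptime < /dev/null; done < hosts.txt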
## Variation: reading from a process
while read foo _; do stuff.sh; done < <(grep acc file.txt)
<(..) creates a pseudofile to read from
< is still needed to redirect the output of the pseudofile into the loop's STDIN.
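# another everyday use of the pseudofile trick (file names are placeholders): compare two command outputs without temp files
diff <(sort file1.txt) <(sort file2.txt)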
### Is my script being piped?
-t fd # True if file descriptor fd is open and refers to a terminal.
# fd can be one of the usual file descriptor assignments: 0 stdin, 1 stdout, 2 stderr
$ if [[ -t 1 ]] ; then echo "output is a terminal"; else echo "output is Not a terminal"; fi
# also works for detecting whether input came from the user, or another script/process (fd=0)
# here, stdin is not a tty, therefore the input has been piped or redirected.
$ if [[ ! -t 0 ]]; then ..
# you can grab the piped input with input="$(cat)". the quotes are technically optional here, since word splitting
# doesn't apply to the right-hand side of an assignment, but quoting is still a good habit.
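# putting both checks together, a minimal sketch (nothing here is specific to any particular script):
if [[ ! -t 0 ]]; then input="$(cat)"; echo "got ${#input} bytes of piped/redirected input"; fi
if [[ -t 1 ]]; then echo "writing to a terminal"; else echo "output is piped or redirected"; fi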
### Is my script running from crontab?
`tty -s` has a non-zero exit status $? when the script is running from cron (tty prints the name of the terminal
connected to STDIN, which won't exist under cron; -s suppresses the output so only the exit status remains)
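# a minimal sketch of that check:
if tty -s; then echo "attached to a terminal"; else echo "no tty: probably cron (or otherwise detached)"; fi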
# shortest way to increment a variable (note no '$')
((i++))
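# equivalent longer spellings, for reference:
((i+=1))
i=$((i+1))
# gotcha: under 'set -e', ((i++)) exits the script when i was 0 (the expression evaluates to 0, which counts as failure); i=$((i+1)) doesn't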
# what's the deal with export?
export foo[=bar]
Basically it turns a regular variable into an environment variable, which means it gets copied to nested shells and
scripts. Some variables (like $TERM) are environment variables already and don't require an export when changing them.
You can see which variables are environment variables by checking for them in env, or easier, printenv <var>
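# quick way to see the difference (bash -c starts a child shell; foo/bar are throwaway names):
foo=bar;    bash -c 'echo "child sees: ${foo:-nothing}"'   # -> child sees: nothing
export foo; bash -c 'echo "child sees: ${foo:-nothing}"'   # -> child sees: bar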
# redirecting output.
$ foo >/dev/null 2>&1
# > : redirects STDOUT (the default), which would usually point to the terminal (or parent process), to /dev/null
# 2>&1 : 2, aka STDERR, goes to where 1 (STDOUT) _now_ points, so also to /dev/null. & to indicate "not a file named 1"
# it's more useful to read 2>&1 as "errors go to where stdout is currently going" than as "errors go to stdout"
$ foo 2>&1 >/dev/null
# in a different order: STDERR now goes to what STDOUT points at: the terminal or parent process
# Then, STDOUT is pointed at /dev/null, but this no longer affects error messages.
# ..... SOMETHING like that, anyway o.o;
# In bash (not plain POSIX sh) there's a shortcut for redirecting both STDOUT and STDERR:
$ foo &> /dev/null
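# to see the ordering matter, a throwaway function (name made up) that writes to both streams:
noisy() { echo "to stdout"; echo "to stderr" >&2; }
noisy > /dev/null 2>&1     # silence: nothing reaches the terminal
noisy 2>&1 > /dev/null     # "to stderr" still shows up on the terminal
noisy 2>&1 | grep -c to    # both streams go into the pipe, so this prints 2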
# regex in `if` uses ERE, unrelated to the general shell glob patterns in eg `case`
if [[ "string" =~ regex ]]; then..
# do not quote the regex. if it's complicated, store it in a variable first and use that (unquoted).
# see http://mywiki.wooledge.org/BashGuide/Patterns for the ugl^H^Hseful details.
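# worked example, regex kept in a variable as advised above; captures land in BASH_REMATCH:
re='^([^@]+)@(.+)$'
if [[ "alice@example.com" =~ $re ]]; then echo "user=${BASH_REMATCH[1]} host=${BASH_REMATCH[2]}"; fi
# -> user=alice host=example.com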
# Deleting a million files, in order of fast to slow (as per 2023-10, on CentOS)
$ find ./filesToRemove -type f -delete
$ rm -rf ./filesToRemove # never do rm -rf /dir/*: the * expands to every filename and the command line will either be too long or take too long
$ rsync -a --delete emptyDir/ filesToRemove/
# you can time any command by prepending it with 'time', like `$ time find ..`; the first entry (real) is wall clock time,
# the other two (user/sys) added together can be longer or shorter than that, depending on what else was running and the number
# of cores.
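# what that looks like (numbers made up, purely to label the fields):
$ time find ./filesToRemove -type f -delete
# real    0m12.3s   <- wall clock
# user    0m1.2s    <- CPU time spent in user space
# sys     0m4.5s    <- CPU time spent in the kernel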
# Creating a million files (look, people need hobbies)
# breaks because too many args:
$ touch {1..500000}
# is silly slow because touch is called for each file
$ for (( i=0; i<500000; i++ )); do touch $i; done
# this is parallelized and fast! \o/
$ seq 1 500000 | xargs -n 100 -P 4 touch
# -n is the number of args per call, -P the number of simultaneous processes: 100 files per touch call, 4 calls running at once, instead of one call per file