I just learned about the --forest switch for ps. If you use it, you'll get a tree representing all of the parent and child processes running rather than just a straight up list.
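For anyone who hasn't tried it, a quick sketch (this is the procps ps on Linux; the PIDs and processes below are invented for illustration):

```shell
# --forest draws the parent/child hierarchy as ASCII art.
# Selecting columns with -o keeps the output readable:
ps --forest -o pid,ppid,cmd
# A typical fragment might look like:
#  1234  1201 sshd: user@pts/0
#  1235  1234  \_ -bash
#  1301  1235      \_ ps --forest -o pid,ppid,cmd
```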
Thanks! I've been using a tool called pstree for the same thing (I didn't even realize ps had such a feature). It looks like the ps utility on OS X doesn't support --forest though, so pstree is still useful there.
Metaphors are sometimes confusing. Call it what it is. "Reflection" should have been called "type information", for example, and "--forest" should have been called "--nested".
A process tree is known to almost everyone who uses Unix. A forest is, in my opinion, a good name for a collection of trees.
Reflection is the ability of something to introspect. The term "self-reflection" dates back to 1652[1], so why change it? "Type information" is inaccurate because reflection can mean more than just getting information about types, like interfacing with the runtime or getting information about the language implementation.
I don't think concepts should be misnamed at the expense of accuracy.
The term "self-reflection" dates back to 1652, so why change it?
I'm saying it shouldn't have been used in the first place. It's an awkward, confusing metaphor. Programs aren't "reflecting", because "self-reflection" is something that only a sentient creature can perform. Programs currently have no concept of "self". I'd say "reflection" is less accurate than "type information", which covers at least 95% of use cases, if not more.
I don't think concepts should be misnamed at the expense of accuracy.
Neither do I. Also, all else being equal, fewer metaphors are better.
the command 'tee': lets you see what is happening at an intermediate stage of a pipeline (I originally thought your blog post would be about tee)
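A small demonstration of the idea (the filename is arbitrary):

```shell
# tee copies its stdin both to stdout and to the named file, so you can
# capture an intermediate stage of a pipeline without breaking it:
seq 1 5 | tee stage1.txt | awk '{ sum += $1 } END { print sum }'
# prints 15, and stage1.txt now holds the numbers 1..5 that awk consumed
```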
the -S flag to objdump: If you're trying to learn assembly or how compilers work, the '-S' flag to objdump is absolutely beautiful. If you compile a binary with debugging symbols (-g in gcc), 'objdump -S binary' will intersperse the assembly with the original code, letting you see what each line compiled into.
I use the 'pushd' and 'popd' bash builtins. They let you remember directories and keep a stack of the locations you've been. Very valuable when you're jumping around a large source tree.
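A minimal session showing the stack in action (directories chosen arbitrarily):

```shell
# pushd changes directory and pushes the old one onto a stack;
# popd pops the stack and takes you back there.
cd /tmp
pushd /usr         # now in /usr; stack: /usr /tmp
pushd /etc         # now in /etc; stack: /etc /usr /tmp
dirs               # print the current directory stack
popd               # back to /usr
popd               # back to /tmp
```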
Though not a command itself, I usually follow ls with "-ltr". That sorts the listing so the most recently modified file is at the bottom. Very easy to see what files were just modified.
"du -sk * | sort -rn" for a nice listing of directory sizes.
Pressing "Esc, ." (that's the Esc key followed by a period, not together) in bash inserts the previous command's last argument.
i.e.:
bash# ls /some/really/long/path/to/a/huge/file.c
bash# vi [esc, .] # <-- that would insert the path
bash# vi $_ # <-- same effect
Looks like the Esc-. trick works in zsh as well. I didn't know about it. Cool. Thank you.
With regard to pushd and popd: these tricks work in zsh also. In addition, (again, zsh-specific) you can set these options:
setopt auto_pushd # automatic directory stack
setopt pushd_minus # swap +/- for pushd and popd
setopt pushd_silent # no output after pushd and popd
setopt pushd_to_home # pushd with no arguments goes to ~
setopt pushd_ignore_dups # no duplicates in directory stack
With these options set, every time you change directories using cd, the directory goes into your directory stack, basically executing an implicit pushd. Then, move around the directory structure a little, and try typing 'cd -<Tab>' (meaning type 'cd -' and press Tab for autocompletion). It'll list entries in your directory stack by number (so -1 is your previous directory, -2 is the one before, and so on). It's a handy wrapper for bare pushd/popd functionality.
Bash might offer similar functionality, but I don't know much about bash.
setopt pushd_to_home # pushd with no arguments goes to ~
I would personally dislike that: without that option set, calling pushd with no argument (at least in Zsh) rotates the top two elements of the directory stack, which is extremely handy for switching between two places without constantly calling `pushd +1` or `pushd -1`.
e.g.:
$ pu ~
/home/user
$ pu workspace/project
/home/user/workspace/project /home/user
$ pu /tmp/data
/tmp/data /home/user/workspace/project /home/user
$ pu
/home/user/workspace/project /tmp/data /home/user
$ po
/tmp/data /home/user
$ po
/home/user
note that I alias pu="pushd" and po="popd" for easier shell usage...
I doubt it. One of the big features of zsh over bash is that its tab completion is fantastic (arguments, URLs, etc.). With a good .zsh and .zshrc, it is an amazing experience.
My only complaint with Zsh's tab completion, which is usually far superior to Bash's, is that package-name completion is much slower in Zsh, at least on the Debian and Ubuntu systems that I use. Typing `aptitude install build-<Tab>` in Bash almost immediately completes "build-essential", but Zsh can take anywhere from 3-15 seconds to search through the package cache before completing the same package name.
Other than that, I absolutely love Zsh's completion system and the awesome capabilities it provides, like context-sensitive Git argument completion, such as `git add <Tab>` listing only the files that have either changed since the last commit or are not yet tracked by Git. Insanely useful!
mmv (moves multiple files based on a simple pattern replacement language). Example:
mmv "*.mp3" "old_#1.mp3"
prefixes all files in the current directory ending with .mp3 with 'old_'. Btw, the quotes around the arguments are necessary because mmv uses some of the same metacharacters as the shell does.
qmv, qcp - Rename or copy files quickly, editing the file names in a text editor
Found nothing better for bulk renames, because you get to use all the pattern/replace power of your favorite editor instead of learning some weird specialized grammar through trial & error.
This is how to remove all spaces from mp3 filenames using vim:
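The snippet itself doesn't seem to have survived in this copy. A plain-shell equivalent (no vim needed, using bash's pattern substitution) would be:

```shell
# Rename every .mp3 in the current directory, stripping spaces from the name.
for f in *.mp3; do
    new=${f// /}                 # bash expansion: delete all spaces
    [ "$f" = "$new" ] || mv -- "$f" "$new"   # skip files already space-free
done
```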
A lot of what I find lacking in a standard terminal, Emacs makes up for with its shell and terminal emulators. I find that an editable buffer with the contents of my entire session to be eminently powerful when combined with keyboard macros and the search/replace commands.
There's a 'rename' script that comes with Perl that also does this, which may be useful if you want some really fancy renames. E.g. rename '$_=lc' *.MP3 to change case to lowercase.
1. I often tend to use it when I want to pipe the output into a command. If it's a big transfer, I don't really want to have to stream several gigabytes of data onto disk, then off it, then back on again. Restoring a database dump is one example:
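The command itself appears to have been lost here; judging from the description below, it was presumably something along these lines (the host and database names are placeholders):

```shell
# Stream a dump straight into psql on the remote box, so the
# multi-gigabyte dump never has to be written to disk on either side:
pg_dump mydb | ssh dbhost "sudo -u postgres psql mydb"
```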
2. The other big advantage over scp is that sometimes, your target (deliberately) isn't accessible directly from an SSH login.
In the example above, I want to process that data as the 'postgres' user, but there's no way I'm allowing direct logins as that user. 'root' is, of course, the other common example.
Firstly, if the two machines are on the same switch this method is noticeably faster than scp'ing something large. The actual main reason though, is that this isn't limited to tarring over a network. What it comes down to, is you can almost seamlessly have a shell pipeline and one of those stages jumps a machine boundary.
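A concrete sketch of the pattern being described (host and paths are placeholders):

```shell
# Copy a whole directory tree to another machine in one pipeline,
# without the archive ever touching disk:
tar cf - mydir | ssh otherhost "tar xf - -C /destination"
# The same idea lets any pipeline stage jump the machine boundary,
# e.g. compress locally and decompress on the far side:
tar cf - mydir | gzip | ssh otherhost "gunzip | tar xf - -C /destination"
```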
I find xargs handy, it lets you use the output of one command as arguments for another command. As an example, I have a little alias that I use to grep through my ruby/rails source trees:
alias gr 'find . -name \*.rb -o -name \*.rhtml -o -name \*.erb -o -name \*.rjs | xargs egrep \!*'
(backticks can be used similarly, but can run into command line length limits)
This sounds useful but I don't like the fact that it's named "pv". I've always associated "pv" with physical volume and lvm... might be a little confusing.
And yet, it is vastly better than Windows actual command line and extremely useful for managing and developing on Windows boxes. I find your claim of it being a bloated piece of crap simply untrue.
Powershell is not a unix, just because it can do some of the same things and has aliases for some of the commands doesn't make it a drop in replacement. Cygwin is a drop in replacement that lets me run the same scripts I run on my Linux boxes, i.e. it is not redundant.
I'll take cygwin over powershell any and every day of the week.
I didn't state that Powershell was a Unix. Instead, I implied it's a superior way to control Windows from a command line, which it is.
Running Linux scripts is a double-edged sword: you can reuse existing scripts, but they can't access the whole Windows API, are too long due to unnecessary regex work, and can't report as easily.
I don't want to control Windows from the command line, I want to control my own applications, access the file system, and run and manage services.
Of course powershell integrates with Windows better than cygwin, the power of cygwin is that it makes Windows a usable Unix, which is far more important to me than controlling Windows.
SUA seems to be what my windowsy friends talk about. It's in newer versions of windows. It natively supports many UNIX apps, and has source compatibility with UNIX.
Sometimes, pv(1) doesn't help. For example, `tar cf - foo | bzip2 -9v >foo.tar.bz2': you don't know the size of the data that needs to pass down the pipe. But I sometimes find watching tar(1) open the files to read is handy; `strace -p $(pidof tar) -e trace=open'.