“In addition to viewing URLs in the standard Web browsers,
there are other useful ways of getting and using Web data on Linux
systems right now. Here are a few of them.”
“If you want to view an image file that’s on the Web, and you
know its URL, you don’t have to start a Web browser to do it —
give the URL as an argument to display, part of the ImageMagick
suite of imaging tools (available in the Debian imagemagick
package…)”
“If I want to read the text of an article that’s on the Web, and
I just want the text and not the Web design, I’ll often grab the
URL with the lynx browser using the -dump option. This dumps the
text of the given URL to the standard output; then I can pipe the
output to less for perusal, or use redirection to save it to a
file.”
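The two workflows described — piping the dump to less, or redirecting it to a file — can be sketched as below; the article URL is a hypothetical placeholder.

```shell
#!/bin/sh
# A minimal sketch of the lynx -dump workflow.
# The URL below is a made-up example, not from the original text.
url="https://www.example.com/article.html"

# Dump the rendered text of the page to standard output and
# page through it:
#   lynx -dump "$url" | less

# Or use redirection to save the text to a file:
#   lynx -dump "$url" > article.txt

echo "lynx -dump $url"
```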
“When I want to save the contents of a URL to a file, I often
use GNU wget to do it. It keeps the file’s original timestamp, it’s
smaller and faster to use than a browser, and it shows a visual
display of the download progress. (You can get it from the Debian
wget package or directly from any GNU archive.)”
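A sketch of fetching a URL with wget follows; the URL is a hypothetical placeholder, and the fetch itself needs network access.

```shell
#!/bin/sh
# A minimal sketch of saving a URL to a file with GNU wget.
# The URL below is a made-up example, not from the original text.
url="https://www.example.com/files/report.pdf"

# wget saves the file under its remote name, preserves the server's
# Last-Modified time as the local file's timestamp, and shows a
# progress display as it downloads:
#   wget "$url"

# The saved filename is the last component of the URL path:
echo "${url##*/}"
```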