Sunday, December 13, 2009

ATI : Binary Driver Howto

By default Ubuntu will use the open source 'ati' or 'radeon' driver for cards manufactured by ATI. Some users however prefer the proprietary 'fglrx' driver for various reasons. The instructions on this page will tell you how to use this driver.

There are two ways you can install the proprietary fglrx drivers. The preferred way is to use the drivers provided via the Ubuntu repositories. More advanced users can also try the drivers from ati.com. Both approaches are documented below; you need to follow only one of them. The Ubuntu-provided drivers are the safest bet; the ati.com drivers, however, may be needed in some cases (e.g. when you need hibernation).

As mentioned elsewhere, if you encounter bugs with these closed-source drivers, developers will not be willing or even able to assist you in resolving your issues. Use them at your own risk. We encourage our users to prefer open source drivers.

Prerequisites

Make sure the following things are true about your video card:

  • It is a 'Radeon' card
  • The model of the card is in the 9xxx series, 9500 or higher, or it is in the X series (e.g. X300), or it has TV-Out capability. The 'fglrx' driver does not support cards earlier than the 9500
  • The command lspci reveals a card with "ATI" in its name
  • You need hardware-accelerated 3D support, or display refresh rates higher than 60 Hz. The open source drivers are fine for all other areas
  • Some basic knowledge of a Linux command line (see UsingTheTerminal)

Note that if you own an ATI card from the R400 series or below, you already have working 2D and may have accelerated 3D with the default drivers. These cards include:

  • R400 series Xnnn (X800, X700, etc) (3D works)
  • R300 series (9300+) (3D works)
  • R200 and R100 series (9200 and below)

For specific chipsets and models, see the Xorg 7.0 Release Notes.

Install from Ubuntu repositories (easier)

Instructions for Ubuntu 9.04 (Jaunty)

Enable the accelerated ATI graphics driver in the 'Hardware Drivers' (System->Hardware drivers), then do:

sudo dpkg-reconfigure -phigh linux-restricted-modules-`uname -r`
sudo insmod /lib/modules/`uname -r`/volatile/fglrx.ko

Log out and log in.


Instructions for Ubuntu 8.04 (Hardy) and 8.10 (Intrepid)

Enable the accelerated ATI graphics driver in the hardware drivers menu (System->Administration->Hardware Drivers), then do:

sudo dpkg-reconfigure -phigh linux-restricted-modules-`uname -r`
sudo insmod /lib/modules/`uname -r`/volatile/fglrx.ko

Log out and log in.

Instructions for Kubuntu 7.10 (Gutsy)

First make sure linux-restricted-modules-generic and restricted-manager-kde are both installed:

sudo apt-get install linux-restricted-modules-generic restricted-manager-kde

Open the restricted drivers manager from KMenu → System Settings → Advanced → Restricted Drivers and select "ATI accelerated graphics driver". This will hopefully enable fglrx in a painless way. If not, follow the instructions for Feisty.

Instructions for Ubuntu 7.10 (Gutsy)

  • Install linux-restricted-modules and restricted-manager provided in the restricted repositories:

sudo apt-get update
sudo apt-get install linux-restricted-modules-generic restricted-manager

Open the restricted drivers manager in "System -> Administration -> Restricted Drivers Manager" and select "ATI accelerated graphics driver".

Instructions for 6.06 (Dapper)

Install the kernel drivers. These drivers should be installed by default, but it's better to make sure. You need the package linux-$arch, where you replace $arch with the CPU architecture of the machine: 386 for Intel Pentium; 686 for Celeron, Pentium Pro, Pentium II, Pentium III, and Pentium 4 without Hyper-Threading; 686-smp for Pentium 4 with Hyper-Threading; or k7 or k7-smp for AMD Athlon. On 64-bit systems, this may be amd64-generic, amd64-k8, amd64-k8-smp, or amd64-xeon.

sudo apt-get install linux-686
or
sudo apt-get install linux-k7
or
...
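If you are unsure which flavour you are currently running, the kernel release string tells you (the suffix-to-package mapping is the usual convention, not guaranteed for every release):

```shell
# Print the running kernel release.  The suffix of the release string
# (-386, -686, -k7, ...) indicates the matching linux-$arch package.
uname -r
```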

You also need to install the restricted-modules package that exactly matches the kernel you are running, as well as the specific required packages. (If you ran the previous command, make sure to reboot into your new kernel first; otherwise this will install the wrong kernel modules!)

sudo apt-get update
sudo apt-get install linux-restricted-modules-$(uname -r)
sudo apt-get install xorg-driver-fglrx fglrx-control

Note that the fglrx-control package is optional; it seems to be buggy, but it won't affect your machine in any way.

If the restricted-modules package for the kernel you are running is not available (it happens sometimes with K/Ubuntu), you may have to run a kernel for which this package is available, or install the drivers directly from the setup script provided by ATI (https://support.ati.com/ics/support/default.asp?deptID=894&task=knowledge&folderID=27)

Once the above packages are correctly installed, run these commands:

sudo aticonfig --initial
sudo aticonfig --overlay-type=Xv

Then go back and edit xorg.conf with your favorite editor, perhaps:

gksudo gedit /etc/X11/xorg.conf

or:

kdesu kate /etc/X11/xorg.conf

and make sure that under the "Device" section, the Driver is set to

Driver "fglrx"

You will have two Device sections related to your graphics card. One is the pre-aticonfig section and will use the ati or radeon driver; there is no need to change it, since it is no longer used by X.Org. The other Device section, however, must use the fglrx driver.
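To see at a glance which section uses which driver, you can grep the file; `list_drivers` below is just an illustrative helper name, not a standard tool:

```shell
# Print every Driver line in an xorg.conf-style file, with line numbers,
# so you can confirm which Device section is set to "fglrx".
list_drivers() {
    grep -n 'Driver' "$1"
}
```

For example: `list_drivers /etc/X11/xorg.conf`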

It appears that fglrx is often unstable, at least on AMD64. The system may lock up with the 8.25 driver, and 3D acceleration may not work with 8.28. The 8.26.18 driver may be your best bet, as of September 2006. Instructions for updating drivers are at: http://wiki.cchtml.com/index.php/Ubuntu_Dapper_Installation_Guide#Method_2:_Generating.2FInstalling_Ubuntu_packages_for_the_8.28.8_drivers_in_Ubuntu_Dapper_Manually But I recommend 8.26 at this time, not 8.28.

Reboot.

Confirm it worked, by issuing the "fglrxinfo" command:

  • fglrxinfo/glxinfo may not work properly for you via SSH and via the console when logged in as root.

$ fglrxinfo
display: :0.0 screen: 0
OpenGL vendor string: ATI Technologies Inc.
OpenGL renderer string: RADEON 9700 Generic
OpenGL version string: 2.0.5755 (8.24.8)

Source: http://wiki.cchtml.com/index.php/Ubuntu_Dapper_Installation_Guide

Troubleshooting

You may see a message

Xlib:  extension "XFree86-DRI" missing on display ":1.0".

If the line

load "dri"

in

Section "Module"

is missing from your /etc/X11/xorg.conf then add it. However this message does not necessarily indicate a problem.

If fglrxinfo gives you the following, your installation is not completed correctly:

  • fglrxinfo/glxinfo may not work properly for you via SSH and via the console when logged in as root.

$ fglrxinfo
display: :0.0 screen: 0
OpenGL vendor string: Mesa project: www.mesa3d.org
OpenGL renderer string: Mesa GLX Indirect
OpenGL version string: 1.2 (1.5 Mesa 6.4.1)

In this case, watch for these things:

  • Make sure that the restricted-modules package installed corresponds to the kernel you are running, and that you can load the fglrx driver, either by issuing the command "sudo modprobe fglrx" or by verifying that the module appears in the list of loaded modules shown by the command "lsmod";
  • It may be necessary to establish a symbolic link for the /usr/lib/dri folder, by issuing the following command: "sudo ln -s /usr/lib/dri /usr/lib/xorg/modules/dri";
  • You may have to unload the radeon and dri modules, by issuing "sudo rmmod radeon" and "sudo rmmod dri";
  • Make sure you unload the ati-agp module by issuing "sudo rmmod ati-agp" and blacklist it in /etc/modprobe.d/blacklist.
  • Check that Section "Module" in /etc/X11/xorg.conf contains the line Load "dri" and that it is not commented out.

Install from ati.com (latest version of drivers)

WARNING: this method of installing the driver is not recommended and not supported, and any problems that occur after using the following instructions should not be reported to the Launchpad bug area.

Instructions for Ubuntu 8.04 (Hardy) with ATi 8.443.1-1 and above binary drivers

To begin first install the needed packages:

sudo apt-get install dpkg-dev debhelper libstdc++5 dkms build-essential cdbs fakeroot

You will then need to build the installation packages with the downloaded ATi drivers (ensure the ATi drivers have the execute flag set first):

./ati-driver-installer-8.443.1-x86.x86_64.run --buildpkg Ubuntu/ 

You can append the codename for the version of Ubuntu you are running to Ubuntu/ in the command above (e.g. Ubuntu/gutsy, Ubuntu/hardy, Ubuntu/intrepid).

Then install the binary drivers:

sudo dpkg -i fglrx-kernel-source_.deb

Run the following command to install the Xorg driver

sudo dpkg -i xorg-driver-fglrx_.deb

Finally run aticonfig to build your new xorg.conf if you have not done so before:

sudo aticonfig --initial

Reboot and X.Org should start with the ATi binary drivers fully functional. To confirm the drivers are working from a terminal run:

fglrxinfo

You should get output similar to the following:

display: :0.0  screen: 0
OpenGL vendor string: ATI Technologies Inc.
OpenGL renderer string: ATI Radeon HD 3870
OpenGL version string: 2.1.7170 Release

If you see any mention of "MESA" in the output, the drivers have not installed correctly. See the instructions below for possible fixes.

Instructions for 6.06 (Dapper)

  1. Download the appropriate drivers from ati.com. You will need the ATI Driver Installer, not the separate XFree86/X.org rpm packages. Save the installer into an empty directory (or at least one containing no *.deb files), since it will create several new files.

  2. Make sure the universe section of the Ubuntu repositories is enabled (See the AddingRepositoriesHowto)

  3. Perform the following commands (substituting the version number of the installer into the filename):

    $ sudo apt-get install fakeroot gcc-3.4 module-assistant build-essential debhelper
    $ fakeroot sh ./ati-driver-installer-.run --buildpkg Ubuntu/edgy
    You may need to wait a few minutes for this to complete.

This will create a number of .deb files in the current directory. Note: if you run Dapper, replace "edgy" (above) with "dapper".

1 sudo dpkg -i *.deb
2 sudo module-assistant prepare,update
3 sudo module-assistant build,install fglrx-kernel
4 sudo depmod
Note: you need to repeat steps 2-4 (building the kernel module) every time you upgrade the kernel.

Seveas Repository

You do not need to take all these steps if you run an up-to-date Dapper installation on a 32 bit system. Dennis Kaarsemaker provides these packages in a repository. Add the following line to /etc/apt/sources.list:

deb http://mirror.ubuntulinux.nl/ dapper-seveas drivers

Then you can simply install the ubuntu-fglrx-$arch (see above for the meaning of $arch) package.

/!\ The fglrx driver on Dapper (8.26.18-1) can cause rss-glx screensavers to run very slowly.

Modifying xorg.conf

When you install the ati.com drivers or the dapper-seveas packages, you still need to change xorg.conf and add the fglrx module to /etc/modules as described under "Ubuntu provided drivers". There are scripts from ATI that may or may not work for you. They will back up xorg.conf before modifying it.

$ sudo aticonfig --initial
$ sudo aticonfig --overlay-type=Xv
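Adding fglrx to /etc/modules, as mentioned above, can also be scripted so that repeated runs don't duplicate the entry; `ensure_module` here is a hypothetical helper written for illustration, not part of any Ubuntu tool:

```shell
# Append a module name to a modules file unless it is already listed.
# In practice you would run this as root against /etc/modules.
ensure_module() {
    modfile=$1
    mod=$2
    grep -qx "$mod" "$modfile" 2>/dev/null || echo "$mod" >> "$modfile"
}
```

For example, as root: `ensure_module /etc/modules fglrx`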

/!\ Whether you install manually or from dapper-seveas, you MUST disable the Ubuntu-provided fglrx by performing these actions:

  • Disable fglrx in /etc/default/linux-restricted-modules-common
  • Run sudo /sbin/lrm-manager
  • Run sudo depmod -a
  • Reboot

There is a forum thread on installing ATI drivers from ati.com. Look there if you have trouble, and if your problem isn't already solved there, post a question.

More Troubleshooting

  • If after everything you still get indirect rendering when typing "fglrxinfo" run:

$ depmod -ae

Now reboot and hope for the best.

  • If you're using an AMD64 configuration and your Xorg.0.log mentions a 'duplicate symbol rol_long' message, comment out the 'Load "int10"' line in the Module section of /etc/X11/xorg.conf
  • If you are using an ATI Radeon Xpress 200M on an AMD64 CPU and the fglrx driver crashes with a blank screen on startup, change your BIOS settings to use the UMA+Sideport Video Mode with 128MB of Shared Video Memory. See http://ensode.net/ati_radeon_xpress_200m_linux.html

  • If you are going to compile 3d applications, you will want to install the fglrx-driver-dev package

  • The fglrx driver doesn't support 16-bit colour on some chipsets; if you have problems with X locking up on boot, try setting the default depth in your xorg.conf file to 24
  • If you are having problems related to DRI or 3d acceleration and the following lines show up in your /var/log/Xorg.0.log

    (WW) fglrx(0): Kernel Module version does *not* match driver.
    (EE) fglrx(0): incompatible kernel module detected - HW accelerated OpenGL will not work

    then make sure you installed either linux-$arch or ubuntu-fglrx-$arch. Another reason for this error message, or for incorrect driver information when running fglrxinfo (it reports that the Mesa driver is still being used), could be that the (K)Ubuntu fglrx drivers were not uninstalled before installing the ATI driver, or that the restricted-modules package is still installed. To fix this issue, start Adept or Synaptic and remove the fglrx packages supplied with (K)Ubuntu as well as the restricted-modules package. Quit KDE and go to a console.

$ sudo modprobe -r fglrx
$ gksudo gedit /etc/X11/xorg.conf
or
$ kdesu kate /etc/X11/xorg.conf
  • Change the driver for the device to 'ati' instead of fglrx to use the standard Xorg supplied driver.

$ startx
  • Now re-run the ATI driver installation
  • If there are no obvious error messages in Xorg.0.log but 3D acceleration is still not working, you should look at glxinfo output in debug mode:

LIBGL_DEBUG=verbose glxinfo
  • Possibly there are some errors in the beginning concerning not found drivers in /usr/X11R6/lib/modules/dri/. This could be the case if you used the driver from ATI and are now using again the provided fglrx driver. ATI's fglrx driver installs a script in /etc/X11/Xsession.d/10fglrx which changes the search path for libraries, causing 3D-related errors. In this case just remove the script:

$ sudo rm /etc/X11/Xsession.d/10fglrx
  • Sometimes 2D acceleration with xv is not enabled. You need this for smooth video playback among other things. In this case you should check if your /etc/X11/xorg.conf contains the line Option "VideoOverlay" "on" in the corresponding section:

Section "Device"
Identifier "ATI Radeon"
Driver "fglrx"
Option "VideoOverlay" "on"
BusID "PCI:1:0:0"
EndSection

General Troubleshooting

Video-out

Black and White/Wrong Colours or Scrolling Picture

  • If you are having issues with S-Video out where the video is incorrectly displayed (such as black and white or scrolling), try adding the line
            Option      "TVFormat" ""

    into the Device section that lists your monitor/TV, as the output to the TV might be set to PAL or another format incompatible with your region/TV. Replace the empty value with the region-specific video format. The TVFormat choices are: NTSC-JPN, NTSC-M, NTSC-N, PAL-B, PAL-CN, PAL-D, PAL-G, PAL-H, PAL-I, PAL-K, PAL-K1, PAL-L, PAL-N, PAL-M, PAL-SCART.

Example:

Section "Device"
Identifier "ATI Radeon 9600/X1050 (Screen 1)"
Driver "fglrx"
Option "TVFormat" "NTSC-M" # NTSC-M is used in North America -- Canada/USA -- among other countries
BusID "PCI:2:0:0"
Screen 1
EndSection
Don't forget that most older TVs use a standard resolution of 640x480, so it may be wise to force that resolution:
Section "Screen"
Identifier "TV"
Device "ATI Radeon 9600/X1050 (Screen 1)"
Monitor "Toshiba 32in (Screen 1)"
DefaultDepth 24
SubSection "Display"
Viewport 0 0
Depth 24
Modes "640x480" # Force this standard resolution
EndSubSection
EndSection
More information about this issue is listed in the Ubuntu Forums and also through the MythTV Wiki. If you are unsure of your region, review this region list. The fglrx man page might be useful as well.

Thursday, November 26, 2009

Safely Creating Temporary Files in Shell Scripts

The basic idea of a symlink exploit is to predict where an application will create its temporary file and put a symbolic link (symlink) at that place. When the application now tries to work with a tempfile it will actually work with the file that the symlink points to. The most simplistic thing a (local) attacker can do with this is to have the symlink point to important files such as /bin/bash. This will cause the program (assuming it has root privileges) to overwrite /bin/bash with its temporary data, making the system unusable. . . .

Safely Creating Temporary Files in Shell Scripts



0. Abstract

This paper discusses how a programmer can write shell scripts that securely create temporary files in world/group writable directories. After explaining why it is important to be careful with temporary files I give some hints on how to identify and fix vulnerable shell scripts. This paper concentrates on how things are done. I intentionally leave out lots of gory details in order to make this document shorter and easier to understand for people that just want to write secure code with as little extra effort as possible.

1. Introduction

1.1 Why I wrote this

There are actually many documents and papers out there that describe how to securely create temporary files, so why yet another one? Unfortunately, most of these documents refer mainly to C programs, even though many temporary files are created by shell scripts. Such shell scripts are wrappers for other programs or configuration scripts; they are inside Makefiles, they are install scripts included in RPM packages, or just small tools written by the admin. Due to the nature of these scripts they very often handle temporary files. In contrast to C/C++, the shell has no standard mechanism for creating temporary files. To compensate, programmers have to reinvent the wheel time and time again, which is very error prone. As a result many scripts handle temporary files insecurely, creating a security risk that is often underestimated.

1.2 Symlink Exploits: The Basic Idea

Programmers often do not think that creating temporary files in a secure way is important. After all it is just a tiny shell script. I will give a very brief introduction of how this can be exploited by an attacker in order to underline the importance of secure tempfile handling:

The basic idea of a symlink exploit is to predict where an application will create its temporary file and put a symbolic link (symlink) at that place. When the application now tries to work with a tempfile it will actually work with the file that the symlink points to. The most simplistic thing a (local) attacker can do with this is to have the symlink point to important files such as /bin/bash. This will cause the program (assuming it has root privileges) to overwrite /bin/bash with its temporary data, making the system unusable.

It is also possible to use this type of exploit to gain complete root privileges. If an attacker is somehow able to influence what data the insecure program writes to the tempfile (now /bin/bash) he can as a result modify the contents of /bin/bash or any other executable on the system. Obviously he will then get root privileges the next time the modified executable is run by root. There are more advanced concepts for exploiting insecure tempfile creation (not all of them involve symlinks), but these simple examples should demonstrate that insecure file creation may cause a lot of trouble.

2. Common mistakes

2.1 Predictable filename

In order to make it easier for you to identify vulnerable code I will show you some examples of how you should NOT create your temporary files.

Very often shell scripts create their temporary files without any checks at all:

 sort /some/file > /tmp/tempfile

This will make an exploit trivial: Just create a symlink called /tmp/tempfile and let it point to a file you want to overwrite.

Other vulnerable programs will create temporary files with a name like /tmp/tempfile.$$. The $$ variable contains the process ID (PID) of the shell running the shell script. This is very useful if you have several shell scripts running in parallel: Each one will have its unique tempfile name (because it has a unique PID) so there will be no conflicts. Unfortunately this does not improve security. An attacker knows that the PID of your program will most likely be between 1 and 33000. So the attacker will create 33000 symlinks, knowing that one of them will work.

2.2 Race conditions

The obvious solution to the problems from 2.1 seems to be this:

 rm -rf /tmp/tempfile.$$
sort /some/file > /tmp/tempfile.$$

"rm" will delete the symlink itself (not the file that the symlink points to), so that the "sort" command will be safe. It may sound good, yet the script is still insecure because now you have a race condition. Once the symlink gets deleted the attacker will know that "rm" was successfully run by the victim. Now he re-creates the symlink in the hope that the sort command has not yet created the tempfile. The attacker cannot be 100% sure that his exploit will be successful, but he can keep trying until it works.

2.3 Identifying vulnerable code

There are many variations of the above examples so one could think this kind of vulnerability is hard to find. Actually, it is much easier to detect than buffer overflows or format string vulnerabilities. Just do a

 grep -RH /tmp *

if you suspect that your package contains vulnerable code. If you feel really bored some day just try and scan your entire system, you might be surprised! Check if the files found by grep handle their tempfiles securely. If they do not, the next chapter will tell you how to fix this.

3. Writing secure code

3.1 Don't use tempfiles

This may sound a little stupid at first, yet there is some truth to it: If you don't do anything you can't mess up anything. Sometimes programs use tempfiles even though it is not necessary. Most commands can write their results to standard output instead of a file. Assuming that you have just a few bytes or kilobytes of temporary data it could actually be much more convenient for you as a programmer to store it in a variable rather than a file. This way you can sometimes avoid any risk of insecurely creating tempfiles simply by not creating them at all.
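For instance, the `sort` example from section 2.1 needs no tempfile at all; the result fits comfortably in a variable (sample input inlined here for illustration):

```shell
# Instead of: sort /some/file > /tmp/tempfile
# capture the output in a variable -- no file on disk, no symlink to attack.
sorted=$(printf 'banana\napple\ncherry\n' | sort)
echo "$sorted"    # prints apple, banana, cherry (one per line)
```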

3.2 Use mktemp

If you have to use tempfiles then mktemp could be what you are looking for. Realizing that the creation of tempfiles in shell scripts is a security problem, Todd C. Miller wrote mktemp, a tool that will do this for you. The man page gives you some good examples of how to use it securely, so I am not going to explain that. If you write shell scripts for an environment where mktemp is available you should definitely use it to create your temporary files.

Just one little problem: Not all Linux distributions come with mktemp in their default install, so you cannot rely on mktemp to be available. Other Unixes (e.g. Solaris) don't necessarily have it either. If you want your script to be able to run on a wide variety of systems you will have to get along without mktemp. If maximum compatibility is not that much of an issue for you, you can just tell your users to install mktemp before they use your software. Mktemp will compile almost everywhere, so that should not be a big issue for most people.

3.3 Notes on mktemp

What you should know is that Suse Linux comes with a rather old version of mktemp. This older version of mktemp does not understand all of the command line options that are described on the mktemp website. It only understands the -d -q and -u options. If you try to use some of the newer options mktemp will simply exit and signal an error.

Similarly, the mktemp that comes with Mac OS X and FreeBSD 4.9/5.2 only supports -d -q -u and -t. Mktemp on Debian supports -d -q -u -t -p but not the -V option. So you should be careful when you use the newer -V -p and -t options. It is probably better to stick to -d -q and -u for now to ensure portability. Optionally you may ask your distributor of choice to update their mktemp package.
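Putting the portability notes together, here is a sketch that uses only the -d and -q options, which all the mktemp versions discussed above understand:

```shell
# Create a private (mode 700) temporary directory with portable options
# only: -d makes a directory, -q suppresses mktemp's own error message
# so we can print our own.
tmpdir=$(mktemp -d -q "${TMPDIR:-/tmp}/myscript.XXXXXX") || {
    echo "could not create temporary directory" 1>&2
    exit 1
}
echo "$tmpdir"
```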

3.4 Creating tempdirs with mkdir

Ok, so you are writing scripts for systems that may not have mktemp. Let's look for something to replace it that is available on all platforms. The mkdir command will do the job. "mkdir /tmp/tempdir.$$" will create the directory /tmp/tempdir.$$ and signal an error if it already existed. This is similar to what "mktemp -d" would do. But mkdir lacks some of the luxury that mktemp provides, so the code gets a little more complicated:

 (umask 077 && mkdir /tmp/tempdir.$$) || exit 1

Explanation: The umask command ensures that when mkdir creates the directory it will have an access mode of 700. Running chmod 700 afterwards would be a bit less secure. Because the umask is in parentheses it will not affect the umask for anything other than the mkdir command. If mkdir signals an error for whatever reason, the program exits. This approach is used by many shell scripts and is secure against symlink exploits. The code should work on every decent operating system that deserves to be called Unix-like.

Now that you have created yourself a secure temporary directory you can create new files/directories/whatever inside this tempdir without having to worry about security: You own the directory, you are the only one that has access to it. This makes it easy to fix existing code. All you need to do is add the code for the secure tempdir creation, adjust a few pathnames and you are finished! Just don't forget to clean up after yourself and delete the tempdir with "rm -rf" when you are finished with it.
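One way to make the cleanup automatic is a trap on EXIT, which POSIX shells run when the script terminates; a sketch building on the mkdir recipe above:

```shell
# Create the secure tempdir, then register a cleanup handler that fires
# when the script exits, including early "exit 1" failures.
tmp=/tmp/mywork.$$
(umask 077 && mkdir "$tmp") || exit 1
trap 'rm -rf "$tmp"' EXIT

# ... create files inside "$tmp" here ...
echo "working in $tmp"
```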

3.5 Mkdir deluxe

While the above code is pretty good there is still room for improvements. First you should find out if the admin really wants you to use /tmp/ for your temporary data. For this purpose there is a shell variable called TMPDIR. If TMPDIR is set you should use this directory to create your temporary files. That way the admin can for example improve security by assigning each user a separate (=safe) directory for temporary data.

Using $$ is a convenient way of creating unique names for tempfiles. And even though the filename can be predicted the directory is still created in a secure way by the mkdir command. The question that remains is: Secure against what? Well, it is secure against symlink exploits and the like. But it is not secure against a DoS attack. An attacker simply keeps your application from creating its temporary directory by imitating (!) a symlink attack. That way your program always exits and never completely does what it is supposed to do. Depending on the program, this alone can be a security threat.

This problem can be avoided by appending $RANDOM to the filename. This variable contains a random integer between 0 and 32767. In order to get filenames that are as unpredictable as the ones created by mktemp you will need to append $RANDOM to your filename three times for a total of 35184372088832 (=2^45) different possible combinations. So here comes the code to create a temporary directory:

 tmp=${TMPDIR-/tmp}
tmp=$tmp/somedir.$RANDOM.$RANDOM.$RANDOM.$$
(umask 077 && mkdir $tmp) || {
echo "Could not create temporary directory! Exiting." 1>&2
exit 1
}

Explanation:
 Line 1 sets tmp to $TMPDIR if $TMPDIR is defined; if $TMPDIR is undefined, "/tmp" is used as a default. Line 2 builds the actual directory name using $RANDOM three times. It is important that you put some kind of separator like the "." between the numbers. Without the "." there would be no difference between the random numbers 12, 34 and 56 (=123456) and the random numbers 123, 45 and 6 (=123456). As a result the number of different combinations would decrease considerably, making your directory name more predictable. You should replace "somedir" with a more meaningful name in your own script. If anything goes wrong, an error message is printed (to stderr, where it belongs) and the program exits. If everything is ok, $tmp will contain the name of your temporary directory.

The only trouble you may have with this is $RANDOM. This variable is a feature of the Bourne Again Shell (bash) and not included in the normal Bourne Shell (sh). But on every Linux I have seen (and on MacOS X) both "sh" and "bash" are just the good old GNU bash that will happily provide you with $RANDOM. On other systems it may not be available.

But even then there is no reason to worry. In those cases $RANDOM will simply be an uninitialized variable that defaults to "". Hence your directory will be "/tmp/somedir....123" if your PID is 123. This is also the reason why I append the $$ to the directory name: To make sure I get unique directory names even if $RANDOM is not supported. On systems that don't provide $RANDOM this program will act just like the one given in chapter 3.4.

3.6 Mixing mktemp and mkdir

Many shell scripts try to use mktemp to create their tempdirs. If mktemp is not available they will resort to using mkdir. At first glance this makes perfect sense: try to use the secure mktemp and only use mkdir if you have to. But once you think about it you will realize that this is rather illogical. Because for you as a programmer there are two possible scenarios that you have to take into account when writing a shell script:

1. mktemp is always available
In this case there is no point in including the code that uses mkdir. It will never be used and is therefore dead code that should be removed.

2. mktemp may not be available.
That means now you have to include the mkdir code. Of course this code is secure enough for your purpose. If it is not, then you got a vulnerability that you should fix right now! So if you already have code that uses mkdir and is sufficiently secure, why would you want to use mktemp? Your program works perfectly fine without it and you get no security benefit from using it. Obviously, mktemp is unnecessary in this scenario and the code using it should be removed.

There may be exceptions to this, but 99% of the time you should use either mktemp or mkdir but NOT both.

If you really do need to mix mktemp and mkdir (which is unlikely) make sure you don't repeat a very common mistake:

 #try mktemp
tempfile=`mktemp /tmp/secureXXXXXXXX`

#if mktemp didn't work, fall back to an insecure name
test -z "$tempfile" && tempfile=/tmp/insecure$$

do_whatever > "$tempfile"

You would expect this code to be secure on all systems that provide mktemp. Wrong! This one is vulnerable on every system. An attacker will prepare a symlink attack against the /tmp/insecure$$ file name. Then he will fill up the /tmp partition with data. As /tmp is full, mktemp will fail and the insecure file name will be used. You may now object that there are quotas and the like. But you as a programmer cannot expect everybody to set up quotas for /tmp. And even if there is a quota an intelligent attacker can still flood /tmp in most cases. A solution could look like this:

 #look for mktemp
mymktemp=`which mktemp`

test -n "$mymktemp" && {
    : #mktemp available
}

test -z "$mymktemp" && {
    : #no mktemp
}

This code checks if mktemp can be found and reacts accordingly. But as stated above, you should hardly ever need to do this.

4.0 Conclusion

As you have seen there are many mistakes one can make when creating tempfiles that can lead to anything from downtime to root compromise. Even though /tmp is the most prominent example for such security issues, it is certainly not the only dangerous directory. Whenever you work in group- or world-writable directories you have to make sure that your temporary files are created securely. You can protect yourself against such attacks quite easily if you just copy and paste a few lines of secure code into your script and adjust a number of lines. So you should take 5 minutes and check your scripts, it's definitely worth the time.

Monday, August 31, 2009

Remastering a Ubuntu Alternate CD

From COMPSA

  • Warning! If you are not familiar with at least the basics of Linux or virtual machines, it is suggested that you spend some time using them before attempting to follow the instructions on this page.

Overview

The goal of this page is to outline the process we've settled on for creating an Ubuntu install CD that is faster and easier to run on a large number of systems than the regular install. This custom CD is also supposed to have all updates streamed into it (or at least a mechanism in place to perform these updates automatically) and should install common software not included in Ubuntu's default installation (such as the Flash player, MP3 support, etc.).

Useful tools

  • VMware Server is helpful for testing installations quickly (and can test remastered CDs without requiring you to use up physical media). It is available at no cost from VMware.com (you'll need a serial number from the same site; feel free to use mailinator or the like instead of your real info if you'd like, though they are a well-behaved company in terms of sending out e-mail).
  • QEMU is also available; it can perform the same function as VMware Server, but also allows people on other architectures to make their machine emulate an x86 processor. QEmu website
  • You will require a Linux distribution to be able to remaster CDs. Just about any modern distribution is fine, as the tools we use are pretty much universal (mount, rsync, a text editor, and mkisofs). The remastering process requires a significant amount of hard drive space (I'd suggest planning for at least 10 GB, assuming you use one or more virtual machines for testing your CD).

The Process

1) Fetch a copy of Ubuntu's alternate install CD if you don't have one already. Do not get the Live CD (which is the default one). Unless you know otherwise you want to get the i386 version (since that's what all of the machines we've been donated so far require).

2) Extract the files from the ISO image to a directory; let's call it buntu. (Note: you can't use a graphical tool like Archive Manager to do this; it'll appear to succeed but will create a corrupt CD later on. You must perform these commands as root.)

mkdir -p ubuntu-iso buntu
mount -t iso9660 -o loop,ro ubuntu-6.06.1-alternate-i386.iso ubuntu-iso/
rsync -azvb --delete ubuntu-iso/ buntu/

3) Change into directory buntu

4) Add the following to isolinux/isolinux.cfg, directly before the LABEL install entry (everything after append should be on a single line):

LABEL forcause
menu label Install Co^mputers for a Cause version
kernel /install/vmlinuz
append debian-installer/locale=en_US kbd-chooser/method=us
netcfg/get_hostname=ubuntu debconf/priority=critical
preseed/file=/cdrom/preseed/cause.seed
initrd=/install/initrd.gz ramdisk_size=16384 root=/dev/ram rw --

(this adds a new option to the menu when the install CD starts up titled Install Computers for a Cause version and specifies language and location information. It also tells the installer where to look for information about what installation options to pick and tells the installer not to ask about anything but critical decisions)

5) Copy the contents of cause.seed to a plain text file preseed/cause.seed
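For orientation, a preseed file is a list of debian-installer answers, one per line, in "owner question type value" form. A hypothetical excerpt (not the actual contents of cause.seed; see the Debian/Ubuntu installation guide's preseeding appendix for the full list of questions):

```
d-i passwd/user-fullname string Computers for a Cause
d-i passwd/username string cause
d-i time/zone string Canada/Eastern
```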

6) Optional: if you'd like, edit isolinux/splash.pcx to be a pretty custom startup screen.

7) Change back up a directory so you're in the directory above buntu

8) Generate a new CD image called NewCD.iso

mkisofs    \
-r \
-V "Ubuntu 6.06.1 i386" \
-cache-inodes \
-J \
-l \
-b isolinux/isolinux.bin \
-c isolinux/boot.cat \
-no-emul-boot \
-boot-load-size 4 \
-boot-info-table \
-o ../NewCD.iso buntu/

9) Voila! You now have a customized Ubuntu CD. Feel free to test it in VMWare or QEmu or to burn it to a physical CD.
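For a quick smoke test without burning media, one option is QEMU. A tiny hypothetical helper, assuming the era's `qemu` binary (newer systems ship it as qemu-system-i386):

```shell
#!/bin/sh
# Boot an ISO image in QEMU; `-boot d` boots from the CD-ROM drive,
# and 256 MB of RAM is plenty for the installer.
boot_test() {
    qemu -m 256 -cdrom "${1:-NewCD.iso}" -boot d
}
# Usage: boot_test NewCD.iso
```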

See also

These links contain more information or possible alternative approaches:

  • Instalinux - Create a custom network install image

Monday, August 17, 2009

Connect desktop apps using D-BUS

Helping applications talk to one another

Ross Burton (r.burton@180sw.com), Software Engineer, OneEighty Software, Ltd.

Summary: D-BUS is an up-and-coming message bus and activation system that is set to achieve deep penetration in the Linux® desktop. Learn why it was created, what it can be used for, and where it is going.

D-BUS is essentially an implementation of inter-process communication (IPC). However, several features distance D-BUS from the stigma of being "Yet Another IPC Implementation." There are many different IPC implementations because each aims to solve a particular well-defined problem. CORBA is a powerful solution for complex IPC in object-oriented programming. DCOP is a lighter IPC framework with less power, but is well integrated into the K Desktop Environment. SOAP and XML-RPC are designed for Web services and therefore use HTTP as the transport protocol. D-BUS was designed for desktop application and OS communication.

Desktop application communication

The typical desktop has multiple applications running, and they often need to talk to each other. DCOP is a solution for KDE, but is tied to Qt and so is not used in other desktop environments. Similarly, Bonobo is a solution for GNOME, but is quite heavy, being based on CORBA. It is also tied to GObject, so it is not used outside of GNOME. D-BUS aims to replace DCOP and Bonobo for simple IPC and to integrate these two desktop environments. Because the dependencies for D-BUS are kept as small as possible, other applications that would like to use D-BUS don't have to worry about bloating dependencies.

Desktop/Operating System communication

The term "operating system" here includes not only the kernel but also the system daemons. For example, with a D-BUS-enabled udev (the Linux 2.6 replacement for devfs, providing dynamic /dev directories), a signal is emitted when a device (such as a USB camera) is inserted. This allows for tighter integration with the hardware in the desktop, leading to an improved user experience.

D-BUS features

D-BUS has several interesting features that make it look like a very promising candidate.

The protocol is low-latency and low-overhead, designed to be small and efficient to minimize round-trips. In addition, the protocol is binary, not textual, which removes the costly serialization process. The use cases are biased towards processing on the local machine, so all messages are sent in the native byte ordering. The byte ordering is stated in each message, so if a D-BUS message travels over a network to a remote host, it can still be parsed correctly.

D-BUS is easy to use from a developer's point of view. The wire protocol is simple to understand, and the client library wraps it in an intuitive manner.

The library has also been designed to be wrapped by other systems. It is expected that GNOME will create wrappers around D-BUS using GObject (indeed these partially exist, integrating D-BUS into their event loop), and that KDE will create similar wrappers using Qt. There is already a Python wrapper that has a much simpler interface, due to Python's object-orientation and flexible typing.

Finally, D-BUS is being developed under the umbrella of freedesktop.org, where interested members from GNOME, KDE, and elsewhere participate in the design and implementation.


The inner workings of D-BUS

A typical D-BUS setup will consist of several buses. There will be a persistent system bus, which is started at boot time. This bus is used by the operating system and daemons and is tightly secured so that arbitrary applications cannot spoof system events. There will also be many session buses, which are started when a user logs in and are private to that user. It is a session bus that the user's applications will use to communicate. Of course, if an application wants to receive messages from the system bus, it can connect to it as well -- but the messages it can send will be restricted.

Once applications are connected to a bus, they have to state which messages they would like to receive by adding matchers. Matchers specify a set of rules for messages that will be received based on interfaces, object paths, and methods (see below). This enables applications to concentrate on handling what they want to handle, to allow efficient routing of messages, and to keep the anticipated multitude of messages across buses from grinding the performance of all of the applications down to a crawl.
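Match rules themselves are just strings of key='value' pairs. As an illustration (the interface name is the one used in the C listings later on), the dbus-monitor tool that ships with D-BUS accepts the same rule syntax, so you can watch matching signals from a shell:

```shell
#!/bin/sh
# A D-BUS match rule: receive only signals on one interface.
rule="type='signal',interface='com.burtonini.dbus.Signal'"

# Uncomment to watch matching signals on your session bus:
# dbus-monitor "$rule"

echo "$rule"
```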

Objects

At its heart, D-BUS is a peer-to-peer protocol -- every message has a source and a destination. These addresses are specified as object paths. Conceptually, all applications that use D-BUS contain a set of objects, and messages are sent to and from specific objects -- not applications -- that are identified by an object path.

Additionally, every object can support one or more interfaces. These interfaces appear at first to be similar to interfaces in Java or pure virtual classes in C++. However, there is no way to check whether objects implement the interfaces they claim to implement, and no way of introspecting an object to list the interfaces it supports. Interfaces are used to namespace the method names, so a single object can have multiple methods with the same name but with different interfaces.

Messages

There are four types of messages in D-BUS: method calls, method returns, signals, and errors. To perform a method on a D-BUS object, you send the object a method call message. It will do some processing and return either a method return message or an error message. Signals are different in that they cannot return anything: there is neither a "signal return" message nor an error reply.

Messages can also have arbitrary arguments. Arguments are strongly-typed, and the types range from fundamental primitives (booleans, bytes, integers) to high-level structures (strings, arrays, and dictionaries).

Services

Services are the highest level of abstraction in D-BUS, and their implementation is currently in flux. An application can register a service with a bus, and if it succeeds, the application has acquired the service. Other applications can check whether a particular service exists on the bus and can ask the bus to start it if it doesn't. The details of the service abstraction -- particularly service activation -- are under development at the moment and are liable to change.


Use cases

Even though D-BUS is relatively new, it has been adopted very quickly. As I mentioned earlier, udev can be built with D-BUS support so that it sends a signal when a device is hot-plugged. Any application can listen to these events and perform actions when they are received. For example, gnome-volume-manager can detect the insertion of a USB memory stick and automatically mount it; or, it can automatically download photos when a digital camera is plugged in.

A more amusing but far less useful example is the combination of Jamboree and Ringaling. Jamboree is a simple music player that has a D-BUS interface so that it can be told to play, go to the next song, change the volume, and so on. Ringaling is a small program that opens /dev/ttyS0 (a serial port) and watches what is received. When Ringaling sees the text "RING," it uses D-BUS to tell Jamboree to turn down the volume. The net result is that if you have a modem plugged into your computer and your phone rings, the music is turned down for you. This is what computers are for!


Code examples

Now, let's walk through a few example uses of D-BUS code.

dbus-ping-send.c sends a signal over the session bus every second with the string "Ping!" as an argument. I'm using GLib to manage the bus so that I don't need to deal with the details of the bus connection myself.


Listing 1. dbus-ping-send.c
#include <glib.h>
#include <dbus/dbus-glib.h>

static gboolean send_ping (DBusConnection *bus);

int
main (int argc, char **argv)
{
GMainLoop *loop;
DBusConnection *bus;
DBusError error;

/* Create a new event loop to run in */
loop = g_main_loop_new (NULL, FALSE);

/* Get a connection to the session bus */
dbus_error_init (&error);
bus = dbus_bus_get (DBUS_BUS_SESSION, &error);
if (!bus) {
g_warning ("Failed to connect to the D-BUS daemon: %s", error.message);
dbus_error_free (&error);
return 1;
}

/* Set up this connection to work in a GLib event loop */
dbus_connection_setup_with_g_main (bus, NULL);
/* Every second call send_ping() with the bus as an argument*/
g_timeout_add (1000, (GSourceFunc)send_ping, bus);

/* Start the event loop */
g_main_loop_run (loop);
return 0;
}

static gboolean
send_ping (DBusConnection *bus)
{
DBusMessage *message;

/* Create a new signal "Ping" on the "com.burtonini.dbus.Signal" interface,
* from the object "/com/burtonini/dbus/ping". */
message = dbus_message_new_signal ("/com/burtonini/dbus/ping",
"com.burtonini.dbus.Signal", "Ping");
/* Append the string "Ping!" to the signal */
dbus_message_append_args (message,
DBUS_TYPE_STRING, "Ping!",
DBUS_TYPE_INVALID);
/* Send the signal */
dbus_connection_send (bus, message, NULL);
/* Free the signal now we have finished with it */
dbus_message_unref (message);
/* Tell the user we sent a signal */
g_print("Ping!\n");
/* Return TRUE to tell the event loop we want to be called again */
return TRUE;
}

The main function creates a GLib event loop, gets a connection to the session bus, and integrates the D-BUS event handling into the Glib event loop. Then it creates a one-second timer that calls send_ping, and starts the event loop.

send_ping constructs a new Ping signal, coming from the object path /com/burtonini/dbus/ping and interface com.burtonini.dbus.Signal. Then the string "Ping!" is added as an argument to the signal and sent across the bus. A message is printed on standard output to let the user know a signal was sent.

Of course, it is not good to fire signals down the bus if there is nothing listening to them... which brings us to:


Listing 2. dbus-ping-listen.c
#include <glib.h>
#include <dbus/dbus-glib.h>

static DBusHandlerResult signal_filter
(DBusConnection *connection, DBusMessage *message, void *user_data);

int
main (int argc, char **argv)
{
GMainLoop *loop;
DBusConnection *bus;
DBusError error;

loop = g_main_loop_new (NULL, FALSE);

dbus_error_init (&error);
bus = dbus_bus_get (DBUS_BUS_SESSION, &error);
if (!bus) {
g_warning ("Failed to connect to the D-BUS daemon: %s", error.message);
dbus_error_free (&error);
return 1;
}
dbus_connection_setup_with_g_main (bus, NULL);

/* listening to messages from all objects as no path is specified */
dbus_bus_add_match (bus, "type='signal',interface='com.burtonini.dbus.Signal'");
dbus_connection_add_filter (bus, signal_filter, loop, NULL);

g_main_loop_run (loop);
return 0;
}

static DBusHandlerResult
signal_filter (DBusConnection *connection, DBusMessage *message, void *user_data)
{
/* User data is the event loop we are running in */
GMainLoop *loop = user_data;

/* A signal from the bus saying we are about to be disconnected */
if (dbus_message_is_signal
(message, DBUS_INTERFACE_ORG_FREEDESKTOP_LOCAL, "Disconnected")) {
/* Tell the main loop to quit */
g_main_loop_quit (loop);
/* We have handled this message, don't pass it on */
return DBUS_HANDLER_RESULT_HANDLED;
}
/* A Ping signal on the com.burtonini.dbus.Signal interface */
else if (dbus_message_is_signal (message, "com.burtonini.dbus.Signal", "Ping")) {
DBusError error;
char *s;
dbus_error_init (&error);
if (dbus_message_get_args
(message, &error, DBUS_TYPE_STRING, &s, DBUS_TYPE_INVALID)) {
g_print("Ping received: %s\n", s);
dbus_free (s);
} else {
g_print("Ping received, but error getting message: %s\n", error.message);
dbus_error_free (&error);
}
return DBUS_HANDLER_RESULT_HANDLED;
}
return DBUS_HANDLER_RESULT_NOT_YET_HANDLED;
}

This program listens for the signals dbus-ping-send.c is emitting. The main function starts as before, creating a connection to the bus. Then it states that it would like to be notified when signals with the com.burtonini.dbus.Signal interface are sent, sets signal_filter as the notification function, and enters the event loop.

signal_filter is called when a message that matches the rules is sent. However, it will also receive bus management signals from the bus itself. Deciding what to do when a message is received is a simple case of examining the message header. If the message is a bus disconnect signal, the event loop is terminated, as there is no point in listening to a non-existent bus. (The bus is told that the signal was handled.) Next, the incoming message is compared to the message we are expecting, and, if it matches, the argument is extracted and output. If the incoming message is neither of those, the bus is told that we did not handle the message.

Those two examples used the low-level D-BUS library, which is complete but can be long-winded to use when you want to create services and many objects. This is where the higher-level bindings come in. There are C# and Python wrappers in development that present a programming interface far closer to the logical model of D-BUS. As an example, here is a more sophisticated reworking of the ping/listen example in Python. Because the Python bindings model the logical interface, it is not possible to send a signal without it coming from a service. So this example also creates a service:


Listing 3. dbus-ping-send.py
#! /usr/bin/env python

import gtk
import dbus

# Connect to the bus
bus = dbus.Bus()

# Create a service on the bus
service = dbus.Service("com.burtonini.dbus.SignalService", bus)

# Define a D-BUS object
class SignalObject(dbus.Object):
def __init__(self, service):
dbus.Object.__init__(self, "/", [], service)

# Create an instance of the object, which is part of the service
signal_object = SignalObject(service)

def send_ping():
signal_object.broadcast_signal("com.burtonini.dbus.Signal", "Ping")
print "Ping!"
return gtk.TRUE

# Call send_ping every second to send the signal
gtk.timeout_add(1000, send_ping)
gtk.main()

Most of the code is self-explanatory: a connection to the bus is obtained, and the service com.burtonini.dbus.SignalService is registered. Then a minimal D-BUS object is created and every second a signal is broadcast from the object. This code is clearer than the corresponding C code, but the Python bindings still need work. (For instance, there is no way to add arguments to signals.)


Listing 4. dbus-ping-listen.py
#! /usr/bin/env python

import gtk
import dbus

bus = dbus.Bus()

def signal_callback(interface, signal_name, service, path, message):
print "Received signal %s from %s" % (signal_name, interface)

# Catch signals from a specific interface and object, and call signal_callback
# when they arrive.
bus.add_signal_receiver(signal_callback,
"com.burtonini.dbus.Signal", # Interface
None, # Any service
"/" # Path of sending object
)

# Enter the event loop, waiting for signals
gtk.main()

This code is more concise than the equivalent C code in dbus-ping-listen.c and is easier to read. Again, there are areas where the bindings still need work (when calling bus.add_signal_receiver, the user must pass in an interface and an object path; otherwise, malformed matchers are created). This is a trivial bug, and once it is fixed, the service and object path arguments could be removed, improving the readability of the code even more.


Conclusion

D-BUS is a lightweight yet powerful remote procedure call system with minimal overhead costs for the applications that wish to use it. D-BUS is under active public development by a group of very experienced programmers. Acceptance of D-BUS by early adopters is rapid, so it appears to have a rosy future on the Linux desktop.


Resources

  • You'll find info, downloads, documentation, and more at the D-BUS home page.

  • D-BUS is developed as part of freedesktop.org/.

  • CORBA is a powerful, standardized remote procedure call specification.

  • ORBit is the CORBA implementation used in GNOME. (The GNOME component system Bonobo is built on top of ORBit).

  • KDE's remote procedure call implementation is DCOP.

  • Project Utopia aims to build seamless integration of hardware in Linux and uses D-BUS to achieve this.

  • Previously for developerWorks, Ross wrote Wrap GObjects in Python (developerWorks, March 2003), which shows how you can use a C-coded GObject in Python whenever you like, whether or not you're especially proficient in C.

  • The tutorials Bridging XPCOM/Bonobo: Techniques (developerWorks, May 2001) and Bridging XPCOM/Bonobo: Implementation (developerWorks, May 2001) discuss concepts and techniques required for bridging two component architectures so that the components from one architecture can be used in another environment.

  • Connect KDE applications using DCOP (developerWorks, February 2004) introduces KDE's inter-process communication protocol and how to script it.

  • Synchronizing processes and threads (developerWorks, October 2001) looks at inter-process synchronization primitives as a way to control two processes' access to the same resource.

  • CORBA Component Model (CCM) outlines the CORBA specification and CORBA interoperability with other component models.

  • Find more resources for Linux developers in the developerWorks Linux zone.

  • Browse for books on these and other technical topics.

  • Download no-charge trial versions of selected developerWorks Subscription products that run on Linux, including WebSphere Studio Site Developer, WebSphere SDK for Web services, WebSphere Application Server, DB2 Universal Database Personal Developers Edition, Tivoli Access Manager, and Lotus Domino Server, from the Speed-start your Linux app section of developerWorks. For an even speedier start, help yourself to a product-by-product collection of how-to articles and tech support.

About the author

Ross Burton is an average computer science graduate who by day codes Java and embedded systems. By night, to get away from the horrors, he prefers Python, C, and GTK+. You can contact Ross at r.burton@180sw.com.

Saturday, July 11, 2009

Three scripts for package management on Debian and Ubuntu systems

Five of the top 10 most downloaded distributions on Distrowatch use the Debian package system. It has developed a rich infrastructure of utilities -- not just the core commands apt-get and dpkg, but also such less well-known commands as apt-cache, apt-spy, and apt-listbugs. In addition, an array of other scripts, some mashups of existing utilities, and some original, are regularly available on sites like openDesktop.org. Such scripts help to streamline the process of keeping a Debian-based package system in working order, and provide information to help you make better decisions about software installation.

These scripts join a host of graphical front ends to apt-get and GUI tools for searching package repositories. However, because efficient package management is still done at the command line, they have a relevance that many command-line tools lack today. Some are simple, some are specific to one distribution (usually Ubuntu), and you might need to modify them before they suit your needs. However, all can be surprisingly useful if you believe in making hands-on decisions about the contents of your system instead of relying on the update applet in your notification tray. Here are three excellent examples.

Apt-utility

Apt-utility is a simple bash script designed for those who like to keep their systems constantly up to date, and would prefer not to enter the commands one at a time.

Unfortunately, the script is a little muddled in the apt-get commands that it issues. It begins, reasonably enough, by using apt-get update to make sure that the package repositories are current. However, it then runs apt-get upgrade, followed immediately by apt-get dist-upgrade -- which is a redundancy, since dist-upgrade does everything that upgrade does, as well as handling the dependencies in new versions of packages. Then it runs apt-get clean, followed by apt-get autoclean, leaving autoclean with nothing to do, since clean has already cleared out the /var/cache/apt/archives directory. The script ends with apt-get autoremove to remove packages that were installed as dependencies for packages that are no longer on the system.

You can fix those issues with a little judicious editing. While you are removing apt-get upgrade and apt-get autoclean from the script, you can also remove sudo from the start of every line of the script if you are not using the sudo command to access the root account (by default, Debian does not use sudo, while Ubuntu does).
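The trimmed sequence is short enough to sketch as a function (run as root, or with sudo on Ubuntu). This is my summary of the fixes just described, not the actual Apt-utility script:

```shell
#!/bin/sh
# Refresh the package lists, upgrade (dist-upgrade also covers
# everything plain upgrade does), empty /var/cache/apt/archives,
# and remove packages only installed as dependencies of software
# that is no longer on the system.
refresh_system() {
    apt-get update &&
    apt-get dist-upgrade &&
    apt-get clean &&
    apt-get autoremove
}
```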

You should also check which repositories are enabled in /etc/apt/sources.list before you use this script or your own version of it. Used with the stable or even the testing repositories of Debian, Apt-utility should be safe, but automated updates with the unstable or experimental repositories enabled could result in broken dependencies and, in extreme cases, even major system problems.

Ubucleaner

Although it's intended primarily for Ubuntu, Ubucleaner is a grab-bag script that works -- mostly -- with other Debian-based distributions as well. The script cleans the apt-cache, removes the configuration files for removed packages, removes all kernels except the present one, and empties the trash for every user on the system. The kernel removal function works only with Ubuntu kernels.

What all these tasks have in common is that they remove extra files from the system, so you might want to edit the script or comment out sections that you don't want. In particular, considering that a backup kernel may be useful if tinkering disables your current one, you might want to disable the kernel removal feature -- as well as the "Removing old kernels" message, so you don't have a heart attack when you run the script.

The script assumes that you are using the text-based Aptitude application, rather than apt-get and dpkg. If you are not using Aptitude, you should also replace the reference to aptitude clean with apt-get clean and the reference to aptitude purge to dpkg --purge.
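One quick way to make those substitutions is with sed. The example below runs on a stand-in file so it is self-contained; in practice, point sed at wherever you saved the Ubucleaner script instead of creating one (GNU sed's -i edits the file in place):

```shell
#!/bin/sh
# Create a two-line stand-in for the real script, then rewrite the
# aptitude invocations to their apt-get/dpkg equivalents.
printf 'aptitude clean\naptitude purge "$OLDCONF"\n' > ubucleaner.sh
sed -i -e 's/aptitude clean/apt-get clean/' \
       -e 's/aptitude purge/dpkg --purge/' ubucleaner.sh
cat ubucleaner.sh
```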

daptup

Like Ubucleaner, daptup is intended for use with Aptitude. However, it is far safer than Ubucleaner, since it is purely informational, building for Debian-derived distributions lists of new packages, upgradeable packages, watched packages, and outdated packages that have upgrades available. You can use these lists to plan your software upgrades.

You can configure the content of these lists by editing the file /etc/daptup.conf. Here you can set such criteria as how many days old a package should be before it is listed as outdated, which packages on your system to watch, and which not-yet-installed packages to watch. You might, for instance, want to keep checking on whether the latest version of OpenOffice.org has arrived in a repository, or to wait until Gnash reaches its 1.0 release before you install it. The configuration file is heavily commented and includes examples, so you should have little trouble setting daptup to run exactly as you want.

If you do not automatically upgrade, or if you know that a large number of packages have recently flooded into the repositories you use, you might want to run daptup piped into either the less or more command (for instance, daptup | less); otherwise, the lists could easily be longer than your display buffer, so that the first entries disappear before the last one is visible. Alternatively, you might comment out some of the lists in /etc/daptup.conf, or change the number of lines in the display buffer in your terminal program's configuration settings.

Conclusion

There are many other scripts out on the Internet for Debian-based package management. For instance, UnusedPkg lists programs on the system that are not used and therefore might be removable, but the download is apparently no longer available. And for more advanced users who want to examine and compare dpkg status files, the awk script dpkg-diff might come in handy.

All these scripts help you gain more information about your system, and most are easy to modify even if you know little about any form of scripting. If you keep an eye on sites like openDesktop.org that list new applications, chances are that you will have no trouble finding utilities that allow you to make more intelligent decisions about the software you are running.

Bruce Byfield is a computer journalist who writes regularly for Linux.com.