Underline – A Minimalistic Readline Replacement

(Lib)underline is a small, libreadline-compatible command-line C library. The command-line interface (CLI) may well be the first, and arguably one of the most intuitive, human-computer interfaces.

I wrote underline when I needed a CLI for an embedded system with no operating system and very limited (and horribly outdated) libc support. My original thought was to port libreadline, the de facto standard. But in readline’s own words:
“It’s too big and too slow” (man 3 readline – Bugs section).

Underline implements a compatible (although partial) command-line library. The entire implementation is less than 500 lines of code and has some nice features:

  • Standard synchronous readline API for basic usage
  • Fully asynchronous API
  • History support (although quite limited)
  • Hotkey support (Home/End, etc.)
  • Basic VT100 compatibility; supports the standard termios library or a custom VT implementation
  • Fits embedded systems – no memory allocations, small code size

The asynchronous API is suitable for systems requiring high performance. It was tested with POSIX AIO, on Arduino and on RTOS-like systems.

Clone the code from the GitHub repo at: https://github.com/achayun/underline


The implementation is, well, minimal. There’s lots to do, especially if you have special needs.

Custom hotkey mapping – At the moment all hotkeys are hardcoded. They should be user-overridable.

Command completion – Hitting TAB should call a user-defined callback with the current partial word (and the full line).

Dynamic history buffer – The history buffer is static and not optimized. You could change the defines to get more history, or implement a custom memory allocator or an arena if you need a dynamic history size.

Dynamic prompt – The prompt can be used for fun stuff like showing the current path, the Git branch or the number of unread emails. Helpers to update the prompt would be a nice feature.

Terminal width/multiline support – Receive resize events from the terminal and support multiline prompts (like bash’s ‘\’ or IPython’s Shift+Enter).

Thread safety – Completely not written to be thread-safe. The synchronous API has a static context, and none of the functions were tested for reentrancy. If you need it, make the context struct thread-local or protect it, and audit the functions.

More modular code – History should probably be its own module.



Filed under Uncategorized

Quick How-to: Automatically Enable Keyboard Wakeup From Suspend

If you like your power usage low, you probably put your computer to sleep (suspend) as much as possible when idle. If you then want to wake the machine by hitting a key on your keyboard, you may need to break a sweat. Even on laptops where this feature is enabled by default, external keyboards sometimes won’t work.

If you’re using a modern Linux kernel (3.2 and above) and have udev installed, you can automatically enable wakeup upon connecting a keyboard by adding the following rule to your system.
Simply put the following under your udev rules (e.g. /etc/udev/rules.d/90-keyboardwakeup.rules):

#Whenever a keyboard is connected to the machine, enable wakeup on its parent device
#Note: Conditions are *string* matches, not numerical. Check your /sys files for the exact strings
#Enable wakeup on any keyboard connected. Keyboards are HID (class 3) protocol 1 devices.
SUBSYSTEM=="usb", ATTRS{bInterfaceClass}=="03", ATTRS{bInterfaceProtocol}=="01", RUN+="/bin/sh -c 'echo enabled > /sys$env{DEVPATH}/../power/wakeup'"

The example is based on this post, but instead of restricting the match to a single vendor/product, the rule should match any USB keyboard (I used a wireless Logitech to test it).

The above should work, but if you need to troubleshoot continue reading.

Note: Accessing device information in Linux (especially for USB devices) can be very confusing; proceed with patience and be ready to read around.

To test the rule above, you can use udevadm (finding your /dev/bus/… file is a little tricky; you can go over all of them until you find your device).
Replace the /dev/bus index with the appropriate one:

sudo udevadm test /dev/bus/001/005

If the condition matches, you should see something like this in the command output (truncated):


run: '/bin/sh -c 'echo enabled > /sys/devices/pci0000:00/0000:00:14.0/usb1/1-5/1-5:1.0/../power/wakeup''
Unload module index
Unloaded link configuration context.

Another test is to restart the udev service, disconnect and reconnect the USB keyboard and run (again, replace device index with appropriate one):

cat /sys/bus/usb/devices/1-5/power/wakeup

The output of the last command should be ‘enabled’.

Note that the matched device is the interface 1-5:1.0, but the test is for device 1-5. The reason is that power control sits on the USB device rather than on the interface (the keyboard function) itself. In this case the parent device was the wireless IR receiver; for a wired keyboard it will be the keyboard’s own USB device.
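In sysfs naming, the interface carries a ‘:configuration.interface’ suffix; stripping it (which is what the ‘..’ in the rule’s path does) gives the device that owns power/wakeup. A quick shell illustration using the names from this example:

```shell
iface="1-5:1.0"          # the matched HID interface
parent="${iface%%:*}"    # the USB device that owns power/wakeup
echo "$parent"           # prints 1-5
```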

Note that I did not find it necessary to enable wakeup on the parent USB hub, as recommended here. You may need to do so.

For the ultimate test, put your computer to sleep (the example below assumes systemd is installed; it has taken over power events from HAL):

sudo dbus-send --system --print-reply --dest=org.freedesktop.login1 /org/freedesktop/login1 org.freedesktop.login1.Manager.Suspend boolean:true

The computer should put itself to sleep. Hit the keyboard and check that it comes back online.


Filed under howto, linux

How to Fix Broken Steam Linux Client With Radeon Graphics Driver (Workaround)

Short Version
If Steam gives the error “OpenGL GLX context is not using direct rendering, which may cause performance problems.” and you’re sure the graphics driver is loaded properly, try the following script to run Steam:

export LD_PRELOAD='/usr/$LIB/libstdc++.so.6' #Export so all child processes are affected as well
export DISPLAY=:0
#export LIBGL_DEBUG=verbose
steam

Long Version
For the past 6 months my living-room PC has been a Linux box with an AMD graphics card. I could never get AMD’s binary driver (fglrx) to work reliably with Steam, but the open-source Radeon driver is surprisingly good and keeps improving. The downside is that every so often a Steam client update breaks things for weeks until another update fixes them. After going through this several times, I decided to actually figure out the root cause and fix it. Here’s a quick walkthrough on how to debug your Steam client setup and how to fix a common problem that may pop up with the open-source graphics driver.

Running steam from the command line gives the venerable ‘GLX direct rendering’ error, but there’s a hint in the lines before it:

Running Steam on ubuntu 14.10 64-bit
STEAM_RUNTIME is enabled automatically
Installing breakpad exception handler for appid(steam)/version(1420770381)
libGL error: unable to load driver: radeonsi_dri.so
libGL error: driver pointer missing
libGL error: failed to load driver: radeonsi
libGL error: unable to load driver: swrast_dri.so
libGL error: failed to load driver: swrast
Installing breakpad exception handler for appid(steam)/version(1420770381)

Steam can’t open radeonsi_dri.so, the shared library responsible for communicating with the graphics driver. To rule out a problem with the driver itself, let’s verify that an OpenGL-enabled graphics driver is loaded by running:
DISPLAY=:0 glxinfo | grep -i direct
The output should be:

direct rendering: Yes

Next, to debug Steam load run the Steam client from commandline in verbose mode:
DISPLAY=:0 LIBGL_DEBUG=verbose steam

And now the output is:
Running Steam on ubuntu 14.10 64-bit
STEAM_RUNTIME is enabled automatically
Installing breakpad exception handler for appid(steam)/version(1420770381)
libGL: screen 0 does not appear to be DRI3 capable
libGL: pci id for fd 7: 1002:6798, driver radeonsi
libGL: OpenDriver: trying /usr/lib/i386-linux-gnu/dri/tls/radeonsi_dri.so
libGL: OpenDriver: trying /usr/lib/i386-linux-gnu/dri/radeonsi_dri.so
libGL: dlopen /usr/lib/i386-linux-gnu/dri/radeonsi_dri.so failed (/home/user/.local/share/Steam/ubuntu12_32/steam-runtime/i386/usr/lib/i386-linux-gnu/libstdc++.so.6: version `GLIBCXX_3.4.20' not found (required by /usr/lib/i386-linux-gnu/dri/radeonsi_dri.so))
libGL: OpenDriver: trying ${ORIGIN}/dri/tls/radeonsi_dri.so
libGL: OpenDriver: trying ${ORIGIN}/dri/radeonsi_dri.so

The dlopen error tells us the file exists, but the libstdc++ version the driver requires (GLIBCXX_3.4.20) is missing from Steam’s copy. Let’s check which version is actually behind the libstdc++.so.6 that gets loaded:

~$ ls -l /home/user/.local/share/Steam/ubuntu12_32/steam-runtime/i386/usr/lib/i386-linux-gnu/libstdc++.so.6

lrwxrwxrwx 1 user user 19 Jul 19 00:52 /home/user/.local/share/Steam/ubuntu12_32/steam-runtime/i386/usr/lib/i386-linux-gnu/libstdc++.so.6 -> libstdc++.so.6.0.18

Let’s check what version is installed on the global /usr/lib path:
~$ ls -l /usr/lib/i386-linux-gnu/libstdc++.so.6

lrwxrwxrwx 1 root root 19 Oct 11 14:58 /usr/lib/i386-linux-gnu/libstdc++.so.6 -> libstdc++.so.6.0.20

So Steam loaded libstdc++ ABI version 6.0.18 where radeonsi expects 6.0.20.

Hold on for a second, why does this happen in the first place?

The answer is STEAM_RUNTIME. Steam for Linux ships with a set of (standard) shared libraries called the Steam Runtime. This lets the folks at Valve build the client against specific dependencies without fearing that the target machine will have different versions. It is somewhat similar to static compilation, where you compile the program’s dependencies into the main binary.

The problem is that Steam can’t come preloaded with ALL the libraries it needs and has to rely on the OS to supply some. This is exactly the case with radeonsi_dri.so, the library in charge of the client’s OpenGL interface with the actual driver. radeonsi also needs libstdc++, and it gets updated independently every once in a while. Whenever an update changes the required ABI, the system’s libstdc++ and Steam’s copy go out of sync.

So the only thing we need to do is get Steam to load the newer version of the library.

Some forums advise removing Steam’s copy of libstdc++ by running:

rm /home/user/.local/share/Steam/ubuntu12_32/steam-runtime/i386/usr/lib/i386-linux-gnu/libstdc++.so.6

etc. This won’t work, since Steam checks the integrity of its runtime and restores missing files on load.

A more elegant solution is to have the OS preload the proper libstdc++ version, by running:
LD_PRELOAD=/usr/lib/i386-linux-gnu/libstdc++.so.6 DISPLAY=:0 steam

This seems to work but prints a lot of errors of the form:
ERROR: ld.so: object '/usr/lib/i386-linux-gnu/libstdc++.so.6' from LD_PRELOAD cannot be preloaded (wrong ELF class: ELFCLASS32): ignored.

This error is an artifact of the fact that Steam is a 32-bit binary running on a 64-bit machine. libstdc++ is installed in two flavors (32-bit and 64-bit). The Steam binary needs the 32-bit version, but other parts of the client run as native 64-bit, and when they start they inherit an LD_PRELOAD value pointing at a library of the wrong format. So how do we tell LD_PRELOAD to take the right one?

LD_PRELOAD='/usr/$LIB/libstdc++.so.6' DISPLAY=:0 steam

The somewhat non-intuitive $LIB token is expanded by ld.so into the right path based on the platform of the process being started (see man 8 ld.so for details).
The quotes around the value are mandatory: they tell the shell not to expand $LIB itself but to pass it through as-is.

So finally Steam loads cleanly. A more permanent solution would be a launcher script that checks whether the system libstdc++ is newer than the one in the Steam Runtime and only then sets LD_PRELOAD. I’ll leave this as an exercise to the reader.
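As a starting point for that exercise, here is a sketch of the version check such a launcher could use. The version strings below are the ones from this walkthrough; a real script would read them by resolving each libstdc++.so.6 symlink (e.g. with readlink -f) rather than hardcoding them:

```shell
# True if $1 sorts strictly after $2 as a version string
newer_than() {
    [ "$1" != "$2" ] && \
    [ "$(printf '%s\n%s\n' "$1" "$2" | sort -V | tail -n1)" = "$1" ]
}

sys_ver="6.0.20"     # behind /usr/lib/i386-linux-gnu/libstdc++.so.6
steam_ver="6.0.18"   # behind the Steam Runtime copy

# Only preload when the system library is actually newer
if newer_than "$sys_ver" "$steam_ver"; then
    export LD_PRELOAD='/usr/$LIB/libstdc++.so.6'
fi
echo "LD_PRELOAD=$LD_PRELOAD"
# a real launcher would now run: DISPLAY=:0 steam
```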

Happy Linux beta gaming!


Filed under Uncategorized

Toggle KWin Composition Based on Power Profiles in KDE 4.8

One of the changes in KDE 4.8 was KWin’s new composition/effects toggle handling. As a side effect, the power-management profiles no longer turn off composition and effects when the AC cord is unplugged.

This behaviour may be fixed in the future; in the meantime, here is a simple workaround:

1. Create a shell script along these lines:

#!/bin/bash
if [[ "$1" == "on" ]]; then
    if [[ "$(qdbus org.kde.kwin /KWin org.kde.KWin.compositingActive)" == "false" ]]; then
        qdbus org.kde.kwin /KWin org.kde.KWin.toggleCompositing
    fi
elif [[ "$1" == "off" ]]; then
    if [[ "$(qdbus org.kde.kwin /KWin org.kde.KWin.compositingActive)" == "true" ]]; then
        qdbus org.kde.kwin /KWin org.kde.KWin.toggleCompositing
    fi
else
    echo "Usage: $0 [on|off]"
fi

2. Make the script executable (chmod +x) and test it from the command line.

3. Go to ‘Energy Saving Settings’ and edit the default ‘On AC Power’ profile. Check ‘Run Script’ and type in:
/full path to your script/script_name on


4. Repeat the same for the “On Battery” profile, but this time pass the argument off

5. Finally, test the script by disconnecting the AC power; KDE should notify you that composition has been suspended. Also, running the command:
qdbus org.kde.kwin /KWin org.kde.KWin.compositingActive
should return ‘false’.

Now you can enjoy extended battery life.


Filed under Uncategorized

Using Nvidia Optimus on Linux with Bumblebee

If you have an Optimus-enabled laptop running Linux, there is some good news.

First, some background. Nvidia released laptops with Optimus-based switchable graphics over a year ago, but Linux support was missing. This meant you had to choose in the BIOS between the battery-guzzling dedicated graphics card and the limited on-CPU graphics. On other operating systems, users can choose to run specific applications on the dedicated card while keeping it powered down when idle.

For some time now there has been unofficial work to get the feature working on Linux (see a previous post on the topic). One of these efforts evolved into the Bumblebee project.

Last week the project released version 0.3. The release features automatic, stable and persistent power-state handling. Simply put, this means that even if your laptop goes to sleep and wakes up, the Nvidia card will stay powered down unless you decide to run an application on it. Aside from automatic power-state handling, the project also lets the user run OpenGL applications on the Nvidia card with the ‘optirun’ command.

The installation process has been greatly simplified; follow the link to the instructions for your distribution on the release announcement page.

On the official side, Nvidia has started looking into ways to provide official support, but it seems this won’t happen in the immediate future.

Meanwhile, I highly recommend installing the unofficial support so you can enjoy your switchable graphics card under Linux (for example, check out StarCraft II under Wine).


Filed under Uncategorized

Displaying Floating Point Numbers

What do you do when the system you’re working on doesn’t support printing a floating-point variable? Below is a short introduction to floating-point numbers and a couple of approaches to implementing float-to-string conversion.

A common task when dealing with sensors is to display the value read as a real number with an integer and a fraction part. Most systems and languages provide library functions for this. For example, in C on a PC one can simply write:
printf("Value: %f", val);
This prints the value in the format AAAA.BBBBB. printf even lets the user control the precision (the number of digits after the decimal point) and more.

When working with very small or very large numbers, a preferred form is scientific notation (e.g. 6.022141e23). In printf notation:
printf("Value: %e", val);

The thing is, the printf function is very ‘heavy’, and many embedded-system implementations choose to supply a limited version without floating-point support. So how can we implement these facilities ourselves?

One common solution is to dump the number in its hexadecimal format. For example:
float val = 3.141592;
printf("Value: %08x", *((unsigned int *)&val));

The result would be: 40490fd8
The user can take the value and convert it back to a float. The method’s main advantage is that automated systems can always read the value without complicated parsing. The obvious drawback is that it’s really not human-friendly.

The direct approach, displaying the number in real format, means writing code like this:

#include <math.h>
#include <string.h>

#define ABS(x) ((x) < 0 ? -(x) : (x))

void strreverse(char* begin, char* end)
{
    char tmp;
    while (end > begin)
        tmp = *end, *end-- = *begin, *begin++ = tmp;
}

/*
 * Based on one of the versions at:
 * http://www.jb.man.ac.uk/~slowe/cpp/itoa.html
 * Look there for conversion to multiple bases
 */
char *itoa(long value, char* str)
{
    char *p = str;
    static const char digit[] = "0123456789";

    //Add sign if needed; skip it when reversing the digits
    if (value < 0) {
        *(p++) = '-';
        str++;
    }

    //Work on the absolute value
    value = ABS(value);

    // Conversion. Digits come out reversed.
    do {
        const long tmp = value / 10;
        *(p++) = digit[value - (tmp * 10)];   //like modulo 10, but fast
        value = tmp;
    } while (value);

    strreverse(str, p - 1); //Reverse the digits back
    return p;
}

static inline long lpow(int base, int exp)
{
    long result = 1;
    while (exp) {
        if (exp & 1)
            result *= base;
        exp >>= 1;
        base *= base;
    }
    return result;
}

/*
 * ftoa - Convert float to ASCII.
 * f - Input floating-point number
 * buf - Output string buffer, pre-allocated to sufficient size
 * places - digits after the decimal point
 * Returns pointer past the end of the written string.
 */
char *ftoa(float f, char *buf, int places)
{
    if (signbit(f))
        *(buf++) = '-';

    if (isnan(f)) {
        memcpy(buf, "nan", 4);
        return buf + 3;
    }
    if (isinf(f)) {
        memcpy(buf, "inf", 4);
        return buf + 3;
    }

    long int_part = (long)f;
    const long prec = lpow(10, places);
    long frac_part = lround((f - int_part) * prec);

    //handle fraction rounding up to 1.0
    if (ABS(frac_part) == prec) {
        signbit(f) ? int_part-- : int_part++;
        frac_part = 0;
    }

    buf = itoa(ABS(int_part), buf);
    *(buf++) = '.';

    //fraction leading zeroes
    if (frac_part) {
        long tmp = ABS(frac_part) * 10;
        while (tmp < prec) {
            *(buf++) = '0';
            tmp *= 10;
        }
    }

    buf = itoa(ABS(frac_part), buf);
    *buf = '\0';
    return buf;
}

This implementation is cross-platform and doesn’t rely on the internal format of the floating-point number. It’s also limited and not very efficient: the function requires floating-point multiplication and comparisons. Why isn’t there a straightforward way to print a floating-point number?
In almost all modern computer systems, a single-precision floating-point number (a.k.a. float) is implemented according to the IEEE 754 standard. The number’s internal makeup is as follows:

sign | exponent | significand (mantissa)
1 bit | 8 bits | 23 bits

The format represents a normalized real number (similar to the scientific notation above) and also encodes special values like infinity and not-a-number.
Still, the format seems very similar to the one we want to print, so why is it so complicated? The answer is its base: the number is coded in binary, so the represented number looks like 1.0010101e101. Just as with the integer form, we need to base-convert it to decimal. The manual algorithm can be found here and here. A code example implementing this can be found here, or a fast low-precision form here.

The base-conversion method is faster but also more bug-prone and machine-dependent.

Perhaps unsurprisingly, the IEEE 754 standard also defines decimal floating-point storage, but I must admit I haven’t seen a modern system implementing it, since binary floating-point arithmetic is much simpler in hardware.

In the end I went with the simple direct method, which doesn’t require in-depth knowledge of the float format. Still, there are some uses for the internal structure of floats, such as approximating inverse square roots or exponents.

I’m still missing a fast implementation of a function to print a floating-point number in scientific notation. Can you recommend one?

For an in-depth look at floating-point implementation and common pitfalls, I highly recommend reading What Every Computer Scientist Should Know About Floating-Point Arithmetic. The paper covers rounding errors and best practices and removes some of the black magic around the topic.


Filed under code

Debian Squeeze Released

Just in time for the Super Bowl, Debian 6.0 (Squeeze) was released earlier today. I’ve been using it on multiple installations, for servers, desktops and a laptop, and I can report it is rock solid, as we’ve come to expect from Debian. Hardware support is very broad (although I don’t know whether Intel’s latest Sandy Bridge chipset is completely supported). The software collection is mostly up to date, with the exception of the desktop environments, which lag a bit behind (GNOME 2.30, KDE 4.4.5 and Xfce 4.6), all in the name of stability and a safe migration from the previous stable release.

If you’re planning to install Debian 6.0, please note that this is the first time Debian is releasing its mainline image with no non-free firmware. This is done to make sure the image is true to the Debian spirit. The step may cause problems for users with hardware that requires non-free firmware (Broadcom NICs, for example). Such users are advised to use the Debian alternative CD or to check out Debian’s firmware page.

Debian also launched an updated website with new branding and good accessibility. The look is refreshing, but a couple of additions would be nice. First, it would be useful to link instructions about non-free firmware from the main page and the download links. Second, an official VirtualBox image of Debian would be much appreciated.

While for some people this marks the dist-upgrade season for their servers, I’m planning to move my laptop to the new testing repository and get all the bleeding-edge software releases that were held up by the Squeeze release.

Congratulations to the entire Debian team on this achievement, and happy installing to everyone.


Filed under linux