## Wednesday, March 28, 2007

### Quicksort

Sorting is something most programs have to do on occasion. Quicksort is widely regarded as the best general-purpose sorting algorithm in practice today. Standard C offers a built-in Quicksort under the function name qsort(), with the following prototype:

`void qsort(void *base, size_t nmemb, size_t size, int(*compar)(const void *, const void *));`

Now while it may be built in, each C library implements it differently. It's also designed to work with any kind of type/structure, so one can generally optimize it further for the specific data being sorted. Since sorting is important, and can take a while with many items, one generally wants it to be as optimized as possible.
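For those who haven't used it, here's how calling qsort() typically looks. The comparator receives pointers to two elements and returns negative, zero, or positive, like strcmp() does (the function names here are just for illustration):

```c
#include <stdlib.h>

/* Comparator for qsort(): a and b point at the elements themselves.
   Returns <0, 0, or >0, like strcmp(). */
int compare_ints(const void *a, const void *b)
{
  int x = *(const int *)a;
  int y = *(const int *)b;
  return (x > y) - (x < y); /* avoids the overflow risk of x - y */
}

void sort_ints(int *arr, size_t nmemb)
{
  qsort(arr, nmemb, sizeof(arr[0]), compare_ints);
}
```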

I've looked at several implementations from various C libraries out there, and they all have one thing in common: they're messy. One I looked at didn't even implement Quicksort, but a Shakersort.

I'm wondering perhaps if some of the clever people who read this blog can come up with a clean but fast implementation.

To start off, here's a mostly unoptimized simple implementation of the standard qsort():

```c
static void swap_internal(char *a, char *b, size_t size)
{
  if (a != b)
  {
    char t;
    while (size--)
    {
      t = *a;
      *a++ = *b;
      *b++ = t;
    }
  }
}

static void qsort_internal(char *begin, char *end, size_t size, int(*compar)(const void *, const void *))
{
  if (end > begin)
  {
    char *pivot = begin;
    char *l = begin + size, *r = end;

    while (l < r)
    {
      if (compar(l, pivot) <= 0)
      {
        l += size;
      }
      else
      {
        r -= size;
        swap_internal(l, r, size);
      }
    }
    l -= size;
    swap_internal(begin, l, size);
    qsort_internal(begin, l, size, compar);
    qsort_internal(r, end, size, compar);
  }
}

void qsort(void *base, size_t nmemb, size_t size, int(*compar)(const void *, const void *))
{
  qsort_internal((char *)base, (char *)base+nmemb*size, size, compar);
}
```

Note, this implementation is ~40 lines and can be improved: better pivot selection, removing the recursion, and switching to an insertion sort for small partitions.
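To give an idea of one such improvement, here's a sketch of an insertion sort over the same byte-range representation, which could serve as the base case once a partition drops below some threshold (insertion_sort_internal and compar_int are my own names, not from any C library):

```c
#include <stddef.h>

/* Hypothetical base case for small ranges: insertion sort over the
   byte range [begin, end), with elements of the given size. */
static void insertion_sort_internal(char *begin, char *end, size_t size,
                                    int(*compar)(const void *, const void *))
{
  char *i, *j;
  size_t k;

  for (i = begin + size; i < end; i += size)
  {
    /* Sift the element at i backwards while it's smaller than its predecessor */
    for (j = i; j > begin && compar(j - size, j) > 0; j -= size)
    {
      for (k = 0; k < size; k++) /* swap j with j-size, byte by byte */
      {
        char t = j[k];
        j[k] = *(j - size + k);
        *(j - size + k) = t;
      }
    }
  }
}

/* Example comparator for ints, just for demonstration */
static int compar_int(const void *a, const void *b)
{
  return (*(const int *)a > *(const int *)b) - (*(const int *)a < *(const int *)b);
}
```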

I'm challenging my readers to give me a good clean implementation which is under 80 lines (bad indentation doesn't count, I'll be running submitted code through my formatter), standards-compliant, and isn't a total wreck. Creating some helper functions to make it clearer is a good idea too. Almost every implementation I've seen uses gotos all over the place, and has returns smack in the middle of the function.

It'd be nice to bring something so useful into the new millennium, instead of using messy or slow implementations from the 70s.

If I get some nice implementations in, I'll do another piece on how to optimize Quicksort for sorting certain types/structures.

## Monday, March 26, 2007

### Methods for safe string handling

Every now and then you hear about how a buffer overflow was discovered in some program. Immediately, everyone jumps on the story with their own take on the matter. Fans of a different programming language will say: "of course this wouldn't have happened if you'd used my programming language". Secure library advocates will say: "you should have used that library instead". While experts in that language will say: "the guy was an idiot, he coded that all wrong".

I'd like to look at basic "C string" handling in C. We're talking about functions like strlen(), strcpy(), strcat(), and strcmp(). These functions report length, copy, concatenate, and compare C strings respectively. A C string is an array of characters terminated with a null character to signify the end. strlen() finds the length by seeing how far from the beginning the null is. strcat() finds the null in the first C string, and then copies characters from the second string to that location, until it finds that string's null. So on and so forth with all the C string functions.
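To make the null-terminator convention concrete, here's what a minimal strlen() could look like (a sketch of the idea, not any particular library's implementation; my_strlen is a made-up name to avoid clashing with the standard one):

```c
#include <stddef.h>

/* Walks forward from s until the terminating null; the distance
   walked is the string's length. */
size_t my_strlen(const char *s)
{
  const char *p = s;
  while (*p)
    p++;
  return (size_t)(p - s);
}
```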

Now these functions are seen by some as inherently broken, since they can read/write data right off the end of the buffer, and the terminating nulls can sometimes vanish if one isn't careful. You see these kinds of issues with C++ programs too. Usually when a Java programmer hears about this, they tell you to use Java, since Java has a built in string class which can't get screwed up by any of these simple operations. Learned C++ programmers know that C++ also has a built in string class, just as good, if not better than Java's string class. Knowledgeable C++ programmers will use C++'s strings to avoid all these issues, and have really nice features, such as sane string comparison using ==.

Switching from C++ to Java because people know that Java has a string class is kind of ridiculous. Yet I keep hearing from all kinds of people how programmers should switch to Java because of it. One using C++ should generally use the C++ class unless they have a good reason otherwise. It's a shame most C++ programmers use C strings instead of C++ ones, probably because they don't know about them. However, this doesn't help when programming in pure C.

For pure C, you can turn to one of the libraries for handling strings, such as SafeStr. But most people won't choose to go this route. Due to this, care has to be taken.

Now I won't kid you: if there's a buffer overflow in a C program, it is the programmer's fault. If s/he had been more careful, it wouldn't have happened. However, some areas of code are large, have many code paths, and are downright confusing. In those cases, it's easy to screw up. To help prevent screwing up, there exist some more C functions for handling strings properly, and some C libraries also provide extra non-standard functions.

To answer the problem of overflowing a buffer when copying or concatenating, "n" versions are provided: strncpy() and strncat(). They take a third parameter telling them how much data they're dealing with. strncpy()'s n is how big the buffer is, and it'll copy at most that many characters. strncat()'s n is how many characters may be appended to the end of the first string. In the case of strncpy(), if no null is found in the copy process, the result is not null terminated, leaving us with the other problem. strncat(), on the other hand, will always null terminate, because it writes up to n characters plus a terminating null, i.e. up to n+1 bytes. Meaning that you have to pass strncat() the number of remaining bytes in the buffer minus 1. This leads to confusion, because of the different meanings of n, and because strncpy() introduces another problem.
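The strncpy() pitfall is easy to demonstrate; copy_is_terminated is a made-up helper that reports whether a terminating null actually landed in the buffer:

```c
#include <string.h>

/* Copies src into buf with strncpy(), then checks whether buf ended up
   null terminated. When src has bufsize or more characters, strncpy()
   fills the buffer completely and writes no terminator. */
int copy_is_terminated(char *buf, size_t bufsize, const char *src)
{
  strncpy(buf, src, bufsize);
  return memchr(buf, 0, bufsize) != NULL;
}
```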

There's also strncmp(), for specifying the maximum amount of characters to compare, which you can use if one of the strings isn't null terminated. Surprisingly, however, there is no strnlen() to check how long a string is without running off the end. Considering that strncpy() doesn't always null terminate, that sounds like a useful feature to have.

Taking this into account, in some C libraries you'll find strnlen(), which returns the length, or the value of n in the case where no null was found. For those needing it, it's an easy function to implement yourself:
```c
size_t strnlen(const char *s, size_t n)
{
  const char *p = (const char *)memchr(s, 0, n);
  return(p ? p-s : n);
}
```

Although it would be wise to follow up every call to "strncpy(s, n)" with "s[n-1] = 0" to terminate it yourself. But this hardly helps the confusion. Also take into account that str[n][cpy/cat]() return the destination as their return value, so you'll sometimes see code like:
```c
if (!strcmp(strncpy(buf, entered_text, sizeof(buf)), param))
{
  do_something();
}
```

However, this code is broken as buf may not be null terminated. A correct version would perhaps be:
```c
if (!strncmp(strncpy(buf, entered_text, sizeof(buf)), param, min(sizeof(param), sizeof(buf))))
{
  do_something();
}
```

Which is ugly at best.

To solve and work around these issues, OpenBSD has invented the strlcpy() and strlcat() functions, which have been implemented in all BSD derivatives (including Solaris and Mac OS X). Manpage here.

I found the standard descriptions confusing at best, though. Here's my take after some study:
```
size_t strlcpy(char *dest, const char *src, size_t size);

Description:
    The strlcpy() function copies the C string pointed to by src (including the null) to
the array pointed to by dest. However, not more than size bytes of src are copied. Meaning
at most size-1 characters will be copied. The copy will always be null terminated, unless
size was 0, in which case nothing is done to dest.

Return Value:
    Return is always the amount of characters needed to hold the copy. Meaning strlen(src).
If the return value is <size, everything was copied.
```

With strlcpy(), you can run it once with a size of 0 if you want to find out how much you need to allocate, though that's pointless; you'd be better off with strlen(). Now this won't help much if src isn't null terminated, but it should avoid the issues you get from misusing the return value, as in the case offered above, or when there wasn't enough room. If you always pass strlcpy() the sizeof() the buffer, or the value passed to malloc() as the case may be, you should be safe.
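Based on the description above, a minimal strlcpy() might look like this (a sketch matching the described semantics, not OpenBSD's actual source; my_strlcpy is a made-up name). The usual safe pattern is to compare the return value against the buffer size to detect truncation:

```c
#include <string.h>

/* Sketch of strlcpy() per the description: copies at most size-1
   characters, always null terminates (unless size is 0), and returns
   strlen(src) so callers can detect truncation. */
size_t my_strlcpy(char *dest, const char *src, size_t size)
{
  size_t srclen = strlen(src);

  if (size)
  {
    size_t n = (srclen >= size) ? size - 1 : srclen;
    memcpy(dest, src, n);
    dest[n] = 0;
  }
  return srclen;
}
```

If my_strlcpy(buf, input, sizeof(buf)) returns a value >= sizeof(buf), the copy was truncated.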

If you read the manpage, you also see a usefulness to the return value. A problem with constantly using strcat() is that you have to keep iterating through the former strings, leading to a speed loss. With strlcpy(), you can do the following for concatenation:

```c
if ((n1 = strlcpy(a, b, sizeof(a))) < sizeof(a))
{
  if ((n2 = strlcpy(a+n1, c, sizeof(a)-n1)) < sizeof(a)-n1)
  {
    if ((n3 = strlcpy(a+n1+n2, d, sizeof(a)-n1-n2)) < sizeof(a)-n1-n2)
    {
      etc...
    }
  }
}
```

A nice trick for mass concatenation, although as the manpage points out, ridiculous, and it negates strlcat().

Moving onwards, the manpage listed above for strlcpy() is also for strlcat(), yet as above, I found it a bit confusing too. Here's my take:

```
size_t strlcat(char *dest, const char *src, size_t size);

Description:
    The strlcat() function appends the src string to the dest string, overwriting the ‘\0’
character at the end of dest, and then adds a terminating ‘\0’ character. However, not more
than size-strlen(dest) bytes of src are copied. Meaning a maximum of size-1 characters will
fill dest in the end. The copy will always be null terminated, unless size was less than the
length of dest, or dest is not null terminated, in which case nothing is done to dest.

Return Value:
    Return is the amount of characters needed to hold the copy when dest initially is null
terminated and its length is less than size. Otherwise the return is size+strlen(src). If
the return value is <size, everything was copied.
```

What's nice about strlcat() is that for the size param, you can pass it the sizeof() or the malloc() value like you do for strlcpy(). But beware the return value, the OpenBSD code is rightly commented as follows: "Returns strlen(src) + min(siz, strlen(initial dst))". Take a moment to comprehend that.
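That return value can be sketched directly from the comment. Here's a minimal strlcat() matching it (again a sketch of the described semantics with a made-up name, not the OpenBSD source):

```c
#include <string.h>

/* Sketch of strlcat(): appends src to dest, never writing more than
   size bytes total, and returns strlen(src) + min(size, strlen(initial dest))
   as the OpenBSD comment describes. */
size_t my_strlcat(char *dest, const char *src, size_t size)
{
  size_t dlen = 0, slen = strlen(src), room;

  while (dlen < size && dest[dlen]) /* i.e. strnlen(dest, size) */
    dlen++;
  if (dlen == size)                 /* dest not terminated within size */
    return size + slen;
  room = size - dlen - 1;           /* space left for src's characters */
  if (room > slen)
    room = slen;
  memcpy(dest + dlen, src, room);
  dest[dlen + room] = 0;
  return dlen + slen;
}
```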

If you're not using a BSD and you want these functions, code is here and here. Be wary of some of the other implementations you find online. I looked at some of them, and they acted differently in some other corner cases. One I looked at even crashed in one of the corner cases.

However, the code there looks a bit messy. Reviewing our previous multiple-concatenation case, which is also discussed in the manpage, one sees these functions as a bit weak. If one wants nice multi-concatenation without too much fuss, they'd normally use snprintf() (C99) with a bunch of "%s%s%s" as the format. I myself prefer a more elegant solution to all of this.

I have therefore created the following logical extension of OpenBSD's l functions. I give you strlmrg():
```c
size_t strlmrg(char *dest, size_t size, ...)
{
  char *s, *end = dest + (size-1);
  size_t needed = 0;
  va_list ap;

  va_start(ap, size);
  while ((s = va_arg(ap, char *)))
  {
    if (s == dest)
    {
      size_t n = strnlen(s, (end+1)-s);
      needed += n;
      dest += n;
    }
    else
    {
      needed += strlen(s);
      if (dest && (dest < end))
      {
        while (*s && (dest < end))
        {
          *dest++ = *s++;
        }
        *dest = 0;
      }
    }
  }
  va_end(ap);
  return(needed);
}
```

Pass strlmrg() the destination buffer, its size (from sizeof() or the param to malloc()), and all the strings you want concatenated, followed by a null pointer.
Example 1:
```c
printf("%zu; %s\n", strlmrg(line, sizeof(line), "I ", "Went ", "To ", "The ", "Park.", NULL), line);
```

It would print: "19; I Went To The Park."
Example 2:
```c
n = strlmrg(buffer, sizeof(buffer), a, b, c, d, e, f, (void *)0);
```

Which would concatenate a through f inside buffer (given that it could fit), and return the amount of characters needed. Since it returns how many characters would be copied, you can use it to determine the size. See this example:
```c
size_t n = strlmrg(0, 0, a, b, c, (void *)0);
char *p = malloc(n+1); //+1 for the null
strlmrg(p, n+1, a, b, c, (void *)0); //Again, +1 for the null
```

When strlmrg() returns less than size, everything was merged in. The result is always null terminated, except when dest is null, size is 0, or one of the source pointers matches the location it is currently trying to copy to.
You should avoid passing a source pointer that points into the destination buffer. If you happen to pass in such an overlapping source pointer, and it isn't null terminated before reaching size, you will get size as the return value instead of the full size. Also don't pass it any non-null-terminated source pointer, or forget the final null pointer.

Once we have strlmrg() implemented, it also paves the way for a simple and straightforward implementation of strlcpy() and strlcat().
```c
size_t strlcpy(char *dest, const char *src, size_t size)
{
  return(strlmrg(dest, size, src, (void *)0));
}

size_t strlcat(char *dest, const char *src, size_t size)
{
  return(strlmrg(dest, size, dest, src, (void *)0));
}
```

And unlike the official ones, these won't crash if dest or src is null. I tested these wrappers, and they seemed to match results with the official ones in every regular and edge case I tried.

I also tested strlmrg() in a variety of cases, and it seems to be very good and secure. If you find a bug, or have an improvement to offer, feel free to post about it.

Thoughts?

## Friday, March 23, 2007

### Are open source libraries written properly?

Every now and then you hear people discussing open source applications and libraries, and how a bug was found. The multitude of bugs being found makes one wonder: are open source libraries being written properly? Are they secure?

One must realize that having the code open makes it easier for someone to find bugs. There can be many bugs in closed source libraries that we simply don't know about. When the source is open, bugs can be found and immediately fixed by anyone. This leaves the question: which one is better?

I'd like to break bugs down into two different types. A Type A bug is where the library is very complex, and one unique circumstance with a certain corner case of data will expose a flaw. A Type B bug is where the author isn't very good at writing libraries, or isn't familiar with the underlying system calls he's working with, and writes the code all wrong. We'll look at two Type B bugs shortly.

Type A bugs can exist in open or closed source libraries, but since they're so hard to pinpoint, they rarely are found. In this instance, open source can be an advantage, as one can go over the code with a fine tooth comb and look for that rare case, or the user of an app may be getting some inexplicable behavior, and looking at the source with the given data can track it down. Open source helps the application developer with getting this kind of problem fixed. In a closed source library, a user can be experiencing an error, and the developer of the application has no way to track it down, leaving a bewildered developer and an annoyed user.

Type B bugs can exist in both as well. However, a closed source library which costs money normally doesn't last long if it has stupid bugs in it, so it's generally a bit rarer to find this kind of bug in a successful, established closed source library. Regarding open source, however, if the library isn't popular enough that various groups are trying to attack it, these bugs can go unnoticed, and therefore unfixed, for a long time. Perhaps the mentality when seeing such an obvious bug is to think: "it's open source, this is probably correct, as many other people have reviewed it already", which is a very bad conclusion.

While looking at some shared library code that you can find in GNU libraries used by the latest versions of coreutils, tar, and others, I found some Type B bugs.

I was using Google Code Search the other day to see how various groups implemented certain functions. While looking up three different functions, I found two of them to have bugs.

The first was in "lib/save-cwd.c", where I found this:
```c
  cwd->desc = open (".", O_RDONLY);
  if (cwd->desc < 0)
    {
      cwd->desc = open (".", O_WRONLY);
      if (cwd->desc < 0)
```

This bit of code first tries to open the current directory in read-only mode and assign the handle to "cwd->desc"; if that fails, it retries in write-only mode, and if that fails, it does something else.
Now I have to wonder if the people who wrote and reviewed this file have a clue what they're doing. It isn't possible to open a directory in any mode but read only. The manual for open() clearly lists the following error: "EISDIR - pathname refers to a directory and the access requested involved writing (that is, O_WRONLY or O_RDWR is set).", meaning that if any kind of write mode is specified when trying to open a directory, the call will fail.
There is no reason in the world any developer should ever try to open a path they know is a directory with open() and any kind of write mode. What's worse is that they try it in an error handling path, and on top of that have an if which acts like it may succeed.
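The failure is trivial to confirm on a POSIX system. This sketch (try_open_dir_for_writing is a made-up name) returns the errno from attempting exactly what save-cwd.c retries:

```c
#include <errno.h>
#include <fcntl.h>
#include <unistd.h>

/* Attempts to open a directory for writing and returns the resulting
   errno (0 if it unexpectedly succeeds). POSIX mandates EISDIR when
   O_WRONLY or O_RDWR is used on a directory, so the retry in
   save-cwd.c can never work. */
int try_open_dir_for_writing(const char *path)
{
  int fd = open(path, O_WRONLY);

  if (fd >= 0)
  {
    close(fd);
    return 0;
  }
  return errno;
}
```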

The other bit of code I found was in "lib/atexit.c", here it is:
```c
int
atexit (void (*f) (void))
{
  /* If the system doesn't provide a definition for atexit, use on_exit
     if the system provides that.  */
  on_exit (f, 0);
  return 0;
}
```

For some background, here are the details for atexit():

The atexit() function registers the given function to be called at normal process termination, either via exit(3) or via return from the program’s main(). Functions so registered are called in the reverse order of their registration; no arguments are passed. The atexit() function returns the value 0 if successful; otherwise it returns a non-zero value.

And on_exit():

The on_exit() function registers the given function to be called at normal process termination, whether via exit(3) or via return from the program’s main(). The function is passed the argument to exit(3) and the arg argument from on_exit(). The on_exit() function returns the value 0 if successful; otherwise it returns a non-zero value.

The functions look pretty similar, the only difference being that on_exit() has an extra parameter. Ignoring that extra parameter, shouldn't one be a pure wrapper around the other? Why does the code for atexit() call on_exit(), then always return success, even when on_exit() failed? A more appropriate implementation would be:
```c
int atexit (void (*f) (void))
{
  return on_exit (f, 0);
}
```

Is that really so hard to write properly?

Seeing rampant stupidity and bad code in these GNU libraries, you know something over there isn't going too well. I found these issues reviewing 3 functions in less than 5 minutes. I'd suggest avoiding GNU libraries like the plague.

If you're going to be using an open source library for a simple operation, I highly recommend reviewing the code yourself to make sure the original developers had a clue what they were doing.

## Tuesday, March 20, 2007

### File dialogs

The file dialogs we use on a day to day basis have changed significantly over the years. Sometimes for the better, sometimes for the worse.

Back in the dark ages of operating systems, we have the old Windows 3 file open dialog:

It might be old and considered outdated, but it was quite elegant. You had a drive selector, a directory selector, the file selector, a filter for files, and a box to type in the name of the file you wanted for quick access. It was all very nice, the only flaws being seemingly bad organization, and lacking some features the later ones added.

Windows 4 came along and offered a major reorganization with several new features:

Here all the drives, directories, and files were combined into one pane. A virtual parent called My Computer (renamable) was created to house all the drives, so it could all be dealt with in a uniform manner, as drives themselves were also just logical subdirectories. A drop down tree was added to easily jump back anywhere to one of the parent directories. Minor file management could be done here, such as renaming a file/directory, or creating a new one for whatever you wanted to save. You could also list files in the multi line scrolling list, or select a detailed view to see files with information such as dates and sizes, in case one of these would help you remember which file it was you were looking for. You also got icon support for viewing types and displaying executables.

But my favorite addition to all of this was the file name box. You could quickly type in which file/directory you were looking for, as opposed to just the filename like in Windows 3. But the best part was you could enter a path! If you knew the path you wanted to jump to, it was often much quicker for those of us who know how to type to just enter the location manually than to spend time navigating with point and click and waiting for directories to load. You could type in relative or absolute paths, or even type in the full path of the file you wanted and have it work instantly with no time wasted. I absolutely loved it.

Then Windows 5 came along, and they offered some additions, nothing too different, but changes nonetheless:

It basically was the same file dialog from Windows 4, but they added a quick directory pane to the left side. Now one could easily jump to popular locations without having to do much more than a single click. Now I'm a bit sketchy on this detail, someone correct me if I'm wrong, but I recall you couldn't add or remove which quick location buttons appeared on the left, unless you installed Tweak UI, which makes the feature a bit useless by default.

Another highly annoying thing I found was the entire virtual directory setup. First there's My Documents. What exactly should one be storing in My Documents? When I first noticed that in Windows 98, I figured that's where various text files, papers, spreadsheets, and presentations might go. Should I stick source code to my app in there? Should I have Command & Conquer 3 store its save files there? If I was ripping a DVD would it go there? Should my virus scanner save log files there?

Now if I'm in My Documents, and I press directory up, instead of going to my home directory, I reach My Desktop. What's the logic behind this? And don't try to go up from My Desktop either, you won't go anywhere. Your home directory compared to UNIX also seems completely illogical. What does one put in their home directory? A casual glance at it and you see My Documents, internet browsing related directories, and hidden application settings. Would I put my personal applications here? What about stuff I want to share with all users of the computer, say pictures of my kids? Now I personally would set aside a whole drive for things like pictures of the kids and have it all organized neatly, and then have it symlinked to from the home directory of each family member. Yet Windows seems to not really have any provisions for this. Symlinks are non existent, and the shortcut system is a joke, which just constantly acts weird when you try to set a link to another drive or directory and you scratch your head wondering why did a new instance of Windows Explorer just launch?

I'm told Vista got some of this user stuff improved, but I've yet to see it, as I don't feel like shelling out several hundred for something I've already spent a considerable amount on for previous versions.

I'd appreciate if users of Windows 6 (Vista) could write in and tell me if we actually have sane home management, and that the virtual directory nonsense has been sanitized. Also would be nice to know if one can easily add or remove quick locations this time around.

Oh and before I finish on this one, Windows 5 also allowed thumbnail viewing of multimedia files in the file dialogs, in addition to other useful views added.

Now lets take a look at some of the UNIX counterparts.

First off we have the despicable GTK/GNOME file dialog. Sorry for those of you that now have to go poke their eyes out from seeing this, but it has to be shown:

Now, while it looks pretty much the same now as it did, say, 3 years back, it has had some changes since then.
It displays a directory/file browser quite clearly, along with the date of everything, in a simple scroll-down view. Don't bother looking for any way to change the view or to get additional details such as file size to show up. I assume they think they make up for it by allowing you to reverse the alphabetical sorting, or to sort by date. Although, like other open dialogs, they got the file type filter right. Now, in the style of a group which likes to copy everything Microsoft does with Windows, while putting a new spin on it and claiming their goal is to do everything differently than Windows, they copied the quick locations browser on the left, right down to a completely useless irremovable desktop quick location. Although it's nice to see they included home, and surprisingly enough they have a section where you can easily add or remove your own additions to the quick locations (but not remove the built-in ones).
One thing which might be an improvement is the crumb browsing on top. Unlike Windows 4+, where they offered a drop down of the directory tree, you now see each directory component as a button by itself, and you can immediately click on the one you want to jump to. This mechanism also replaces the traditional up button, which is no longer needed with this interface. A nice idea indeed, which I think might make life easier for more inexperienced users, and perhaps get them more familiar with what a full path is.

Although those of you who looked closely at that file dialog must be scratching your heads, wondering: where is the box to type in the file quickly or to change paths? In earlier versions of GTK, it simply didn't exist, even though everyone who used a file dialog from the past decade had access to one. After enough pressure from users demanding one, they finally added it, albeit completely hidden; it doesn't appear till you start typing something. Heaven forbid a new user should see an input box; it's of course much better to have a user think this is an old version of GTK where they can't navigate quickly </sarcasm>.

Speaking of navigating quickly, for some reason this utter disgrace also takes forever to display any directory with many files in it. But it doesn't just stop there. When they finally got around to secretly adding the quick navigation field, they decided to put an auto complete feature in. Sounds great, right? Wrong! I'm in Firefox and want to tell it to open the file I just downloaded with KWrite. I go to input "/usr/bin/kwrite", after I type in "/us", it finished scanning for matches to "/u" as "/usr" and replaces it, leaving me with "/usr/s", forcing me to backspace in middle of typing, and then go through the same nonsense again with the "bin" component. But it hardly stops there, I've seen this thing freeze before when trying to auto complete whatever I was typing for a good 10 seconds or more, which is completely unacceptable. But then just when I got "/usr/bin/kwrite" entered and am hoping it's now going to launch KWrite for me, it instead freezes for 20 seconds loading up "/usr/bin" which contains ~2000 files (due to UNIX being made up of many small applications, and most executables being stored in one location), and just showing me the kwrite entry highlighted, where I further have to go ahead and click okay. Why the heck is this thing so freaking slow? And why the heck didn't it just load KWrite once it saw it was an absolute path to a file?

And if you haven't guessed it yet, no, this garbage can't do any kind of file management from its file dialog.

After having to put up with this annoying broken piece of trash, I'm trying to find a replacement to every GTK application I use in UNIX where I might have to use a file dialog for some reason. If you know of a way to get Firefox plugins working in Konqueror, or of a GAIM replacement with a similar interface, drop me a line.

GTK/GNOME is just downright awful, and don't just take my word for it, even Linus Torvalds says so.

Next we have the Qt4 file dialog. Qt wraps whatever the native file dialog is on that system, so on Windows you'll see whatever Windows does for that version, and likewise on Mac OS X. For Linux, *BSD, Solaris, or any other UNIX, since there is no native GUI, it creates its own. Let's take a look:

It looks pretty much like what you get from Windows 4. Everything Windows 4 can do, this can too. There really is only one difference (which I personally like a lot): the drop down on top, instead of showing just the name of the current directory and offering a way to go up to previous components, displays the full path every time, and allows it to be edited! I enjoy this a lot, as I can easily type in where I want to go, with it being obvious where to put this data. But it goes a step beyond anything we've looked at till now, for it has good auto complete! If it finds a match to what you were typing, it displays the match, and anything extra you enter gets properly inserted into the match replacement string. I think this is the file dialog that all others should be judged against.

Next we'll look at the modern KDE 3.5 file dialog. Although KDE 3.5 is based on Qt3, and KDE 4 will be based on Qt4, they reimplemented everything to do with files. They did this not just to improve on Qt's file dialogs, but because KDE can transparently work with files across all kinds of network protocols, which Qt, or for that matter other APIs, don't. Let's have a look:

Looking at this, it seems to be pretty much the same as Qt4, except it inherited Windows 5's location buttons on the left. Now while I don't care for some of the default buttons such as Desktop, everything here is fully editable - as it should be! One can right click on any of those locations to edit an entry, delete an entry, or change its icon. One is free to remove any of the default ones if they don't like them or feel they're useless. To add a new one, you can either right click, and select name, path, and icon, or you can just drag a directory from the browsing pane right into the location pane!

What's more, this thing is super configurable. You can click on that wrench icon to change the options as you want. If you don't want the quick location pane, you can easily turn it off. If you prefer to have directories and files split into two separate panes like Windows 3, you can do that too. You can also select regular or detailed view. Even in the regular view, you can tell it how you want things sorted from the wrench drop down. If you want multimedia files displayed as thumbnails as you browse, like Windows 5, that's a configurable option as well. Yet it goes above and beyond to combine fast browsing and thumbnails: you can tell it to show a thumbnail box on the right, which will only show a thumbnail for the selected file, so you can easily preview without slowing down browsing by generating all those thumbnails. Thumbnails also go beyond multimedia files; for text files, it will display the first few lines of text.

Now regarding the path entry on top, you can enter any path you like. Like Qt4, it offers very good auto complete, yet goes even a step beyond. When entering a path, it also tries to auto complete, but the drop down displays all matching paths (see image above). So you can easily press down to select the first one and continue typing, or you can select one of the other ones too. And as mentioned earlier, this works with all kinds of protocols. If you wanted to open a remote file from some website, you can easily enter http:// and the URL, or you can browse some FTP site with ftp://, entering a user name and password when prompted if need be. It also works for Windows networks, for browsing any system you can SSH to, or for any other protocol you can think of, provided that KDE's I/O library supports it.

The only missing feature here is that you can't rename files from the file dialog like you can in Windows 4+ and Qt4. I always found that feature useful if you wanted to save a file as the same name as something existing, but wanted that old file to be backed up. Perhaps some KDE developers have been spending too much time with GNOME/GTK devs.

Speaking of which, I've been previewing some stuff for KDE 4. Now while I have no idea what the final product will look like, some changes as they currently stand are a bit disturbing. The KDE developers said they were changing the system to use the Dolphin interface, which seems to be inspired by GNOME/GTK. They took a leaf out of their book and are providing a crumb based path above, so you can jump around like you can in GNOME/GTK's file browser. They mention they want to improve it by making each crumb a drop down, which would allow one to jump to sibling directories, although that wasn't in the build I was testing. The path editing I love is also there, although you need to click a button up top to switch to it. If they don't allow you to select the default method, or perhaps always display both, I will be quite upset. However, their crumb browser seems to have picked up stupidity from Windows as well: it adopted the whole virtual directory idea that Windows has. So say I'm in my home directory and want to go up one so I can select my spouse's home directory - no dice. One can't select a crumb before their home directory, as nothing exists above it when you jump to home. I don't know why KDE devs are adopting stupid GNOME ideas, or taking a step backwards to design mistakes and oddities from Windows, but I sure hope someone knocks some sense into them soon.

If I wanted to design a good crumb based editor, I think I would merge the various ideas. Have the kind of input box we're used to, but make it that each slash turns into a button which you can use to delete the path components after it, or drop down with a list of sibling directories.

Finally, let us take a look at what Trolltech has in store for us next. They are redesigning the dialog for Qt 4.3, and one can download a development snapshot and play with the new file dialog. Although based on what I've heard, they don't plan on changing it much from what they have in their repository at the moment. Here it is, direct from my personal compile of Qt 4.3's repository as of yesterday:

I don't know what they did. Perhaps Trolltech hired some bozos who work on GTK to come up with this. They seem to have more or less taken from KDE 3.5 the quick location pane on the left. It has sane defaults, and you can remove what is there, or add by dragging from the main browsing pane. Yet no editing of any sort, or adding by typing in a path is available. Your changes don't seem to be saved in any way either from one run to the next, making customizing it pointless. I don't know why they didn't just replicate what KDE 3.5 did here, as they had it perfect.

For your browsing, details view now seems to be the default, although you can change to the old default of list view. Now in detailed view, the only thing you see is file name and date, just like in GTK. The copying of stupidity is uncanny. It seems they removed features and changed defaults to make it resemble GTK more, for some absurd reason. Thankfully though you can rename and delete files here, but surprisingly enough, there seems to be no way to create a new directory.

And those of you who have been paying attention will of course wonder: where is the path editing box? Yet again it seems they copied GTK and hid it by default; to reach it you have to click on the browsing pane and then start typing. Not at all intuitive, and sadly, it seems to be copying a bad Qt knock off. At least the path editor has the improved auto complete seen in KDE 3.5.

I don't know what's becoming of KDE and Trolltech these days, they seem to be taking the bad from GTK/GNOME and throwing away their own good technology.
But that file open dialog from Qt 4.3 is really freaking me out. I can't even begin to describe what a major step backwards it is. What happened to the sanity? Where's the intelligence? Where's all the good stuff? Why am I looking at garbage from a lesser API, in the best cross platform one available?!? If they wanted to improve it, they should be taking what they can from KDE 3.5. Someone needs to smack somebody at Trolltech - hard.

If anyone has any more details, or knows of planned changes, please post about it in the comments. If I get more details, perhaps I'll do a part 2 in the future.

## Sunday, March 18, 2007

### Cool unknown libraries - Part 1

Every now and then someone asks me if there are any little unknown libraries to get some cool functionality.

So today I'd like to showcase ManyMouse.

This library, written in C, tries to do something new: provide a portable, easy to use API that allows an app to work with multiple mice independently. It includes a nice demo game, a four player pong clone with each player controlled by a different mouse.

I find this quite nice if you want to implement certain games. I tried it with a target shooting game, holding a gun (mouse) in each hand - literally. Currently it's implemented in ZSNES, a Super Nintendo emulator, for use with games like Arkanoid or Terminator which allowed a mouse for each player to be plugged in on the original system.

More practical uses for it might be in VMWare, VirtualPC, QEmu, DOSBox, or the like, where one mouse could be bound to the application window (client OS) and another to be used within the host OS, so as not to have the two interfere. I currently find it annoying how you're using one of these apps, and find that it steals your mouse focus, and have no idea how to break out of it without shutting down the app (or at least that particular app within the client OS).

The API is also very nice and easy to use; I even find it easier than the APIs certain OSs provide for accessing mice. During each frame of your game, or whenever you poll for input in your app, you ask ManyMouse for the actions that occurred since the last poll, and it returns a series of events telling you which mouse moved, what buttons were pressed, and so on.
It can also tell you when a mouse was unplugged, and give you the internal name of each device.
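To give a feel for it, here's a rough, pseudocode-style sketch of such a polling loop, based on my reading of the ManyMouse header (the ManyMouse_* names come from manymouse.h; game_running, move_cursor, handle_button, and drop_player are made-up stand-ins for your app's own code, and the details should be treated as approximate):

```
#include "manymouse.h"

int mice = ManyMouse_Init();            /* returns the number of mice found */
for (int i = 0; i < mice; i++)
    printf("Mouse %d: %s\n", i, ManyMouse_DeviceName(i));

ManyMouseEvent event;
while (game_running)                    /* your app's main loop */
{
    while (ManyMouse_PollEvent(&event)) /* drain events since the last poll */
    {
        if (event.type == MANYMOUSE_EVENT_RELMOTION)
            move_cursor(event.device, event.item, event.value);
        else if (event.type == MANYMOUSE_EVENT_BUTTON)
            handle_button(event.device, event.item, event.value);
        else if (event.type == MANYMOUSE_EVENT_DISCONNECT)
            drop_player(event.device);
    }
    /* ... render a frame ... */
}
ManyMouse_Quit();
```

Each event carries the device index it came from, so tracking each mouse independently is as simple as indexing an array of players.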

The only current drawback to ManyMouse is its limited OS support. As of this writing it supports Windows NT 5.1+ (XP, 2003, Vista), Linux 2.4+ with evdev and read permissions to /dev/input/event* enabled, and Mac OS X. This leaves the various BSDs, Solaris, and some other less popular OSs unsupported. The Linux requirements may also mean a user has to get their administrator to install the evdev kernel interface or enable read permissions on /dev/input/event*, as not all distros do that by default.

The author is currently working on getting it to work with X, which when completed should support every modern UNIX and remove the annoying setup requirements from Linux (under X). Someone also contributed Java bindings, and it's now supposed to work with Java on any OS. If you have a cool app that could make use of independent support for multiple mice, give ManyMouse a whirl, or give it a shot if your native mouse reading API seems a bit scary. If you're able to, also consider giving the author a hand: help finish up X support, or contribute functionality for another OS.

Some other time we'll have a look at some other cool unknown libraries.

## Saturday, March 17, 2007

### What to do about gets()

A friend of mine recently asked me if I was to make my own C library, how would I implement gets()?
An interesting question indeed.

When I first learned C quite a few years back, I was reading a book to find out what I/O functions were available. I saw gets() and it looked like the function to use to receive a line of input, but looking at the prototype:
`char *gets(char *s);`

I was wondering where one specified the length of the buffer s. Looking further in the chapter I found the usable function fgets() which seemed fine, except that I had to specify to it to use stdin, which seemed unnecessary, but no big deal.

Some time after that I was reading about how a worm spread around the world by buffer overflowing gets() in the UNIX finger daemon, and my first impression was: "Who in their right mind would use gets()?".

More recently, when I was teaching my C class at the local university, we reached the point where I felt I would discuss various forms of user I/O that day. I figured after explaining a couple of things, I would teach puts(), fputs(), gets(), and fgets(), and explain why to never use gets(). As soon as I wrote the gets() prototype on the board and said it was used for input, I was immediately bombarded by two students shouting out: "But how does one pass the buffer length?".

So I went on to explain that they were right, told them about the old finger worm, and how it was obvious to me too when I first saw gets(). Yet the class wondered: who could blunder so badly? And indeed, I continue to wonder why it isn't obvious to some people that the function is broken. Even students just learning how to program realized it immediately.

Thankfully, the manpage for gets() today shows:

Never use gets(). Because it is impossible to tell without knowing the data in advance how many characters gets() will read, and because gets() will continue to store characters past the end of the buffer, it is extremely dangerous to use. It has been used to break computer security. Use fgets() instead.

Interestingly though, when the C99 standard came out, they decided to leave gets() in, reasoning that it was already there, or that it's okay to use for a test app, or some such nonsense. Yet if I were to code my own library, I wouldn't want to break the standard either. So I wouldn't remove it, yet I wouldn't alter the prototype to take a size parameter either. So what would I do?

Then the solution to the problem hit me. The return value is specified as so:

gets() returns s on success, and NULL on error or when end of file occurs while no characters have been read.

So now I proudly present to you my implementation of gets():
```c
char *gets(char *s)
{
  return NULL;
}
```

Very simply, it just always returns an error, so it's standards compliant AND secure.

Of course someone is going to wonder: what about existing programs that don't check the return value? Those I must simply regard as broken, both for using gets(), and for using a function that can fail without ever checking for it.

Now this may be annoying at first to those beginners who use gets() and wonder why it's always failing, but I figure this is a good way to exorcise a broken function out of people's systems, and to teach beginners to think clearly when choosing the building blocks they use.

## Thursday, March 15, 2007

### File Descriptors and why we can't use them

Those of you who have been programming for a UNIX based OS for a while have surely felt the power of file descriptors. One prevalent problem in programming is the TOCTOU (Time Of Check Time Of Use) race condition. Having a file descriptor to a file allows one to check or alter its status without worrying that something changed in the meantime.

Take the following example:
```c
FILE *fp = fopen("myfile.txt", "r+b");
if (fp)
{
  if (verify_file_is_my_type(fp))
  {
    truncate("myfile.txt", 0); //Erase contents of file, change its size to 0
  }
  fclose(fp);
}
```

What if after I opened the file, while it was doing its lengthy check, the file was renamed, and a different file was renamed into its place? I would be truncating the wrong file.

To solve problems like these, we can truncate the file directly via the file descriptor:
```c
FILE *fp = fopen("myfile.txt", "r+b");
if (fp)
{
  if (verify_file_is_my_type(fp))
  {
    ftruncate(fileno(fp), 0); //Erase contents of file, change its size to 0
  }
  fclose(fp);
}
```

With this method there is no TOCTOU race condition. Having a whole slew of file descriptor based functions such as fstat() and fchmod() for dealing with checking and setting permissions greatly alleviate security issues.
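As a small illustration of that (the function name and the particular policy are my own), here's how one might drop group/world write permission on a file that's already open, with no window in which the file could be swapped out underneath us:

```c
#include <sys/stat.h>

/* Sketch: remove group/world write bits through the descriptor itself.
   Since we fstat() and fchmod() the same open file, no rename trick
   can redirect us to a different file in between. Returns 0 or -1. */
int drop_world_write(int fd)
{
  struct stat sb;
  if (fstat(fd, &sb) == -1)
    return -1;
  if (sb.st_mode & (S_IWGRP | S_IWOTH))
    return fchmod(fd, sb.st_mode & ~(S_IWGRP | S_IWOTH));
  return 0; /* already fine, nothing to do */
}
```

Contrast this with a stat()/chmod() pair on a path, where the file could be replaced between the two calls.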

In certain cases, file descriptors can also seem to make certain operations easier. Say you're working with zlib and want to work with a gzip file but need to know how big the uncompressed file will be. Unfortunately, zlib doesn't provide a way to get this data, even though the data is quite simply the last 4 bytes in the file. One could open the file, read the last 4 bytes, close it, then open it with gzopen(), or one could take advantage of gzdopen() like so:
```c
uint32_t gzfile_size(const char *filename)
{
  uint32_t size = 0;
  FILE *fp = fopen(filename, "rb");
  if (fp)
  {
    uint32_t fsize, gzsize;
    gzFile gfp;

    fseek(fp, -4, SEEK_END); //Seek to the uncompressed size info
    gzsize = fread4(fp); //The gzip size is read in using fread4(),
                         //a custom function to read 4 bytes in little endian
    fsize = ftell(fp); //At this point we can also get the external file size
    rewind(fp); //Reset so zlib will start reading from the right place

    //Open GZip file for decompression, use existing file handle
    if ((gfp = gzdopen(fileno(fp), "rb"))) //Notice how we made use of gzdopen() and an fd
    {
      size = gzdirect(gfp) ? fsize : gzsize; //Since zlib can open non
                                             //gzip'd files transparently, check for it
      gzclose(gfp); //Note: gzclose() also closes the underlying descriptor
    }
    fclose(fp);
  }
  return size;
}
```

The above function can easily be modified to be more useful and actually read the file into a buffer with gzread(), but this demonstrates how the file doesn't have to be closed and reopened to get its uncompressed size, regardless of whether it's a gzip file or not.

A related problem is what happens if directories change while you're working with a particular path. Say for example the path to my program's settings files was given as:
"/etc/program-a/" and in there I'd be dealing with files such as "net.conf", "sound.conf", and "ui.conf".
I could have my program strcpy()+strcat() or sprintf() the 3 needed files into a buffer large enough to hold any of those strings, and use that buffer as needed to access any of those files. Alternatively, I could chdir() to "/etc/program-a/" and access the files directly, as they are now part of the current directory.

The first method would be a problem if, after I received "/etc/program-a/", it was renamed to something else, thus pointing me at the wrong file. It would also slow down the program a bit if, for every file I wanted to access in a path, it had to do many string operations.

The second method wouldn't be a problem if a path component was renamed, as UNIX based systems today can handle maintaining the representation of the CWD properly, nor would it need any relatively slow string operations to be performed. However having many instances of chdir() in a program could lead to maintenance hell, and would also be problematic or downright impossible to stay correct and secure when threads are involved.

To solve this series of problems, Solaris invented openat() and a whole family of *at() functions. Linux 2.6.16+ also now contains this nice family, and these functions are proposed for inclusion in a future revision of the POSIX standard.
In the above case, we'd do as follows:
`int dirfd = open("/etc/program-a/", O_RDONLY);`

Then to access any file:
```c
int fd = openat(dirfd, "net.conf", O_RDWR);
int fd2 = openat(dirfd, "sound.conf", O_RDWR);
```

Then you can read()/write() on fd and fd2, or promote to a FILE * like so:
`FILE *fp = fdopen(fd, "r+");`

Once we have dirfd from opening the directory in question, we never have to come up with any strings or change directories to interact with a file. If we wanted to stat a file, we could do:
```c
struct stat sb;
fstatat(dirfd, "ui.conf", &sb, 0);
```

Or:
`fstat(fd3, &sb); //fd3 acquired from openat() on dirfd and "ui.conf"`

One can also get a directory file descriptor off of a DIR * created with opendir() by using dirfd(), allowing one to walk the contents of a directory and directly stat each file with fstatat() or the like, without any annoying or unsafe string manipulation.

Once looking at *at(), I was able to rewrite some code I had which dealt with files all over the place and make it more secure, plus significantly faster as I was able to drop many string manipulating function calls.

Reading all this, one must be thinking to themselves one of two things. Either, wow this sounds great, I should look more into file descriptors, I hope all my supposedly secure apps use file descriptors and don't have any TOCTOU race conditions. Or, okay great I know all this already, what's your point?

However file descriptors have a general flaw, that being that THE FILE MUST BE OPENABLE. Say for example you have a file with the permissions of 000, you can run stat() or chmod() on it, but you can't alter the permissions with fchmod()! Now this might not seem so bad, but say you wanted to make the file writable then write something to it? Or worse, you want one single code base to do some operations in order not to commit the sin of code repetition, but you're faced with either using the safer file descriptor based function which doesn't always work, or the more dangerous file name based function which will work even if you can't open the file.

The problem gets even worse when one considers the new *at() functions Solaris and Linux added. In my case above, say my directory had permissions of 111 (--x--x--x). I can chdir() to it, or access files via the full path. But I can't call open() on "/etc/program-a/", as open() on a directory only works if the directory has read permissions. If my program allowed one to pass a path to it to tell it where the config files were located, I would need two separate code paths: one the fast, secure method using openat() and friends, and another calling open() and friends on the full path, or chdir()ing there and then calling open() on the file names directly.

Once this fatal flaw is realized, I see two possible solutions, one involving 3 new functions, or another involving the modification of 2 existing functions.

The first method would introduce the following:
```c
int pathfd(const char *pathname);
int pathfdat(int dirfd, const char *pathname);
int openfd(int fd, int flags);
int openfd(int fd, int flags, mode_t mode);
```

pathfd() would allow one to get a file descriptor to a file as long as the directories leading up to it were all marked execute, and to get a directory file descriptor if everything including the directory itself was marked execute. This would let me access any file with the information functions, or get a directory file descriptor to use with *at().

pathfdat() would allow one to get a file descriptor based off of another file descriptor in cases where using openat() would fail because there wasn't sufficient permission.

openfd() would allow one to promote an info descriptor into one that can be read from or written to. After I pathfd() and fchmod() a file to be writable, I can then promote it with openfd() so I can write to it, all without the TOCTOU worries of a chmod() followed by an open().
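To make the intent concrete, here's how the proposed calls might be used together. This is pure pseudocode against an API that does not exist; the file name is just the ui.conf example from above:

```
/* Hypothetical usage of the proposed API - none of this exists yet */
int fd = pathfd("/etc/program-a/ui.conf"); /* works even on a mode 000 file */
fchmod(fd, 0644);                          /* make it readable and writable */
int wfd = openfd(fd, O_RDWR);              /* promote the info descriptor */
/* ... read()/write() on wfd, with no TOCTOU window anywhere ... */
```

Every step operates on the same underlying file, so nothing can be swapped out between the permission change and the write.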

An alternative method would be to enhance the existing open() and fcntl() calls. open() and openat() could get a new flag, perhaps labeled O_INFO or O_NONE, which would only need the same permissions as described above for pathfd() and pathfdat().

For promotion, fcntl() with the F_DUPFD command should be allowed to take O_RDONLY, O_WRONLY, or O_RDWR as an argument, so one could promote or demote a descriptor as needed.

Thoughts? Feel free to tell me if you agree or disagree, or why we have to be so limited.
If you agree with my ideas, any chance we can get the ball rolling with getting an OS or C/POSIX library group to implement some of these?

## Tuesday, March 13, 2007

### Applications and the Difficulties of Portability?

I'm a software developer who writes a lot of freeware utilities in C/C++ which are all cross platform and work well. Lately some of my users have been pestering me to stop wasting precious development time supporting minority OSs like Linux, and get more work done for the majority — the Windows users. Now many of my utilities are simple tools that perform various operations on files such as compression or rearranging. I've also made a few frontends for them using the excellent Qt library to allow the user to select a file and process using a simple GUI. In dozens of applications I wrote, most of them several thousand lines long, I haven't written a single conditional for any particular OS. When I release, I just compile each app for all the OSs I have access to and post them on my website. I barely expend any effort at all to achieve portability. So the question I have to ask is: Why do the masses perceive portability as something that requires effort and a waste of time?

Most applications don't do anything fancy or need to talk to devices, and therefore there is no need to do anything special other than compile them on a particular OS to run on that OS. So why are there so many simple apps using native APIs to do simple things like file reading instead of the standard ones? Why are we projecting an image that one must go out of their way or switch to a different language in order to achieve portability?
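To illustrate how little is needed, here's a sketch of whole-file reading (slurp is a made-up name) using nothing but standard C, which compiles unchanged on Windows, Linux, Mac OS X, the BSDs, and anything else with a hosted C implementation:

```c
#include <stdio.h>
#include <stdlib.h>

/* Sketch: read an entire file into a malloc'd buffer using only
   standard C - no OS-specific calls needed. (slurp is a made-up name.)
   Returns the buffer and stores its length in *len, or returns NULL. */
char *slurp(const char *name, long *len)
{
  char *buf = NULL;
  FILE *fp = fopen(name, "rb");
  if (!fp)
    return NULL;
  if (fseek(fp, 0, SEEK_END) == 0 && (*len = ftell(fp)) >= 0)
  {
    rewind(fp);
    buf = malloc((size_t)*len + 1); /* +1 so even empty files get a buffer */
    if (buf && fread(buf, 1, (size_t)*len, fp) != (size_t)*len)
    {
      free(buf);
      buf = NULL;
    }
  }
  fclose(fp);
  return buf;
}
```

Nothing here is Windows-specific or UNIX-specific; that's the whole point.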