Sunday, April 8, 2007


Can you trust your security application?



Let me begin with a nice mathematical problem.

Say you have a divorced couple who are currently discussing over the phone how to divide up their possessions. They pretty much split everything equally, until they reach their car. They know they won't get nearly as much as it's worth by selling it, so they'd prefer that one of them just keep it, but they can't agree on which one. They don't want to meet and they don't want to get another party involved. How can they determine which one should get the car?

They could flip a coin on it, but since this is over the phone, both parties can't see the coin and agree on the outcome. Ideally one side would decide what heads and tails mean, and the other would say whether it came up heads or tails, but they need a secure method to transfer these secrets to each other, without either side fearing some sort of compromise.

This problem is closely related to what today's public/private key secure communication technology is based upon, and a solution has been worked out.

One party takes two very large prime numbers, multiplies them, and transfers the product to the other party. The first party keeps the two factors secret; the other party has no reasonable way to determine the factors without months of computation. The first party then informs the second that once the coin is flipped, they will take the larger factor and look at its fifth digit from a particular end. If that digit is odd, it means the first party chose heads and the other tails; if it is even, the winning conditions are reversed. Now the second party can flip the coin and report the result back to the first party.

At this point, everything needed to determine the winner has been transferred securely; now the secret can be revealed to decode who the winner was. The first party tells the second what the two factors were, and they will have reached an agreement on who keeps the car.
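
To make the mechanics concrete, here's a rough sketch in C of the scheme described above. The primes are toy-sized so everything fits in a machine word; a real commitment needs primes hundreds of digits long (and a bignum library), and all the names here are just mine for illustration.

#include <stdio.h>

/* 5th digit from the right of a number. */
static int fifth_digit(unsigned long long v)
{
    int i;
    for (i = 0; i < 4; i++)
        v /= 10;
    return (int)(v % 10);
}

int main(void)
{
    /* Party A's secret primes -- toy-sized, for illustration only. */
    unsigned long long p = 104729, q = 1299709;
    unsigned long long n = p * q;       /* the commitment A reads to B over the phone */

    /* B flips the coin and reports what came up. */
    int coin_is_heads = 1;

    /* Reveal: A tells B the factors; B checks them and decodes the call. */
    int commitment_ok = (p * q == n);
    unsigned long long larger = (p > q) ? p : q;
    int a_called_heads = (fifth_digit(larger) % 2 == 1);  /* odd digit = A called heads */

    printf("commitment valid: %d, A wins the car: %d\n",
           commitment_ok, a_called_heads == coin_is_heads);
    return 0;
}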

Now provided that the duration of this transaction (the phone call) was shorter than the time it would take the second party to factor the number, and everything was done in the proper order, this entire exchange should be 100% secure. However, what if the second party happened to know that number already, and had the factors on hand? That party could cheat and tamper with the exchange of information, rendering the transaction insecure.

Now when we look at secure communication technology today, it generally has each side come up with its own variation of the prime number example above, and each side sends the other a "public key" to match its own secret "private key". The private key can't be easily derived from the public key, but one will decode data encrypted with the other. Then when transferring data, each side encodes and decodes with its private key and the other's public key. An attacker can't jump in the middle, because they would need to get a private key, which is never distributed, in order to decode the traffic or impersonate one of the parties.
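
For a feel of how one key decodes what the other encodes, here is a textbook-sized sketch in C. The numbers (n = 61 * 53, e = 17, d = 2753) are the classic toy example and are laughably small; real keys, padding, and protocol machinery involve far more than this sketch shows.

#include <stdio.h>

/* Square-and-multiply modular exponentiation, small numbers only. */
static unsigned long long modpow(unsigned long long b, unsigned long long e,
                                 unsigned long long m)
{
    unsigned long long r = 1;
    b %= m;
    while (e) {
        if (e & 1)
            r = r * b % m;
        b = b * b % m;
        e >>= 1;
    }
    return r;
}

int main(void)
{
    unsigned long long n = 3233, e = 17, d = 2753; /* toy public (n,e) and private (d) */

    unsigned long long msg = 65;                   /* the secret being sent */
    unsigned long long c   = modpow(msg, e, n);    /* encrypted with the public key */
    unsigned long long out = modpow(c, d, n);      /* decrypted with the private key */

    printf("cipher = %llu, recovered = %llu\n", c, out);
    return 0;
}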

This security falls apart when the keys in use are older than their cryptographically secure time frame, or when the application doesn't follow the proper procedures. If keys are long and strong enough, are always replaced before their safe time frame approaches its limit, and connections aren't held open indefinitely, one should be safe relying upon the keys. However, a chain is only as strong as its weakest link. If the application doesn't follow the proper procedures for key exchange, or has errors in its authentication and validation routines, it could be leaving its users and their data open to compromise.

If your security application is closed source, there is the chance that a backdoor has been programmed into it that will go unnoticed. When a company has few employees in comparison to the sheer amount of code in its products, there is little to stop one rogue employee from sticking a tiny bit of extra code into a complex piece of security-related code. When the amount of code greatly outnumbers the coders, very few people will bother to look at a bit of tricky code and try to envision all the circumstances it will have to handle and how it will end up handling them.

Consider how someone hacked into a Linux kernel source server and tried to embed a backdoor into the source that would later make up Linux releases around the world; read about it here. They inserted a bit of code so that if two rare options were used together, an unprivileged user would gain administrator capabilities on that machine. Such a thing could easily go unnoticed if the person had the ability to modify the source at will without raising any eyebrows.
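
As I recall, the snippet hid inside a rarely taken error path and used an assignment where a comparison belonged. Here's a small user-space stand-in showing the same trick; the structure and names are mine, only the idea mirrors the reported code.

#include <stdio.h>

struct task { int uid; };           /* stand-in for the kernel's notion of "who am I" */

int main(void)
{
    struct task current = { 1000 }; /* an ordinary, unprivileged user */
    int options = 0x3;              /* stand-in for the two rare flags used together */

    /* Looks like a validity check, but "= 0" assigns instead of comparing: */
    if ((options == 0x3) && (current.uid = 0))
        printf("invalid option combination\n"); /* never printed: the assignment yields 0 */

    printf("uid is now %d\n", current.uid);     /* prints 0 -- i.e. root, in kernel terms */
    return 0;
}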

Let me offer an example. The SSH program allows a user to connect to a computer running an SSH server. When connecting, the user has to supply a user name and password, and can also tell SSH which modes to use.

Here are two existing options:

-1 Forces ssh to try protocol version 1 only.
-2 Forces ssh to try protocol version 2 only.


No one would normally think to force SSH to try everything it was going to try anyway. Imagine if the developer of an SSH server stuck in a bit of code so that if the client wanted only protocol version 1 and also only protocol version 2, access was granted even if the password was incorrect. This rogue developer could then gain access to every machine running his SSH server, with no one the wiser. Once this developer knows few at his company will look at his work, he has little to lose by adding such code. If he does happen to be caught, and the code was in a confusing section, he can say he was trying to handle an invalid case and apparently didn't do it right; it was only an accident.
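
To see how small and innocent-looking such a thing would be, here's a hypothetical sketch. None of these names come from any real SSH implementation; it's purely an illustration of the idea.

/* Hypothetical sketch -- not from any real SSH code. */
struct options {
    int force_protocol_1;    /* client passed the "-1" style flag */
    int force_protocol_2;    /* client passed the "-2" style flag */
};

static int password_ok(const char *user, const char *pass)
{
    (void)user;
    (void)pass;
    return 0;                /* stand-in for the real credential check */
}

int authenticate(const struct options *opt, const char *user, const char *pass)
{
    /* No honest client asks for "version 1 only" and "version 2 only" at once,
     * so this branch never fires during normal use or testing. */
    if (opt->force_protocol_1 && opt->force_protocol_2)
        return 1;            /* access granted, wrong password and all */

    return password_ok(user, pass);
}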

When your security program is closed source, do you really want to lay the security of your data in the hands of a disgruntled employee somewhere? Can you really trust the protection of walls that you can't see and that have no outside review process? Many internal reviews miss things in tricky sections, especially when the group in question takes pride in its work. Picture the internal reviewer, thinking to himself: hey, we wrote this, we're world class programmers, we're good at what we do; this code is too tricky to really go over with a fine-tooth comb, I'm sure it's right.

Keep in mind though, just because something is open source doesn't mean something can't be snuck in either. It's just less likely for malicious code to remain there for long if the application or library in question is popular and has enough people reviewing it.

But even with something open source, malicious code can be snuck in without anyone ever noticing; read this paper for background. Now while the attack described there, where one edits the compiler to recognize certain security code and handle it in a malicious manner, is a bit far-fetched, similar attacks a bit closer to home are quite possible. If you're using a Linux distro which offers binary packages, what really stops a package maintainer from compiling a modified application and putting that in the distro's repositories? Those running a secure environment may want to consider compiling certain packages themselves rather than trusting binaries whose contents we really have no clue about.

But based on this paper, do we have to worry about whether the compiler, the OS, or other libraries will produce the proper binary when we compile this security application ourselves?

Lucky for us, despite what newbie programmers may want, our programming languages aren't made up of a series of very high level functions such as compile(), handle_security(), and the like. That would make it much easier for someone to make the compiler or the library do something malicious when it encounters such a request in the source. In order for such an attack to be really successful, the compiler would have to understand every bit of code it's compiling to make sure the resulting program won't be able to detect the trojan, which is extremely tricky if not near impossible for a compiler to do. Not using such a high level compiler or virtual machine gives us a layer of security, in that it would be harder for someone to pass out an "evil compiler" that understands what the developer was trying to do and has the result do something malicious instead.

But if such an attack were to take place, we'd have to pull out hex editors and disassemblers to see that such code has been snuck in (something we must do with closed source applications anyway). Taking this a step further, it is theoretically possible, if the OS were affected, or if the compiler were so smart that it intimately understood it was compiling a hex editor or disassembler and the like, for it to stick in code that subverts file reading on executables and libraries, masking the malicious code even in the binary.

Now while some clever guy out there is thinking to himself, "Oh, I'm going to do that, it'll be completely undetectable", such a course of action is much, much easier said than done. I would be amazed if even a whole team of programmers could pull such a monumental task off. I wouldn't worry too much that such magical wool has been invented to pull over our eyes when we try to decode a binary in a safe environment.

But I would worry if the application or the libraries it depends on are closed source. And even if we have the source, I would question where the binary we happen to be using comes from. If you're using even an open source application for something critical, I would advise having the binaries for your application and related libraries examined in a safe environment just to be sure. I just hope no one out there has subverted an OS to alter non-executable reading and writing of executable files, having the OS strip and re-add code when executable files are transferred.

Sunday, April 1, 2007


File Dialogs - Take 2



My previous article on file dialogs generated much feedback, and I got varied responses from all kinds of people. I'll go over the feedback I got, more data I've received, and what ramifications the last discussion produced.

In my previous article, I didn't discuss Windows Vista at all, as I don't have a copy of it. However, several people contacted me with screenshots and described the system a bit.

Let's take a first look:


There is a lot going on here. Up on top, we have a crumb-based directory browser stolen out of GTK, but of course this dialog is better than what GTK offers. It also provides a refresh button, and has a recent-directory drop down. You also get back and forward buttons to jump all over when looking for something. A nice addition though is a search box. Not sure where the file is? Then search for it! A nice new intuitive feature (taken from Mac OS X, though).

Below this we have options to change what's shown, and the style it's presented in. The new directory button is also plainly visible. Then on the left, we have a quick location list like former versions of Windows had, but now in Windows 6 you can add and delete entries to your heart's content. I'm not sure if you can rename them though; readers, please write in regarding this. We then have the standard file listing from Windows 4+, with the ability to change the view as we expected. And to round it off nicely, we have the file input box to jump to file names quickly, and of course to type in a path to move to, like us power users want. File management features are also available.

But wait, we're not done yet, check this out:

As you can see, the "Folders" section on the left can be expanded to offer a tree view to browse your system. This borrows from the directory-only browser (alongside a file browser) from Windows 3, but offered in a more robust tree view. It seems a bit weird to see directories in both the directory and file browsers, but this should keep everyone happy. Many people were annoyed with Microsoft for combining the two in Windows 4+, as it was harder to navigate directories, and one had to jump past directories to find files.

It seems like with this new version, Microsoft is trying to please everyone, offering every type of browsing possible, and I applaud them for that. I'd be interested to know if you can turn off the directory display in the main file list pane. If anyone knows, please write in.

I'd like to personally play with this to see how it stacks up against KDE 3.5's file dialog, but this looks really solid. The only problem seems to be that they're still stuck with some of their virtual directory nonsense, such that you'll see Desktop/User and Desktop/Documents, when the actual tree is Users/User/Desktop and Users/User/Documents. Guess we can't have everything.

Next up, we'll be revisiting GTK. All but one of the responses to my last article agreed with me as to how bad GTK is. Some even wrote in offering demonstrations showing how it was worse than even I knew.

The one person who wrote in disagreeing offered some interesting data. No, he wasn't a developer telling me the GNOME/GTK folks were improving it, and he didn't actually disagree with what I described as being bad. He wrote in to say that he has a completely different dialog!

Let us look at our first screenshot:


As you can see, a location bar is provided along with everything else we were familiar with, so one can jump around quickly, and this happens to work well. The quick locations on the left are also combined into one list, so you can add and remove even the built-in ones. Not sure about renaming though. But wait, there's more!


As the above shows, it also has sane auto-completion, instead of one where you type /usr and end up with /usr/src. I asked for the source of these changes, wondering if perhaps they came from a new or in-development version of GTK or GNOME. I was told that he has had these dialogs since he set up his PC years ago, and that they came from a usability patch he had installed. Unfortunately though, he wasn't sure where he got it from, so I guess I'm still stuck trying to replace Firefox and GAIM on my machine.

Let us take a moment to ponder, though, that there are usability patches out there that vastly improve GTK/GNOME, yet we still have no hint of them making their way into the official versions. Perhaps if we start boycotting GTK apps, we'll see the developers do something sane for once. It'd also be nice if it weren't as slow as molasses.

Next, we come to the Qt file open dialog. Last time, I showed a preview of what Qt 4.3 was going to offer. There seemed to be no end to the responses thanking me for alerting people to the impending disaster.

A friend of mine who has a neat app he wrote using GTK told me how he recently added file browsing support and was very annoyed that he had to spend a lot of time writing a new file open dialog from scratch because of how utterly atrocious the built-in one was. He told me he had been considering switching over to Qt because he heard how superior it is, and how he wouldn't have to put up with such stupidity since it has sane stuff built in. However, when he saw what Qt 4.3 was planning, he promptly dropped any such considerations, as he didn't feel the need to switch to a GTK knock-off and reimplement the file open dialog yet again. Let us remember that GTK originally ripped off Qt; we don't need to flip the tables and pay attention to the $0.02 we get from developers who can't even figure out how to write a sane file dialog.

Another good friend of mine also took it upon himself to spread the word as much as possible. He mentioned it in #qt on Freenode, an IRC channel with many Qt developers. I'm told they were furious when they saw what changes were being planned.

Apparently all this criticism made its way back to Trolltech, and Ben Meyer quickly went to work to rectify the situation.

Here's what was in Qt 4.3's repository as of this past Friday:

As you can see, we're basically back to what Qt 4 had, except with quick locations added to the left. The quick locations allow adding and removing, and the settings are saved. Unfortunately there's no renaming though, so I'll likely end up with many directories labeled "src" confusing me. Also, when using the file name box to browse, the bug from the earlier Qt 4.3 file save name box is here too. If I enter "/usr/src", it'll switch to that path, but the name box will end up stupidly containing "src". Seems like someone forgot to do an S_ISDIR(st_mode) on stat(path) before blindly filling the box with basename(path) when Enter is pressed.
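
The fix is tiny. Here's a rough standalone sketch of the missing check (not actual Qt code, and the function name is made up) for what should be decided before the name box gets filled:

#include <sys/stat.h>
#include <libgen.h>

/* What should the name box show after Enter is pressed on `path`? */
const char *name_box_text(char *path)
{
    struct stat st;

    if (stat(path, &st) == 0 && S_ISDIR(st.st_mode))
        return "";            /* a directory: navigate there, leave the box empty */

    return basename(path);    /* a file: show just its name */
}
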
I have great faith in the Trolltech guys though; these guys care, and fix things promptly. Let's hope they notice this and fix it before 4.3 is ready. One neat thing about the new version is that you never need to refresh, as the dialog monitors the directory for changes. But don't worry, the thing is lightning quick, and doesn't seem to lag on anything. I even threw it against a directory with 20,000 files, and it displayed instantly.

Finally, regarding the KDE 3.5 dialog: I wrote last time that it was the best thing I reviewed, my only disappointment being no renaming. However, I was informed that you can rename with it. When you right-click on a file, the rename option is labeled "Properties". Once the properties come up, you can immediately rename, and the additional benefit is that you can also click check boxes to change the permissions on the file too! I never thought to look in Properties before, as I figured it would just give me info on the file, not actually allow me to change anything. Perhaps some better naming should go on over there to make it more intuitive, but it is now apparent that the KDE 3.5 dialog is definitely the best dialog I have actually reviewed.

I really like the idea of adding a search feature though, and the usefulness of crumb support is debatable. So I'll call it a toss-up between Windows 6 and KDE 3.5 as to which is the best, till I get a chance to get my hands on Vista.
However, KDE 4 will probably add search to its file open dialog, and I expect the clever guys at Trolltech to improve further if they receive enough feedback.

If you want the developers of your favorite API/OS/desktop environment to improve, why not point them to this and the previous file dialog reviews? The guys at Trolltech are definitely open to feedback. Just make sure you're ready for rejection if you try talking to the GTK/GNOME guys; they don't care about anything.