C++ is arguably the best primary language available today for large, complicated, advanced non-web applications. But what should one use to compile their C++ code? Can you even name a single C++ compiler?
C++ was created by Bjarne Stroustrup at AT&T in 1983. But instead of making a compiler for it, they opted to make a C++-to-C translator and dubbed it Cfront. In 1985, Stroustrup released a book on his language, The C++ Programming Language, moving it onto center stage. But still there was no compiler to be found, only a translator.
Now translators aren't so bad. After all, they give the programmer many additional features over the base language, such as new types, operators, and other goodies. These extras let you breathe a little easier in the art of programming. In fact, there exist many preprocessors and language extensions today that use translators to make life easier. Qt, for example, brings its own C++ extensions to the table, such as "slot functions", which work via a preprocessor that translates the code to pure C++ and adds any extra functions and data as needed. For minor additions, or for translators to C++ that deal with clearly defined objects, the generated C++ code is pretty good. But when dealing with entirely new constructs added to a language, generated code can be quite bad depending on the circumstances, or at least not as optimized as it could be if the compiler understood which high-level constructs the code came from.
For these reasons, having a real compiler is better than a translator. But what was the first actual compiler for C++? It might come as a surprise to some, but the first C++ compiler was actually GCC, in 1987, two years after C++ was exposed to the world. However, what C++ itself was, was determined by each new version of Cfront, and GCC did what it could to produce identical results (with extra features on top, of course). Over time other C++ compilers came out, each bringing its own slew of extra features to the table, and some of them even behaving differently than Cfront did.
In 1992, Microsoft's Visual C++ was released, and it understood code quite a bit differently than Cfront, causing many compatibility issues and earning MSVC++ a new meaning: Microsoft Vs. C++. Adding to these compatibility problems were the many new features being proposed all the time, such as templates, and the fact that Cfront development (the de facto standard) stopped in 1993, as it was deemed impossible to keep building such a complicated language on a translation framework. Since there were so many different implementations out there, and the "standard" one had decided to throw in the towel, the International Organization for Standardization (ISO) stepped in to standardize C++, as had been done for C in 1989.
ISO finished the standardization of C++ in 1997 and published the new specification in 1998. From this point on, the only real C++ is the one standardized in 1998 (although a second standard of C++ is being worked on, to be completed in the next few years), and nothing else can be accurately labeled C++. So now that we're about a decade later, guess how many C++ compilers there are?
And the correct answer to the above question is none! If you want to turn to GCC, it still doesn't support the "export" keyword, nor does it handle certain complicated classes entirely right. If you turn to MSVC, until very recently their implementation of the Standard Template Library was quite shoddy, and it was pretty bad with templates in general. More recent MSVCs still have issues though, see here for a short list from Microsoft itself. Aside from Microsoft's list, I'm told by some that they also found other issues that aren't listed. And for some strange reason, MSVC doesn't even handle scope properly by default for variables declared in the header of a block (such as a for loop), and instead has a separate option to enable proper scoping. Whose bright idea was it to make improper scoping the default?
So if you want to write a real C++ program today, what exactly should you use? Well, lucky for us, Edison Design Group decided to do what AT&T deemed impossible and make an up-to-date, fully standards-compliant C++-to-C translator, although EDG themselves don't distribute it to the general market. If you want to be able to build your C and C++ code properly, currently your only choice is to find a complete setup around EDG's translator, such as Comeau C/C++.
The drawback with Comeau C/C++, though, is that it isn't free. GCC is free, and Microsoft now also offers free versions of its C/C++ compiler, while Comeau C/C++ costs $50 per platform. Comeau C/C++ also isn't as portable as GCC, and runs properly on only a fraction of the OSs that GCC does. Being a translator, it also suffers from the less-than-optimal code generation issues described above, not to mention longer build times due to more memory usage and more steps involved. Comeau C/C++ also doesn't work by itself; on install it has to configure itself on top of an existing compiler such as GCC or MSVC.
It's scary that we don't even have a compiler for the first standardization of C++ and they're already working on a second one. C also had a second standardization done in 1999, which is mostly supported by GCC and will probably be fully supported soon enough. C99, of course, is also supported by Comeau C/C++, which works by translating C99 to C89 as needed.
Although the important question we have to ask is not "where are the C++ compilers?", but whether we will ever see a C++ compiler at all. GCC, for example, has its own propaganda written up about how much of the in-progress second C++ standard it supports, without even worrying about the first standard, or showing a page of how far along that is. Even Microsoft has a page describing some of its caveats. The only little snippet GCC has is this.
Even more disconcerting is that RMS, the idiotic hypocritical redefinitional head of the FSF, says he prefers not to have support for "export" in GCC, and prefers that GCC not follow the standard. His chutzpah is so bad that he even asked the standards committee if they could change the standard to reflect what he's implemented in GCC, and remove the export keyword altogether, since he doesn't want it to be there.
Of course someone may wonder what the big fuss about the export keyword is anyway. Good C/C++ programmers know that when breaking up functions into different files, it's best to put sizable functions in the C/C++ file, and prototypes for them in the header files for other code elsewhere to use. The export keyword is supposed to let templates work the same way: it tells the C++ compiler that a template's definition lives in a source file rather than the header, so the compiler needs to save a special uncompiled (or intermediate representation) version of it separately, in order to instantiate it properly when linking other code that calls it.
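To make the intent concrete, here is a minimal sketch of how export was meant to be used. The file and function names are hypothetical, and keep in mind that only EDG-based front ends such as Comeau's ever accepted this, so it won't build on GCC or MSVC:

```cpp
// twice.h -- only the declaration of the exported template lives in the header.
// (Hypothetical file and function names, purely for illustration.)
export template <typename T>
T twice(T value);

// twice.cpp -- the definition stays out of the header; an export-capable
// compiler keeps an intermediate form of it around for later instantiation.
// #include "twice.h"
template <typename T>
T twice(T value)
{
    return value + value;
}

// main.cpp -- uses the template having seen only the declaration; the
// instantiation of twice<int> is deferred until (pre-)link time.
// #include "twice.h"
int main()
{
    return twice(21) == 42 ? 0 : 1;
}
```

With every mainstream compiler today, the only workable alternative is to move the definition of twice() into the header itself, which is exactly the trade-off discussed below.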
The problem with export here is that some of the code won't be compiled until link time, meaning a more annoying link process and the possibility of compile errors at link time. So we have to decide which is the lesser of two evils: either start sticking code in header files (as people are now forced to do), or have code compiled at link time (which people like RMS don't even want to give us the freedom to do). Or do we? Is there perhaps another implementation option?
MSVC, where Visual Studio itself, annoyingly, won't let you compile a file without having a whole project for it, could in fact use this to its advantage. Since it has a full file list, and also keeps class lists and other things such as function prototypes handy for auto-completion and tool-tip hints, it already knows where it has to grab actual code from. An IDE like Visual Studio just has to pass its compiler the actual file to compile, plus a list of other files to pull templates from, and it's all transparent to the programmer. Any compiler could do this, although it would probably get quite annoying if you had to specify all the locations by hand. Of course a sane default might be to look for '.cpp' files matching any template function prototype found in a header included by the C++ file currently being compiled.
Post your thoughts on this subject, and any other ideas you might have to sanely implement the export keyword in C++ without running into some of the cons listed above.
Hashing out Hashing and Hash Libraries
Posted by insane coder at Wednesday, June 13, 2007
This topic is one that I enjoy quite a bit, but unfortunately there are a lot of problems out there.
I'm sure everyone has heard of hashing before, but many are not quite sure what it is, how it works, why it's used, what its various uses are, or why there are so many hashing algorithms.
The hashing idea started when programmers in various situations determined a need to generate a "key" or "unique identifier" for chunks of data. Say you had 20 strings, each 300 characters long. To find out whether a new string is in the list (and if so, which one), you'd need to go through 20 entries, and depending on how similar they are, you'd have to do up to 300 comparisons on each one. Needless to say, finding such a needle in a haystack is slow. It also means that to compare, one needs an exact copy stored in the database. So for example, if I was passing you a file and we wanted to know if it transferred correctly, it would be annoying (to say the least) for me to send you a second copy for comparison. Furthermore, there would be no guarantee that the second transfer worked correctly either.
For all these reasons (and lesser ones as well), hashing was invented. The idea in hashing is that we apply a truncating transform to the data in question. Imagine we have a number from 0 to 99999999999999999999; we could have a truncation where we add up all the digits over and over until we get a single digit. So if our number was 123456, we could add that up to 21, and that to 3 (a tiny code sketch of this digit-sum idea follows the list below). Although converting a number of arbitrary size down to a single digit isn't very unique, as we'll only have 10 possibilities, it follows the basic idea of hashing. A good hashing algorithm will have drastically different results for similar numbers, and wouldn't have a situation where 123456, 654321, and 123465 all convert to the same hash. What we need to realize, though, are three key points in hashing.
1) There is truncation. One can't determine the original number from a single hash alone.
2) There is a limited size to the hash itself, and we can't uniquely identify all data with a smaller key. We only have as much uniqueness as the number of possible hash values. Meaning for our hash above, we'd get unique hashes only while the input was a single digit; once we're past that point, we have "collisions". If the hash is 4 bytes, for example, uniqueness is only guaranteed on input blocks of up to 4 bytes.
3) Hashing algorithms are completely arbitrary mathematical transforms. Normally, when you read a good programming novel (source code), you like to see some sense in the code, like objects that model real-world objects, or control flow that matches what the user sees. When you look at hashing code, you normally just see a bunch of shifts and rotates, xor'ing data with all sorts of seemingly random values, producing results that look like A4B31AAD49879FE, which doesn't mean anything to anybody. While good hashing algorithms try to focus on transformations that make the hash/key end up looking drastically different for similar inputs, the code to do so really is just gibberish from any standpoint. One shouldn't ask why the author is adding up all the digits rather than, say, flipping every other bit, or rotating every 3rd bit over by 4.
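Here is a minimal sketch of the digit-sum "hash" used in the example above. The function name is made up, and with only 10 possible outputs it's obviously useless for real work; it just illustrates the idea of a truncating transform:

```cpp
#include <cstdint>
#include <iostream>

// Repeatedly add the decimal digits of a number until a single digit remains.
unsigned digit_sum_hash(std::uint64_t n)
{
    while (n > 9) {
        std::uint64_t sum = 0;
        for (std::uint64_t m = n; m > 0; m /= 10)
            sum += m % 10;   // peel off the lowest digit and accumulate it
        n = sum;
    }
    return static_cast<unsigned>(n);
}

int main()
{
    std::cout << digit_sum_hash(123456) << '\n';   // 1+2+3+4+5+6 = 21 -> 2+1 = 3
}
```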
Now that we have a basic understanding of what hashing is, let's look at various hash uses more in depth.
One of the most common uses in programming is the hash table. Say you have a programming language with exactly 64 keywords. You want to write a compiler for this language, so as you parse the input source code, you have to compare every word against your keywords. You could store your keywords in a table in lexicographical order and binary search for each word, but that generally means you'll bump into 5 keywords on average that you'll have to do string comparisons on before you know for certain whether the word in question is a keyword or not, and if so, what to do with it. While 5 isn't much, string comparisons are slow, as they involve many comparisons internally, and the cost is multiplied by the number of words in the source code. Now, while our example has 64 keywords, you may find other examples where the amount is 1024, bumping the number of failed checks up to 10; put together, this adds up to slowness in your program that doesn't have to be there.
The trick in this case would be to devise a hashing algorithm for your table that produces 64 unique possibilities, with each key mapping to exactly one keyword. Then, instead of binary searching with many string comparisons, one can hash the word in question, jump directly into the array, and do one last comparison to see whether it was a keyword or not. If the hash algorithm is 3 operations and the average string comparison is 10 operations, you just cut keyword-check overhead from 50 operations down to 3. Of course some of you are thinking that a hash algorithm mapping so perfectly for this particular situation probably doesn't exist. However, with hashing we have many tricks up our sleeve to approach this ideal. If, say, our hashing algorithm produced 80 possible key results but otherwise gave each keyword a unique key, we'd still reach our goal, as we'd just have 16 holes in the array which, if jumped into, tell us immediately that the word in question was not a keyword. There are also techniques where more than one key can occupy a slot in the array, as it's really an array of arrays, and hashing turns into a kind of more optimal binary search where, instead of cutting the array in half, you eliminate 90% or more of the array each time. One could also set up a situation where we instead store both the hash and the strings in an array, sort the array by the hash, and then only binary search for the hash, thus turning lengthy string comparisons into single integer comparisons (provided that the hash result is an integer of a fixed width).
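As a rough sketch of the idea (the class, the toy hash, and the sizes are all made up for illustration; a real compiler would tune the hash so each keyword lands in its own slot):

```cpp
#include <string>
#include <vector>

// 64 keywords hashed into an 80-slot bucket array, leaving 16 holes.
// This is the array-of-arrays variant: with a well-tuned hash each
// bucket holds at most one keyword, so a lookup is one hash plus one
// string comparison instead of a binary search.
const std::size_t BUCKETS = 80;

unsigned toy_hash(const std::string &word)
{
    unsigned h = 5381;                 // arbitrary mixing, just for the sketch
    for (std::size_t i = 0; i < word.size(); ++i)
        h = h * 33 + static_cast<unsigned char>(word[i]);
    return h;
}

class KeywordTable {
    std::vector<std::string> buckets[BUCKETS];
public:
    void add(const std::string &keyword)
    {
        buckets[toy_hash(keyword) % BUCKETS].push_back(keyword);
    }
    bool is_keyword(const std::string &word) const
    {
        const std::vector<std::string> &b = buckets[toy_hash(word) % BUCKETS];
        for (std::size_t i = 0; i < b.size(); ++i)  // usually 0 or 1 entries
            if (b[i] == word)
                return true;
        return false;
    }
};
```

An empty bucket (one of the 16 holes) means the word can be rejected without doing any string comparison at all.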
Using the idea of hashing to avoid lengthy string comparisons also works well with all kinds of data structures. You could, for example, change the key in a binary tree from a string into a hash of the string, to speed up tree traversal. Hash tables are also very useful if you need to keep track of data that is processed on the fly, using the array-of-arrays method touched on above.
That is your basic data structure use of hashes; now I'd like to focus more on the "signing" aspect. Today there is a multitude of "cryptographic hash functions", designed mathematically so that it should be very time-consuming for a program to work out how to generate a series of data matching a given hash, forcing one to resort to brute force. Once brute force is required, and the possible data set size is well over the trillions, it isn't feasible (using conventional methods at least) to find a matching set of data for the hash. This creates the illusion of uniqueness of the key, making it a way of signing a set of data so one can validate its integrity after transfer. That's why you quite often see websites listing the "MD5" or "SHA-1" of a file you want to download.
This idea can also be expanded to databases, to provide a way to identify data. Say you want to set up a store that buys and sells used DVDs. When someone comes into your store with a DVD in a case, you want to make sure it's a real DVD and not some copy (or if it is a copy, at least true to the original), that it's not scratched to the extent that some of it can't be read, and you need to know which edition it is and which region it's for. How does one set up a used DVD store so that a DVD can be verified as good before buying it, and its edition determined so it can be labeled properly?
Keeping an archive of every DVD somewhere isn't practical, and if each disc had to be compared over a network, the transfer time would just be too long. But if one were to set up a network database holding some hashes of the data alongside other information from the DVD such as track names, language lists, etc., a used DVD store could use that database to identify and validate DVDs. The people running these stores can all use a program which they pass new DVDs they buy through. The program would compute 2+ cryptographically secure hashes over the full data of the DVD, and rip all other info that can be obtained, using a program such as MPlayer. Then the user can enter additional info such as what name is on the box and which edition the box says it is. This data is then placed in a table on a network (website). If one wants to identify a DVD, hashes are built off of it and then looked up in the table. The primary key of the table should be the hash with the smallest length, with the table sorted by it, so the correct section of the table can be binary searched very quickly. Then the other details, such as the extra hashes and track titles, can be compared to make sure this is the right DVD, and not one which happened to match a single hash.
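A minimal sketch of what such a lookup could look like, assuming hex-encoded hashes and made-up record fields (the real schema, networking, and ripping are out of scope here):

```cpp
#include <algorithm>
#include <string>
#include <vector>

// Hypothetical record layout for the DVD database described above.
// The "short" hash is the primary key the table is sorted by.
struct DvdRecord {
    std::string short_hash;   // e.g. a 160-bit hash, hex encoded
    std::string long_hash;    // e.g. a 512-bit hash, hex encoded
    std::string title;
    std::string edition;
    std::string region;
};

bool by_short_hash(const DvdRecord &a, const DvdRecord &b)
{
    return a.short_hash < b.short_hash;
}

// table must already be sorted by short_hash.
const DvdRecord *find_dvd(const std::vector<DvdRecord> &table,
                          const std::string &short_hash,
                          const std::string &long_hash)
{
    DvdRecord key;
    key.short_hash = short_hash;

    std::vector<DvdRecord>::const_iterator it =
        std::lower_bound(table.begin(), table.end(), key, by_short_hash);

    // Walk every record sharing the short hash and confirm with the second
    // hash, so a collision on one hash alone can't produce a false match.
    for (; it != table.end() && it->short_hash == short_hash; ++it)
        if (it->long_hash == long_hash)
            return &*it;
    return 0;   // not in the database
}
```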
Once such a setup is in place, someone running a DVD store just has to stick a DVD someone is trying to sell them into a machine, and in a few minutes they can know whether it's real, whether it can be read fully, and which edition it is. If it's not in the database, they don't have to make a mistaken purchase, or spend time watching the whole thing to make sure it's complete and doesn't stop anywhere in the middle. And with the information pulled from the web, they know whether it has to be labeled widescreen edition or deluxe edition or whatever (which isn't always written on the DVD itself, just the box, which may not be part of the sale, or may even be the wrong box with the intention to fool the buyer).
Some key points from the setup above: use multiple hashes, sort by the smallest (perhaps weakest) hash, keep auxiliary data to verify against, and use cryptographic hashes. If hashes were chosen which could be replicated quickly, people who want to beat the system could make a DVD which just has some matching track labels or whatever data, along with random data crafted to match the hashes in question. Using 2+ hashes also makes it much harder to generate false matching data, as the methods for cracking different hashes are very different from each other, and having to satisfy them simultaneously increases the computational complexity enormously.
There are various hashes today with varying levels of security, complexity, and length which are good for many of the purposes mentioned above and others. One popular family is CRC-16, CRC-32, and CRC-64, with the numbers referring to how many bits are in the hash. CRC is quick and easy to compute (as well as to crack), and since it fits into native integer sizes on CPUs, it's very good for short tables you keep in RAM that you want to deal with quickly and repeatedly. However, it's not good for verifying data, given how easy it is to crack, not being cryptographic at all.

Focusing on cryptographic hashes, next up is the nice and popular MD5. MD5 was an old favorite which has been considered broken for a while now, yet is still in wide use for signing data. MD5 is 128 bits, and is a decent choice to use alongside other hashes for extra protection, but shouldn't be used by itself. Another very popular choice today is the SHA series, which offers truncated and non-truncated flavors. Personally, I find truncating a hash to be pointless, so we'll just focus on SHA-1 (160 bits), SHA-2 with 32-bit words (256 bits), and SHA-2 with 64-bit words (512 bits). SHA-1 was considered a step up from MD5 and 'the thing to use', but it isn't that much more secure, and while not busted wide open as of the time of this writing, it has been weakened significantly and shouldn't be used by itself either. SHA-2 is significantly more secure, and its longer bit sizes provide more uniqueness of keys. If speed is a consideration, the choice between the 32-bit and 64-bit variants may depend on which CPUs your hashing will run on, although of course either can be computed on any kind of CPU; it's just a matter of speed. SHA-512 is regarded as one of the strongest hashes today, but it's hardly the only game out there.

There's also RIPEMD-160, the same size as SHA-1 and built along the same lines as MD5; it also has its problems, but is a favorite in certain circles. Looking for more unusual but strong hashes, there is Tiger, which is 192 bits and quite popular in certain P2P applications. Tiger being less popular, it hasn't seen the scrutiny some of the other hashes have, so it might not be as good as the SHA series, or it might be better, hard to say, but to my knowledge it doesn't have any significant attacks on it. Tiger2 is also in the works, which is supposed to tighten any issues discovered in the first edition, and is supposed to be out in the not so distant future. Lastly, there is Whirlpool, which like the 64-bit SHA-2 is 512 bits, and uses heavy transformations, making it an excellent choice for hashing, and it may be the best popular one currently available, although like Tiger it hasn't seen as much scrutiny as the SHAs. Whirlpool is currently at version 3, which strengthened it against issues pointed out since its invention.
Once you decide which hashing algorithm to use, you may wonder where exactly you should get the code from. Since hashing code isn't really supposed to make sense to someone looking at it, clarity isn't what one should be looking for in a good library. One should focus more on correctness, completeness, portability, usability, and speed.
If you're looking for BSD-licensed versions of MD5, RIPEMD, SHA-1, and SHA-2, you can grab good, accurate source right out of OpenBSD's CVS. Unfortunately, the source has some BSDisms in it which will have to be removed if you want to use it elsewhere. It also doesn't make use of the C99 types which can be used to get integers of exact sizes, which is very important when you want portable, accurate hashing code.
There's Botan, which has good, accurate implementations of every hash we discussed here, and even some assembly-optimized versions, all under the BSD license, but it's tangled up with other code, making it a bit annoying if you just want to grab the source for a single hash algorithm. The portable code is also in C++, which isn't good for C-only applications.
Mhash is another library one can use, under the LGPL, which covers all the hashes. I notice, though, that its output for Tiger is backwards, which also makes me worry about endian issues in general with it. It also suffers from other issues mentioned above.
Crypto++ is a popular C++ library for hashing (and encryption) placed in the public domain. This library is rather excellent, although annoying to use if you only want to grab one source file for a single hash. It also has some assembly-optimized versions available, but doesn't use the function pointer method explained in my previous blog post, and instead checks each time whether your CPU has the instructions needed to compute the hash before computing it. Also, in a test I ran, the SSE2-optimized version of SHA-512 crashed under Mac OS X (Intel, of course), which may be due to unaligned memory accesses.
If you just want SHA hashing, Brian Gladman has a popular library to keep you covered. It's written in C and dual-licensed as BSD and GPL. But it doesn't use the exact-width C99 types mentioned above.
If you're looking for Whirlpool, be careful: Wikipedia links to source which seems to work at first, but whose results change drastically based on what size chunks the data is passed in. I noticed that buffer sizes of 1-8191 bytes produce different results than hashing the same data in 8192-byte chunks.
LibTomCrypt, another implementation in C placed in the public domain, seems very nice on the surface. But on further testing, I found that if the length of the data passed to it for Tiger or Whirlpool hashing is not a multiple of 64, it gives improper results. There could be problems in other situations which I did not test. I really liked this one otherwise, as the code is pretty clean and many of the implementations are rather fast.
OpenSSL, which is very popular, written in C, and under an Apache-style license, is a good choice if you want the popular hashes, though it doesn't cover Tiger or Whirlpool (it has MD5, RIPEMD, and SHA). It also has assembly implementations in certain cases, but the assembly code is really a nightmare, and it's hard to integrate just part of OpenSSL into a project.
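For what it's worth, hashing streamed data with OpenSSL follows the usual init/update/final shape. Here is a rough sketch using its EVP digest interface; the function names below are from current OpenSSL releases (older versions spell a couple of them differently, as noted), so treat it as an approximation rather than a drop-in for any particular version:

```cpp
#include <openssl/evp.h>
#include <cstdio>

// Sketch: hash a file in chunks and return the SHA-256 digest in 'out'.
int sha256_file(const char *path, unsigned char out[EVP_MAX_MD_SIZE],
                unsigned int *out_len)
{
    std::FILE *fp = std::fopen(path, "rb");
    if (!fp)
        return -1;

    EVP_MD_CTX *ctx = EVP_MD_CTX_new();      // EVP_MD_CTX_create() on older releases
    EVP_DigestInit_ex(ctx, EVP_sha256(), 0);

    unsigned char buf[8192];
    std::size_t n;
    while ((n = std::fread(buf, 1, sizeof buf, fp)) > 0)
        EVP_DigestUpdate(ctx, buf, n);       // feed data in whatever chunk sizes arrive

    EVP_DigestFinal_ex(ctx, out, out_len);
    EVP_MD_CTX_free(ctx);                    // EVP_MD_CTX_destroy() on older releases
    std::fclose(fp);
    return 0;
}
```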
If you're looking for another implementation of SHA in C under the BSD license, and written rather well (though perhaps not as fast as Brian's), look no further than Aaron Gifford's implementation. If I'm not mistaken, it's also his SHA-2 in OpenBSD (but messed up a bit).
A good source which covers our favorite hashes properly is the md5deep project; the only thing missing is RIPEMD code. I rather like this one and think it's a good place to start.
Of course, there are also the official implementations of Tiger, Whirlpool, and RIPEMD. The official Tiger code seems to be one-pass only, though, with no simple hash_init(), hash_update(), hash_finalize() methods. And the official Whirlpool and RIPEMD implementations are rather slow; some of the other Whirlpool implementations pointed out above were 25% faster.
It'd also be nice to have some kind of consistency when it comes to which functions to use. Before choosing, make sure you test well on various machines of different endianness and bit widths, and on various data sizes and chunk sizes. You may be surprised how many libraries just aren't written right and produce wrong data.
I'm personally working on C99 implementations of these, which I'm testing on Win32, Linux x86-32, Linux x86-64, Mac OS X PPC, and Mac OS X Intel, in a large variety of situations to certify correctness and portability. Of course I also plan on having a consistent interface for all of them, with each contained in a single .h/.c pair so it can be easily dropped in anywhere. Right now I'm trying to optimize my Tiger implementation further, and adding more assembly variations using the function pointer method documented previously, to give the best speed while providing portability and usability. Let me know if you're interested in using it when I'm done.
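To give an idea of the kind of consistent interface I mean, a header for one algorithm might look something like the sketch below. The names, the context fields, and the file layout are purely illustrative (not the actual API of my library), and it's written in plain C-compatible style so it drops into C99 code as well as C++:

```cpp
// tiger.h -- hypothetical interface sketch; every algorithm would ship as its
// own .h/.c pair exposing the same init/update/finalize triple.
#include <stddef.h>
#include <stdint.h>

typedef struct {
    uint64_t state[3];          // Tiger's 3 x 64-bit chaining state
    uint64_t length;            // total bytes hashed so far
    unsigned char buffer[64];   // partial input block (Tiger works on 64-byte blocks)
    size_t buffer_used;
} tiger_ctx;

void tiger_init(tiger_ctx *ctx);
void tiger_update(tiger_ctx *ctx, const void *data, size_t length);
void tiger_finalize(tiger_ctx *ctx, unsigned char digest[24]);   // 192 bits

// Typical use:
//     tiger_ctx ctx;
//     tiger_init(&ctx);
//     while (there is more data)
//         tiger_update(&ctx, chunk, chunk_length);
//     tiger_finalize(&ctx, digest);
```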
While it may seem ridiculous for yet another person to make yet another hash library, it is rather appalling how many out there are broken, have very bad bugs, or behave very differently from one system to another. Don't make the mistake of using a library which hasn't been tested heavily in a variety of situations.