Know your system calls for reading directory entries
You'll want to create some sort of library around the native directory functions in order to use the most efficient path on each operating system. Operating systems vary widely in what they provide.
Sometimes you have multiple APIs, or a single API with many options you can turn on and off. Unless the provided API is itself doing everything as efficiently as possible (such as returning the file names already sorted), you'll want to use the lowest level API, or turn off every option it provides.
If bidirectional or scanning options are provided, turn them off, as you'll want forward only. The overhead for some of these features can significantly slow down the process of obtaining a file list. Same goes for anything else that adds overhead.
Be aware of what information is provided other than file names. Some OSs will return a structure which may also contain the file size or file permissions alongside each name. If you're filtering by any of these, use that information directly, instead of making an additional call to stat() or similar.
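For example, on Linux and the BSDs, readdir() already hands back a d_type field alongside each name on most file systems, letting you filter by file type without a stat() per entry. A minimal sketch (d_type isn't available everywhere, hence the DT_UNKNOWN case):

#include <dirent.h>
#include <stdio.h>

static void list_regular(const char *path)
{
  DIR *d = opendir(path);
  if (!d) { return; }
  for (struct dirent *e; (e = readdir(d)); )
  {
    if (e->d_type == DT_REG) { puts(e->d_name); } //The type came free with the entry
    else if (e->d_type == DT_UNKNOWN) { } //File system didn't say, a stat-family fallback goes here
  }
  closedir(d);
}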
In addition to the above, if you need to filter by file size or file permissions, and the API does not provide that information directly, avoid the need to concatenate strings to get your information. A call to stat() would require concatenating a file name onto the directory name if it's not in the current directory. If your OS provides for it, use fstatat() or similar instead (see "OpenGroup Extended API Set Part 2").
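A sketch of the fstatat() approach, statting an entry relative to the already-open directory so no "directory/file" string concatenation is needed (entry_size() is a hypothetical helper):

#include <dirent.h>
#include <fcntl.h>
#include <sys/stat.h>

static int entry_size(DIR *d, const char *name, off_t *size)
{
  struct stat st;
  if (fstatat(dirfd(d), name, &st, AT_SYMLINK_NOFOLLOW) != 0) { return(-1); } //Relative to the open directory, no path building
  *size = st.st_size;
  return(0);
}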
Process file names in an optimized manner
Once you start getting back file names, before anything else, determine the length of each name, as all other operations will need to know it. Pretty much every operating system / file system limits the length of a file name to 255 bytes, so the length fits in a single byte. Compute the length once (if the API doesn't already hand it to you) and keep reusing this cached value.
If you want to do extension filtering, now is the time to do so. Do not use strrchr(), or anything of the sort, to find where the extension begins (searching for a period). Functions like strrchr() first travel to the end and then look back, effectively being O(2N). Since you already know the length of the string, use an O(N) function that looks back from the end. Perhaps even one which can scan multiple bytes (whole CPU words) at a time.
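A minimal sketch of such a backwards scan, byte-at-a-time, using the cached length (a word-at-a-time variant follows the same idea):

#include <stddef.h>

static const char *find_ext(const char *name, size_t len)
{
  for (const char *p = name + len; p > name; )
  {
    if (*--p == '.') { return(p + 1); } //Extension starts right after the last period
  }
  return(NULL); //No period, no extension
}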
For extension filtering, the most common method is generally one that breaks down to a series of compares. Put a little more thought into your algorithm usage! Either sort your list of extensions and binary search it for a match, or use perfect hashing. In either case, make sure the sorting or hashing phase is performed just once.
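For example, a binary search over an extension list sorted once up front; the extensions here are just placeholders:

#include <stdlib.h>
#include <string.h>

static const char *exts[] = { "bmp", "gif", "jpg", "png" }; //Must be kept sorted for bsearch()

static int ext_cmp(const void *key, const void *elem)
{
  return(strcmp((const char *)key, *(const char * const *)elem));
}

static int ext_matches(const char *ext)
{
  return(bsearch(ext, exts, sizeof(exts) / sizeof(*exts), sizeof(*exts), ext_cmp) != NULL);
}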
For copying the file names you want elsewhere, use memcpy() instead of strcpy() and similar. Don't forget to pass around the length for other uses too.
Use efficient data structures efficiently
The next question is how to keep track of your list of files. The two most common ideas are to use a vector and just keep adding to it, or to use a linked list. Both of those solutions are wrong.
A vector, while great in theory, has some performance and memory issues. When a vector can no longer contain what is being added to it, it doubles in size. That doubling can cause up to 99% more allocation than is actually needed during the growing process, and comes with a lot of copying around. This may suit many workloads, but it won't always suit the file list workload well.
A linked list, on the other hand, is one of the worst possible data structures you can use. Every entry carries 100-200% extra overhead for pointers, and it usually means a long series of small allocations. Many small allocations lead to performance issues and a lot of memory fragmentation.
The solution I would recommend is either a tweaked vector, or a combination of a vector plus a linked list. Tweaking a vector starts with reserving a set amount of space up front. Have your vector by default start with room for, say, 512 entries, and only then grow as needed. When you see your vector is full, instead of doubling in size, reserve another 512 entries. Smaller requests are more likely than a doubling in size to be fulfilled by realloc() in the same location, and very large directories become increasingly rare at each new milestone. On top of all this, have your program remember how many files were in each of the past 10 or so directories it looked in (with filtering options, of course). Then, when loading a directory you recently loaded in the past, you can allocate an amount which is usually exactly what you need, or needs only one more chunk allocated onto it. Keep in mind that most users will keep using the same few directories over and over again with each application.
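A minimal sketch of such a tweaked vector, with hypothetical names (file_list, file_list_push()); each stored pointer is meant to point at a length-prefixed name in the pools described further below:

#include <stdlib.h>

#define CHUNK 512 //Entries reserved per growth step

struct file_list
{
  const unsigned char **names; //Pointers into the name pools
  size_t count, capacity;
};

static int file_list_push(struct file_list *fl, const unsigned char *entry)
{
  if (fl->count == fl->capacity)
  {
    //Grow by a fixed chunk, not by doubling, so realloc() has a better chance of extending in place
    const unsigned char **p = realloc(fl->names, (fl->capacity + CHUNK) * sizeof(*p));
    if (!p) { return(-1); }
    fl->names = p; fl->capacity += CHUNK;
  }
  fl->names[fl->count++] = entry;
  return(0);
}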
The combination of the vector and linked list would instead build a structure out of the two ideas combined. Create your array of 512 entries, along with a fill pointer and linked list pointers. When your array is full, allocate another block of this structure as the next link in the chain. This ensures there is never a need to reallocate or copy data around during the growing phase. Of course, keep a global pointer to the current link being filled, instead of iterating to it each time you add a new entry.
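A sketch of that combined structure, again with hypothetical names; the tail pointer keeps appends O(1):

#include <stdlib.h>

#define BLOCK_ENTRIES 512

struct file_block
{
  const unsigned char *names[BLOCK_ENTRIES]; //Pointers into the name pools
  size_t used; //Fill count for this block
  struct file_block *next;
};

struct file_block_list { struct file_block *head, *tail; };

static int file_block_push(struct file_block_list *l, const unsigned char *entry)
{
  if (!l->tail || l->tail->used == BLOCK_ENTRIES)
  {
    struct file_block *b = calloc(1, sizeof(*b)); //No realloc(), no copying during growth
    if (!b) { return(-1); }
    if (l->tail) { l->tail->next = b; } else { l->head = b; }
    l->tail = b; //Remember the current link, don't iterate to it
  }
  l->tail->names[l->tail->used++] = entry;
  return(0);
}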
Which of these two options you use depends on how you want to sort and display your files. Each data structure has pros and cons for different use cases. Before we get into that, know how to store your file names themselves.
The above description was for the pointers to the file names; the file names themselves should NOT each get a separate block of allocated memory. That's inefficient. Instead, allocate large chunks of memory, say 16KB each, and store your file names in this memory pool. Forget the silly trailing null if you can; store the length before the name itself. The two data structures explained above for containing lists of files will point into these pools. If a pool can't contain another entry you're about to add, allocate another 16KB chunk, point to it from the previous one, and start allocating in this new one. If the need for a new chunk was caused by a rare overly long file name, say 230 bytes, when only 200 bytes were free, you can implement an algorithm to save some overhead by seeing if the next couple of file names fit into the end of any previous pools. The pools in general will probably waste less than your OS's malloc() does for small file names, and you'll certainly get better performance. Deallocation will also be a quick (backwards) traversal over large chunks instead of many small ones.
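A sketch of such a pool, with a hypothetical pool_add() helper; note the length byte stored before the name, the lack of a trailing null, and the memcpy() with the cached length:

#include <stdlib.h>
#include <string.h>

#define POOL_SIZE (16 * 1024)

struct name_pool
{
  struct name_pool *prev; //Chain for the quick backwards deallocation
  size_t used;
  unsigned char data[]; //Entries: one length byte, then the name, no null
};

static unsigned char *pool_add(struct name_pool **cur, const char *name, unsigned char len)
{
  if (!*cur || (*cur)->used + 1 + len > POOL_SIZE)
  {
    struct name_pool *p = malloc(sizeof(*p) + POOL_SIZE);
    if (!p) { return(NULL); }
    p->prev = *cur; p->used = 0;
    *cur = p;
  }
  unsigned char *entry = (*cur)->data + (*cur)->used;
  entry[0] = len; //Store the length before the name itself
  memcpy(entry + 1, name, len); //memcpy(), not strcpy(), the length is cached
  (*cur)->used += 1 + len;
  return(entry);
}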
Sort efficiently
The first part of sorting is comparison. strcmp()/strncmp() are not the functions to use. You want something based on memcmp(), as that works faster thanks to the length of the file name already being cached. Of course, that's only if you want standard sorting.
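For example, a comparison over the length-prefixed entries from the pool sketch above (the shorter name wins a tie on the common prefix):

#include <string.h>

static int name_cmp(const unsigned char *a, const unsigned char *b)
{
  unsigned char la = a[0], lb = b[0]; //Cached lengths, no strlen()
  int r = memcmp(a + 1, b + 1, (la < lb) ? la : lb);
  return(r ? r : la - lb);
}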
If you want case insensitive sorting, other methods will have to be used. If you're only dealing with ASCII file names, and you're not worried about file names containing control characters, or about some symbols ending up equal in value, you can compare 4 bytes at a time, OR'ing each with 0x20202020 (0x2020202020202020 for 8 bytes at a time). You're probably out of luck otherwise for case insensitive sorting, unless someone knows of any good tricks to use.
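A sketch of that trick, assuming unaligned loads via memcpy() are acceptable on your target; it compares len bytes (the caller passes the shorter cached length and breaks ties on length, as in name_cmp() above), and inherits the caveat that some symbols such as '@' and '[' fold onto letter values:

#include <stdint.h>
#include <string.h>

static int fold_cmp(const unsigned char *a, const unsigned char *b, size_t len)
{
  while (len >= 8)
  {
    uint64_t wa, wb;
    memcpy(&wa, a, 8); memcpy(&wb, b, 8); //Word-at-a-time loads
    if ((wa | 0x2020202020202020ULL) != (wb | 0x2020202020202020ULL)) { break; } //Differ somewhere in these 8 bytes, find order below
    a += 8; b += 8; len -= 8;
  }
  while (len--)
  {
    int ca = *a++ | 0x20, cb = *b++ | 0x20; //ASCII case folding
    if (ca != cb) { return(ca - cb); }
  }
  return(0);
}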
If you need "natural sorting", where numbers are compared to each other, so file1 precedes file15, and file2 follows file1 not file15, there's a few implementations floating around online and most are just copies of one another. Sadly, the most popular one I see, which is present in two operating systems, and the native implementation in a particular programming language, happens to incorrectly handle numbers past a certain magnitude (did anyone even test the implementations or just accept them at face value?), and has a worse case of O(3N). So beware of what you use or base your code off of.
I'd recommend something along these lines:
#include <ctype.h>
#include <stdbool.h>

static bool digit(char c) { return(isdigit((unsigned char)c)); } //Cast avoids undefined behavior on high-bit bytes

int strnatbase(const char *s1, const char *s2)
{
  for (; *s1; ++s1, ++s2) //No need to check s2, as we compare it to s1, and it won't equal if it's null
  {
    if (digit(*s1) && digit(*s2)) //If they're both digits, use this special handling
    {
      while (*s1 == '0') { ++s1; } while (*s2 == '0') { ++s2; } //First skip leading zeros
      bool d1 = digit(*s1), d2 = digit(*s2); //Are we still in a run of digits for both strings?
      if (d1 != d2) { return(d1-d2); } //If for only one of them, return that as the difference
      for (; d1 && (*s1 == *s2); ++s2) { d1 = digit(*++s1); } //Keep going while we have matching digits
      d2 = digit(*s2); //If we broke the above loop because a single string ran out of digits
      if (d1 != d2) { return(d1-d2); } //Return that as the difference
      for (const char *p1 = s1, *p2 = s2; d1; ) //Otherwise, difference in the digits themselves, clarify magnitude
      {
        d1 = digit(*++p1); d2 = digit(*++p2); //First string to run out of digits first
        if (d1 != d2) { return(d1-d2); } //Will lose right here
      } //If loop breaks, both strings are out of digits, the difference found above will be handled below
    }
    if (NOT_EQUAL(*s1, *s2)) { break; } //In all other cases, fall through with difference in current position - if any
  }
  return(*s1-*s2);
}
Modify as necessary for Unicode, and replace NOT_EQUAL() with your function (or macro) of choice for comparing two letters (accounting for case differences if you want).
As for the sort functions themselves, it will depend on which data structure you chose. You'll probably want Quicksort for the vector. Make sure your Quicksort sorts the pointers to the data and not the data itself! The same goes for any other sorting algorithm used here; swapping pointers is quick and easy. For the latter structure, you might want Merge sort, or perhaps Smoothsort. What you choose can depend on several factors.
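As a sketch, wiring the earlier name_cmp() into the standard qsort() so that only pointers get swapped:

#include <stdlib.h>

static int name_cmp_qsort(const void *pa, const void *pb)
{
  return(name_cmp(*(const unsigned char * const *)pa, *(const unsigned char * const *)pb)); //Dereference to the entries, swap only pointers
}

//Usage: qsort(list.names, list.count, sizeof(*list.names), name_cmp_qsort);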
You may or may not want to utilize threading. Depending on how you filter, the OS may be giving you mostly sorted file names, which can also play a role in the algorithm you choose. In the case of the linked list of arrays, you might want to sort each array with one algorithm, then sort the whole with another. If you're utilizing threading, you can begin sorting the array(s) while you're still obtaining files from the operating system, as disk reads are slower than memory reads (when the current directory's contents are not (yet) cached by the OS).
Display directly
This point is rather simple. Have your file dialog display entries directly from whichever data structure you chose, without copying anything around. Know that most existing file dialog routines provided by various libraries will not be compatible with the data structures described above. So roll your own, or choose structures which can be passed directly to your library.
Another possibility is to copy entries only as they fit in the dialog window. The dialog can't show an infinite number of entries at once, so convert from your structure to its structure just for the files currently being viewed, and update as scrolling is performed.
Search efficiently
Many file dialogs provide the option to type in some characters, whereupon the file dialog jumps to the first file that begins with those characters. Since your file list is sorted, use binary search. Some people don't realize this, but binary search can find more than just an exact match: it can also find the transition from entries "less than" the search term to the first match itself. Use this to find the first matching file in O(log(N)).
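A sketch of that "first match" binary search over the sorted pointer array from earlier; it returns the index of the first entry greater than or equal to the typed prefix, and the caller then checks that entry actually begins with the prefix:

#include <string.h>

static size_t find_first_prefix(const unsigned char * const *names, size_t count, const char *prefix, unsigned char plen)
{
  size_t lo = 0, hi = count;
  while (lo < hi)
  {
    size_t mid = lo + ((hi - lo) / 2);
    const unsigned char *e = names[mid]; //Length-prefixed entry
    int r = memcmp(e + 1, prefix, (e[0] < plen) ? e[0] : plen);
    if ((r < 0) || (!r && (e[0] < plen))) { lo = mid + 1; } //Entry sorts before the prefix
    else { hi = mid; } //Entry >= prefix, keep looking left for the transition
  }
  return(lo); //First entry >= prefix, or count if there is none
}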
This basically wraps up this discussion on how to optimize file dialogs. I once was annoyed that loading up a directory with 20,000 files in it on a particular OS with a particular file browser took a good torturesome 20-30 seconds. Now using my own, I can load a directory with 100,000 files in 1-2 seconds on the same setup.
Thoughts, improvements, complaints, and every other kind of comment are welcome. Be sure to comment if you take any of these ideas to optimize your library or application. I'd love to hear about it.