Saturday, February 15, 2014

HTTP 308 Incompetence Expected

Internet History

The Internet, from every angle, has always been a house of cards held together with defective duct tape. It's a miracle that anything works at all. Those who understand the technology involved generally hate it, but at the same time are astounded that, for end users, things usually seem to work rather well.

Today I'm going to point out some proposed changes being made to HTTP, the standard which the World Wide Web runs on. We'll see how not even the people behind the standards really know what they're doing anymore.

The World Wide Web began in the early 90s in a state of flux. The Internet Engineering Task Force, as well as major players like Netscape, released a bunch of quasi-standards to quickly build up a set of design rules and techniques used until HTTP v1.0 came out in 1995. Almost immediately after, HTTP v1.1 was being worked on, and despite not being standardized until 1999, it was pretty well supported by 1996. This is around the same time Internet Explorer started development, and a lot of Microsoft's initial work was basically duplicating functionality and mechanics from Netscape Navigator.

Despite standards and sane ways of doing things, implementers always deviate from them, or come up with incorrect alternatives. Misunderstandings, and individual ideas about how things should work, are what shaped things in the early days.

Thankfully though, over the years, standards online have finally come to be more strictly adhered to, and bugs are being fixed. Precise specifications exist for many things, as well as unit tests to ensure adherence to standards. Things like Internet Explorer 6 are now a distant memory for most (unless you're in China).

Existing Practice

A key point which led to many standards coming into existence was existing practice. Some browser or server would invent something, the others would jump on board, and a standard would be created. Those who deviated were told to fix their implementation to match either the majority, or whatever was correct and would cause the fewest issues for the long-term stability of the World Wide Web.

Now we'll see how today's engineers want to throw existing practice out the window, loosen up standards to the point of meaninglessness, and basically bust the technology you're currently using to view this article.

HTTP Responses

One of the central design structures of HTTP is that every response from a server carries a status code identifying the result, and servers and clients should understand how to work with each particular response. The more precise the definition, the better an online experience we'll all have.

HTTP v0.9 was in a constant state of fluctuation, but offered three basic kinds of page redirects: permanent, temporary, and one which wasn't fully specified and remained unclear. These were defined as status codes 301, 302, and 303 respectively:

Moved 301: The data requested has been assigned a new URI, the change is permanent.
Found 302: The data requested actually resides under a different URL, however, the redirection may be altered on occasion.
Method 303: Note: This status code is to be specified in more detail. For the moment it is for discussion only. 
Like the found response, this suggests that the client go try another network address. In this case, a different method may be used.

The explanation behind a permanent and a temporary redirect seems pretty straightforward. 303 is less clear, although it's the only one which mentions that the method used is allowed to change; that's even the name associated with the response code.

Several HTTP methods exist, for different kinds of activities. GET is a method to say, hey, I want a page. POST is a method to say, hey here's some data from me, like my name and my credit card number, go do something with it.

The idea with the different redirects essentially was that 303 should embody your request was processed, please move on (hence a POST request should now become a GET request), whereas 301 and 302 were to say what you need to do is elsewhere (permanently or temporarily), please take your business there (POST should remain POST).

In any case, the text here was not as clear as it could be, and developers were doing all kinds of things in general. HTTP v1.0 came out to set the record straight.

301 Moved Permanently

   The requested resource has been assigned a new permanent URL and
   any future references to this resource should be done using that
   URL. Clients with link editing capabilities should automatically
   relink references to the Request-URI to the new reference returned
   by the server, where possible. 
       Note: When automatically redirecting a POST request after
       receiving a 301 status code, some existing user agents will
       erroneously change it into a GET request. 
302 Moved Temporarily

   The requested resource resides temporarily under a different URL.
   Since the redirection may be altered on occasion, the client should
   continue to use the Request-URI for future requests.
       Note: When automatically redirecting a POST request after
       receiving a 302 status code, some existing user agents will
       erroneously change it into a GET request. 
HTTP v1.0, however, did not define 303 at all. Some developers, not understanding what a temporary redirect is supposed to be, thought it meant hey, this is processed, now move on, but if you need something similar in the future, come here again. We can hardly blame developers at that point for misusing 302 and wanting 303 semantics.

HTTP v1.1 decided to rectify this problem once and for all. 302 was renamed to Found and a new note was added:

      Note: RFC 1945 and RFC 2068 specify that the client is not allowed
      to change the method on the redirected request.  However, most
      existing user agent implementations treat 302 as if it were a 303
      response, performing a GET on the Location field-value regardless
      of the original request method. The status codes 303 and 307 have
      been added for servers that wish to make unambiguously clear which
      kind of reaction is expected of the client.

Since 302 was being used in two different ways, two new codes were created, one for each technique, to ensure proper use in the future. 302 retained its definition, but with so many incorrect implementations out there, 302 should essentially never be used if you want to ensure correct semantics are followed; instead use 303 - See Other (processed, move on...), or 307 - Temporary Redirect (the real version of 302).

In all my experience working with HTTP over the past decade, I've found 301, 303, and 307 to be implemented and used correctly as defined in HTTP v1.1, with 302 still being used incorrectly as a 303 (instead of with 307 semantics), generally by PHP programmers. But as above, never use 302, as who knows what the browser will do with it.

Since existing practice today is that 301, 303, and 307 are used correctly pretty much everywhere, if someone misuses one, they should be told to correct their usage or handling. 302 is still so misused to this day that it's a lost cause.
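To make the HTTP v1.1 semantics concrete, here's a minimal sketch of the redirect rules as described above; the function name and shape are hypothetical, not from any real HTTP library:

```cpp
#include <string>

//Given a redirect status code and the original request method, return the
//method a conforming HTTP v1.1 client should use on the follow-up request.
std::string redirected_method(int status, const std::string &method)
{
  switch (status)
  {
    case 303: return "GET"; //See Other: processed, move on with a GET
    case 301: //Moved Permanently: the method must be preserved
    case 307: //Temporary Redirect: the method must be preserved
      return method;
    case 302: //By spec the method is preserved, though many agents wrongly switch to GET
      return method;
    default:
      return method;
  }
}
```

So a POST redirected with 303 becomes a GET, while a POST redirected with 301 or 307 stays a POST.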

HTTP2 Responses

Now, in their infinite wisdom, the new HTTP2 team has decided to create problems. The 301 status definition now brilliantly includes the following:

      Note: For historical reasons, a user agent MAY change the request
      method from POST to GET for the subsequent request.  If this
      behavior is undesired, the 307 (Temporary Redirect) status code
      can be used instead.

Let me get this straight, you're now taking a situation which hasn't been a problem for over a decade now, and asking it to begin happening anew by now allowing 301 to act as a 303???

If you don't think that paragraph above was problematic, wait till you see this one:

 |                                           | Permanent | Temporary |
 | Allows changing the request method from   | 301       | 302       |
 | POST to GET                               |           |           |
 | Does not allow changing the request       | -         | 307       |
 | method from POST to GET                   |           |           |

301 is allowed to change the request method? Excuse me, I have to go vomit.

It was clear in the past that 301 was not allowed to change its method. But now, I don't even understand what this 301 is supposed to mean anymore. So I should permanently be using the new URI for GET requests. Where do my POSTs go? Are they processed? What the heck am I looking at?

To add insult to injury, they're adding the new 308 Permanent Redirect as the I really really mean I want true 301 semantics this time. So now you can use a new status code which older browsers won't know what to do with, or the old status code that you're now allowing new browsers to utterly butcher for reasons I cannot fathom.

Here's how the status codes work with HTTP 1.1:

| Code | Meaning                             | Duration  | Method Change   |
| 301  | Permanent Redirect.                 | Permanent | No              |
| 302  | Temporary Redirect, misused often.  | Temporary | Only by mistake |
| 303  | Process and move on.                | Temporary | Yes             |
| 307  | The true 302!                       | Temporary | No              |
| 308  | Resume Incomplete, see below.       | Temporary | No              |

So here's how the status codes will work now with the HTTP2 updates:

| Code | Meaning                      | Duration  | Method Change |
| 301  | Who the heck knows.          | Permanent | Surprise Me   |
| 302  | Who the heck knows.          | Temporary | Surprise Me   |
| 303  | Process and move on.         | Temporary | Yes           |
| 307  | The true 302!                | Temporary | No            |
| 308  | The true 301!                | Permanent | No            |

And here's how one will have to do a permanent redirect in the future:

 | Code | Older Browsers | Newer Browsers |
 | 301  | Correct.       | Who Knows?     |
 | 308  | Broken!!!      | Correct.       |

This is how they want to alter things. Does this seem like a sane design to you?

If the new design decision of the HTTP2 team is to capitulate to rare mistakes made out there, where does it stop? I can see some newbie developers reading about how 307 and 308 are for redirects, misunderstanding them, and then misusing them too. So in five years we'll have 309 and 310 as we really really really mean it this time? The approach the HTTP2 team is taking is absurd. If you're going to invent new status codes each time you find an isolated instance of someone misusing one, where does it end?

HTTP 308 is already taken!

One last point. Remember how earlier I mentioned that a key point for the design of the Internet is to work with existing practice? 308 is in fact already used by something else: Resume Incomplete, for resumable uploading. Which is used by Google, king of the Internet, and many others.


I'm now dubbing HTTP 308 as Incompetence Expected, as that's clearly the only meaning it has. Or maybe that should be the official name for HTTP2 and the team behind it, I'll let you decide.

Thanks to those who read this article and sent in images. I added them where appropriate.

Tuesday, April 2, 2013

Designing C++ functions to write/save to any storage mechanism


A common issue when dealing with a custom object or any kind of data is creating some sort of save functionality for it, perhaps writing some text or binary data to a file. So what is the correct C++ method to allow an object to save its data anywhere?

An initial approach to allow some custom object to be able to save its data to a file is to create a member function like so:
void save(const char *filename);
While this is perfectly reasonable, what if I want something more advanced than that? Say I don't want the data to be saved as its own separate file, but would rather the data be written to some file that is already open, to a particular location within it? What if I'd rather save the data to a database? How about send the data over the network?

Naive Approach

When C++ programmers hear the initial set of requirements, they generally look to one of two solutions:

The first is to allow for a save function which can take an std::ostream, like so:
void save(std::ostream &stream);
C++ out of the box offers std::cout as an instance of std::ostream which writes to the screen. C++ offers the derived class std::ofstream (std::fstream) which can save to files on disk. C++ also offers the derived class std::ostringstream which saves data to a C++ string.

With these options, you can display the data on the screen, save it to an actual file, or save it to a string, which you can then in turn save it wherever you want.

The next option programmers look to is to overload std::basic_ostream::operator<< for the custom object. This way one can simply write:
mystream << myobject;
And then the object can be written to any C++ stream.

Either of these techniques pretty much works, but both can be a bit annoying when you want a lot of flexibility and performance.

Say I wanted to save my object over the network, what do I do? I could save it to a string stream, grab the string, and then send that over the network, even though that seems a bit wasteful.

And for a similar case, say I have an already open file descriptor, and wish to save my object to it, do I also use a string stream as an intermediary?

Since C++ is extensible, one could actually create their own std::basic_streambuf derived class which works with file descriptors, and attach it to an std::ostream, which can then be used with anything that works with a stream for output. I'm not going to go into the details of how to do that here, but The C++ Standard Library explains the general idea, provides a working file descriptor streambuf example, and shows how to use it with stream functions. You can also find some ready-made implementations online with a bit of searching, and some compilers may even include a solution out of the box in their C++ extensions.
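For a taste of what such a class involves, here's a minimal unbuffered sketch (the class name is mine; a production version would add buffering and proper handling of partial writes and EINTR, as the book describes):

```cpp
#include <ostream>
#include <streambuf>
#include <unistd.h>

//Minimal unbuffered streambuf writing to a POSIX file descriptor.
class fd_streambuf : public std::streambuf
{
  int fd;
protected:
  //Bulk writes go straight to the descriptor.
  std::streamsize xsputn(const char *data, std::streamsize length) override
  {
    return ::write(fd, data, length);
  }
  //Single-character writes also go straight to the descriptor.
  int_type overflow(int_type c) override
  {
    if (c != traits_type::eof())
    {
      char ch = traits_type::to_char_type(c);
      if (::write(fd, &ch, 1) != 1) { return traits_type::eof(); }
    }
    return c;
  }
public:
  fd_streambuf(int fd) : fd(fd) {}
};
```

Attaching it to a stream is then just: fd_streambuf buf(1); std::ostream out(&buf); out << "Hello";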

On UNIX systems, once you have a stream which works with file descriptors, you can now send data over the network, as sockets themselves are file descriptors. On Windows, you'll need a separate class which works with SOCKETs. Of course to turn a file descriptor streambuf into a SOCKET streambuf is trivial, and can probably be done with a few well crafted search and replace commands.

Now this may have solved the extra string overhead for file descriptors and networking, but what if I want to save to a database? What if I'm working with C's FILE *? Does one now have to implement a new wrapper for each of these (or pray the compiler offers an extension, or that one can be found online)? The C++ stream library is actually a bit bloaty, and creating your own streambufs is somewhat annoying, especially if you want to do it right and allow for buffering. Much of the stream-related library code you find online is also of poor quality. Surely there must be a better option, right?


If we look back at how C handles this problem, it uses function pointers: the function doing the writing receives a callback to use for the actual writing, and the programmer using it can make the writing go anywhere. C++ of course includes this ability, and takes it much further in the form of function objects, and further still in C++ 2011.

Let's start with an example.
template<typename WriteFunction>
void world(WriteFunction func)
{
  //Do some stuff...
  //Do some more stuff...
  func("World", 5); //Write 5 characters via callback
  //Do some more stuff...
  unsigned char *data = ...;
  func(data, data_size); //Write some bytes
}
The template function above is expecting any function pointer which can be used to write data by passing it a pointer and a length. A proper signature would be something like the following:
void func(const void *data, size_t length);
Creating such a function is trivial. However, to be useful, writing needs to also include a destination of some sort, a device, a file, a database row, and so on, which makes function objects more powerful.
#include <cstdio>

class writer_file
{
  std::FILE *handle;
public:
  writer_file(std::FILE *handle) : handle(handle) {}
  inline void operator()(const void *data, size_t length)
  {
    std::fwrite(data, 1, length, handle);
  }
};
Which can be used as follows:
world(writer_file(stdout));
Or perhaps:
std::FILE *fp = fopen("somefile.bin", "wb");
world(writer_file(fp));
As can be seen, our World function can write to any FILE *.

To allow any char-based stream to be written, the following function object will do the trick:
#include <ostream>

class writer_stream
{
  std::ostream *handle;
public:
  writer_stream(std::ostream &handle) : handle(&handle) {}
  inline void operator()(const void *data, size_t length)
  {
    handle->write(reinterpret_cast<const char *>(data), length);
  }
};
You can call this with:
world(writer_stream(std::cout));
Or anything in the ostream family.

If for some reason we wanted to write to strings, it's easy to create a function object for them too, and we can use the string directly without involving a string stream.
#include <string>

class writer_string
{
  std::string *handle;
public:
  writer_string(std::string &handle) : handle(&handle) {}
  inline void operator()(const void *data, size_t length)
  {
    handle->append(reinterpret_cast<const char *>(data), length);
  }
};
If you're worried about function objects being slow, then don't be. Passing a function object like this to a template function has no overhead. The compiler is able to see a series of direct calls, and throws all the extraneous details away. It is as if the body of World is calling the write function on the handle passed to it directly. For more information, see Effective STL Item 46.

If you're wondering why developers forgo function pointers and function objects for situations like this, it is because C++ offers so much with its stream classes, which are also very extensible (and often extended), that they completely forget there are other options. The stream classes are also designed for formatting output and working with all kinds of special objects. But if you just need raw writing or saving of data, the stream classes are overkill.

C++ 2011

Now C++ 2011 extends all this further in multiple ways.


First of all, C++ 2011 offers std::bind(), which allows for creating function object adapters on the fly. std::bind() can take an unlimited number of parameters. The first must be a function pointer of some sort; the next is, in the case of a member function pointer, the object to work on; followed by the parameters to the function. These parameters can be hardcoded by the caller, or bound via placeholders to be filled in by the callee.

Here's how you would use std::bind() for using fwrite():
#include <functional>
world(std::bind(std::fwrite, std::placeholders::_1, 1, std::placeholders::_2, stdout));
Let us understand what is happening here. The function being called is std::fwrite(), which has 4 parameters. Its first parameter is the first parameter supplied by the callee, denoted by std::placeholders::_1. The second parameter is hardcoded to 1 by the caller. The third parameter is the second parameter from the callee, denoted by std::placeholders::_2. The fourth parameter is hardcoded by the caller to stdout; it could be set to any FILE * as needed by the caller.

Now we'll see how this works with objects. To use with a stream, the basic approach is as follows:
world(std::bind(&std::ostream::write, &std::cout, std::placeholders::_1, std::placeholders::_2));
Note how we're turning a member function into a pointer, and we're also turning cout into a pointer so it can be passed as std::ostream::write's this pointer. The callee will pass its first and second parameters as the parameters to the stream write function. However, the above has a slight flaw: it will only work if writing is done with char * data. We can solve that with casting.
world(std::bind(reinterpret_cast<void (std::ostream::*)(const void *, size_t)>(&std::ostream::write), &std::cout, std::placeholders::_1, std::placeholders::_2));
Take a moment to notice that we're not just casting it to the needed function pointer, but as a member function pointer of std::ostream.

You might find doing this a bit more comfortable than using classical function objects. However, function objects still have their place, wherever functions do. Remember, functions are about re-usability, and some scenarios are complicated enough that you want to pull out a full blown function object.

For working with file descriptors, you might be tempted to do the following:
world(std::bind(::write, 1, std::placeholders::_1, std::placeholders::_2));
This will have World write to file descriptor 1 - generally standard output. However, this simple design is a mistake. write() can be interrupted by signals and needs to be resumed manually (by default, except on Solaris), among other issues, especially if the file descriptor is some kind of pipe or socket. A proper write would be along the following lines:
#include <cerrno>
#include <system_error>
#include <unistd.h>

class writer_fd
{
  int handle;
public:
  writer_fd(int handle) : handle(handle) {}
  inline void operator()(const void *data, size_t length)
  {
    while (length)
    {
      ssize_t r = ::write(handle, data, length);
      if (r > 0) { data = static_cast<const char *>(data)+r; length -= r; }
      else if (!r) { break; }
      else if (errno != EINTR) { throw std::system_error(errno, std::system_category()); }
    }
  }
};

Lambda Functions

Now you might be wondering why C++ 2011 stopped with std::bind(). What if the function body needs more than just a single function call that can be wrapped up in an adapter? That's where lambda functions come in.
world([&](const void *data, size_t length){ std::fwrite(data, 1, length, stdout); });
world([&](const void *data, size_t length){ std::cout.write(static_cast<const char *>(data), length); });
Note the ridiculous syntax. The [](){} combination signifies we are working with a lambda function. The [] holds the capture specification; here & means the function operates fully within its parent scope, with direct access to all its data by reference. The rest you should already be well familiar with. You can change the stdout or the cout in the body of the lambda function to use your FILE * or ostream as necessary.

Let us look at an example of having our World function write directly to a buffer.
#include <cstring>

void *p = ...; //Point p at some buffer which has enough room to hold the contents needed to be written to it.
world([&](const void *data, size_t length){ std::memcpy(p, data, length); p = static_cast<char *>(p) + length; });
There's a very important point in this example. There is a pointer which is initialized to where writing should begin. Every time data is written, the pointer is incremented. This ensures that if World calls the passed write function multiple times, it will continue to work correctly. This was not needed for files above, as their write pointer increments automatically, or with std::string, where append always writes to the end, wherever it now is.

Be careful writing like this though: you must ensure in advance that your buffer is large enough, perhaps because your object has a way of reporting how much data the next call to its save or write function will generate. If it doesn't and you're winging it, something like the following is in order:
#include <cstring>
#include <stdexcept>

class writer_buffer
{
  void *handle, *limit;
public:
  writer_buffer(void *handle, size_t limit) : handle(handle), limit(static_cast<char *>(handle)+limit) {}
  inline void operator()(const void *data, size_t length)
  {
    if ((static_cast<char *>(handle) + length) > limit) { throw std::out_of_range("writer_buffer"); }
    std::memcpy(handle, data, length);
    handle = static_cast<char *>(handle) + length;
  }
};
You can use it as follows:
#include <cstdlib>

size_t amount = 1024; //A nice number!
void *buffer = std::malloc(amount);
world(writer_buffer(buffer, amount));
Now an exception will be thrown if the callee tries to write more data than it should.


Lastly, C++ 2011 added the ability for more explicit type checking on function objects, and the ability to create the save/write function as a normal function as opposed to a template function. That ability comes from a general reusable function object facade: std::function.

To rewrite World to use it, we'd do as follows:
void world(std::function<void (const void *, size_t)> func)
{
  //Do some stuff...
  //Do some more stuff...
  func("World", 5); //Write 5 characters via callback
  //Do some more stuff...
  unsigned char *data = ...;
  func(data, data_size); //Write some bytes
}
With std::function, the type is now made explicit instead of being a template parameter. It is anything which receives any kind of buffer and its length, and returns nothing. This can ensure that callers will always use a compatible function as intended by the library designer. For example, in our case, the caller only needs to ensure that data can be passed via a char * and an unsigned char *, based on how World uses the callback function. If World were now modified to also output an int *, less capable callers would break. std::function can ensure that things are designed properly up front. With std::function, you can now also restructure your code to place various components in different compilation units if you so desire, although perhaps at a performance penalty.
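As a self-contained sketch of that flexibility (the function is renamed world_fn here so it stands alone), the same std::function parameter accepts a lambda, a std::bind() result, or any of the function objects above:

```cpp
#include <cstddef>
#include <functional>
#include <string>

//Hypothetical stand-alone version of World using std::function; any
//callable matching the signature can be passed in.
void world_fn(std::function<void (const void *, size_t)> func)
{
  func("World", 5); //Write 5 characters via callback
}
```

Calling world_fn with a lambda that appends to a std::string, for example, collects the written bytes into the string.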


To wrap up, you should now understand some features of C++ that are not so commonly used, as well as some new features of C++ 2011 that you may not have been familiar with. You should also have some ideas about generic code which should help you improve the code you write.

Many examples above were given with only one methodology, although they can be implemented with some of the others. For practice, try doing so yourself. Also try applying these ideas to other kinds of storage mechanisms not covered here; doing so should now be rather trivial for you.

Remember, while this was done with a few standard examples and for writing, it can be extended to all handles Win32 offers, or for reading, or for anything else.

Tuesday, March 19, 2013

OAuth - A great way to cripple your API


A few years ago, the big social networking sites were looking for a secure way to allow their users to safely use any and all untrusted software to access their own personal accounts - so that a user could have their account on one social networking site safely interact with their account on another social networking site, and so that users could allow their accounts to interact safely with various untrusted web applications and web sites.

In order to safely allow untrusted software to access a user's account, a few points needed to be kept in mind.
  • Untrusted software should not have access to a user's credentials; that way, if the software is compromised, no passwords can be stolen, as users' passwords are never stored by the software.
  • Untrusted software should not have full access to a user's account, but only limited access as defined by the user. Giving the software the user's personal credentials, by contrast, would allow unlimited access to the account, which could be used maliciously.
  • A user should be able to revoke the permission they granted to particular untrusted software, in case it turns malicious, despite the limited access it has.
The solution developed for this use-case was OAuth, and this remains OAuth's primary use-case.


OAuth is generally implemented in a fashion where the untrusted software is accessed by a user via a standard web browser. In order for a user to authorize that software, the software redirects the user's browser to a page on the social networking site, with a couple of parameters sent to it. Now that the user is on the social networking site, the untrusted software no longer has control over what the user is doing, as the user has left the untrusted software, allowing the user to safely log in to their social networking account. The social networking site then sees the parameters it was passed, and asks the user whether they want to authorize the software in question, and what kind of access to their account it should be given.

If and when the user has authorized the untrusted software, the social networking site can then report back to the untrusted software that it was granted a certain amount of access, and give it some credentials to use. These credentials are unique to the user and the software in question. The social networking site allows for a user to later on see a list of software authorized, and revoke the unique credentials given to any one of them, or modify the amount of access a particular set of credentials has.


There is no standard

Now above, I said OAuth is generally implemented in this fashion. I say generally because, unlike the standard HTTP authentication schemes (Basic, Digest, and others), OAuth is a big grab bag of ideas which can be mixed and matched in an infinite number of ways, and which also allows developers to make their own unique tweaks to their personal implementations.

With the standard HTTP authentication schemes, once a developer knows the protocol and has implemented it with one web site, that exact same knowledge can be reused to log into any other web site that supports the standard. Likewise, software libraries can be made to handle HTTP authentication, and all a third party developer needs to do is specify to the library which credentials should be used, and then everything works as expected.

With OAuth, once a developer learns how to authenticate with it against one web site, it is likely that the same developer will need to relearn how to connect to every other web site using OAuth. This further means that every web site which supports OAuth needs to document exactly how it is implementing it, and what tweaks are in use. It also means that no library can be written which simply supports every OAuth implementation out there. This places a great burden on developers on both sides. It can also greatly increase frustration for less able users, when their favorite library works great with one site, but they are unable to extend their software to another site, because the library lacks some aspect of OAuth needed for the new site, or a unique OAuth tweak on the new site renders the library incompatible.

Here are some choice quotes from the official RFC:
  • However, as a rich and highly extensible framework with many optional components, on its own, this specification is likely to produce a wide range of non-interoperable implementations.
  • This framework was designed with the clear expectation that future work will define prescriptive profiles and extensions necessary to achieve full web-scale interoperability.

Another issue is that while the various standard HTTP authentication schemes have well-understood security margins, OAuth's are entirely variable. On one web site, the OAuth implementation may be secure, while on another, the implementation may be Swiss cheese. Even though a security consultant should generally look over any authorization implementation to ensure mistakes were not made, laymen can have a good understanding of how reliable and secure a standardized authentication scheme is. A manager can ask their developers if some bullet points were adhered to, and then be reasonably confident in their security. Whereas with OAuth, the entire (complex) implementation needs to be reviewed from top to bottom by a top security professional to ensure it is secure. A manager has no adequate set of bullet points to discuss with their developers, as unique implementation details will drastically change the applicability of various points, with many important details still missing. At best, a manager can only get a false sense of security when OAuth is in use. The designers of OAuth respond to this point with a 71-page document of security issues that need to be dealt with!

What all this boils down to is that OAuth is really just a set of ideas and recommendations on how to implement a unique authorization scheme. OAuth does not provide the interoperability that standards (usually) guarantee. Essentially, OAuth ensures that API authentication with your web site will be confusing, and will only work for the exact social networking use-case described above.

Crippling Design - APUI

The common OAuth use-case described above is to allow for a user on one web site to allow their software to communicate for them with another web site. The workflow described above requires that a user navigate between web sites using their browser and manually enter their credentials and authorization to various aspects of their account as part of the overall software authorization process.

This means that OAuth (at least how it's normally implemented) only works between two web sites, with individual users, and that user intervention is required in order for software to authenticate with another web site. The fact that user intervention is required with manual input means that any API behind OAuth is not an API - Application Programming Interface, but an APUI - Application Programming User Interface, meaning user intervention is required for functionality. All in all, this cripples your API, or APUI as it should now properly be called.

Let us now focus on what cannot be properly done with OAuth in place:
  • Third party software which is not part of a web site cannot interact with an OAuth APUI.
  • One user in third party software cannot act on behalf of another user via an OAuth APUI.
  • Third party software cannot run automated processes on an OAuth APUI.
  • Organizations cannot ensure tight integration between various software and services they use.

Let us review an example case where a company launches a new online service for coordinating schedules and personal calendars between people. We'll call this new online service Calendar. The developers of Calendar create an APUI for it using OAuth, so third party software can integrate with Calendar.

As described above, a user needs to navigate between one web site and another in order to authorize software, with the two web sites sending each other information. What if one wants to integrate something which isn't a web site with Calendar? There are tons of desktop calendaring applications: Microsoft Outlook, Mozilla Lightning, Evolution, KOrganizer, Kontact, and more. If their developers want to integrate with the Calendar service, they need to embed a web browser in their applications now?

Furthermore, that may not even help. If the OAuth workflow requires information to be sent to an existing web site, what web site is associated with a user's personal copy of the software? Developers of that software are unlikely to create a web site with user accounts just for this integration. Even if they did, if the site goes down (temporarily), the integration stops working, even though, in theory, there should be no dependence on some extra web site sitting between Calendar and the software trying to integrate with it.


Also, as a security consideration, if an application does embed a web browser in it so it can authenticate with OAuth, the first requirement is no longer met - Untrusted software should not have access to a user's credentials. When using a standard web browser, once a user leaves the third party software's site and redirects to Calendar's site, the third party software cannot steal the information the user is entering. But when the third party software itself is the one which browses to Calendar's site, then it has access to everything the user is entering, including passwords.

Actually, this attack works on OAuth in every circumstance, including web site to web site, and I've used it in practice. A web site can embed a web browser via a Java Applet or similar, or run a web browser server side which presents the OAuth log in page to the user, slightly modified so that all the data entered passes through the third party site. Therefore OAuth doesn't even fulfill its own primary security objective!

Incompatible with Enterprise

Next, at the enterprise level, OAuth becomes much worse. Say a boss has his secretary manage his calendar for him, which is a very common scenario. In many OAuth setups, he cannot enter his Calendar credentials into the company-wide calendaring software running on a secure company server, which the secretary can then access. Rather, he would need to enter them directly in the browser the secretary uses, and stay there checking off many options. OAuth implementations also commonly use security tokens which expire, meaning the boss will need to keep reentering his Calendar credentials again and again. Most bosses will just get fed up and hand over their credentials, especially if they're not physically near their secretary. It is also likely that the secretary will then write the password down. This gets compounded if multiple secretaries manage the schedule.

Now say an organization has software for which it would like to run background processes for all its users with Calendar. Perhaps every week it would like to automatically analyze the Calendar accounts of all department chiefs to find a good time for a weekly meeting. How would this work exactly? Every department chief would now have to go and enter their credentials into this software? Then do it again each time security tokens expire from past authentications?

Of course this is only the beginning. Since OAuth was designed for the likes of Twitter and Facebook which cater to individual personal accounts, the common implementations do not allow for hierarchical permissions as enterprise organizations need.

In enterprise infrastructure, software is already in use which has well defined roles and access rights for every single user. This infrastructure should be deciding who can do what, and to what extent one user can act on behalf of another user. Once an organization purchases accounts for all their employees from Calendar, no employee should be able to turn off or limit what the enterprise management software can do with their Calendar account. The enterprise management software is the one which needs to make such decisions. This flies in the face of "Untrusted software should not have full access to a user's account, but only limited access as defined by the user" and "A user should be able to revoke the permission they granted to allow particular untrusted software to work."

The amount of work involved to tightly integrate Calendar with existing infrastructure is also enormous from a user perspective. For every single account, a user now has to navigate across web pages and check all kinds of boxes until all the users are integrated. OAuth implementations generally forget to provide administrator accounts which can do everything on behalf of the other users in their organization.

It should be obvious at this point that enterprise organizations will be reluctant to purchase accounts for their entire staff with Calendar, when integration into their existing infrastructure or new endeavors with the service will be difficult if not outright impossible. Imagine how much revenue Calendar now stands to lose by being unattractive to enterprise clients.

Repel third party developers

OAuth implementations also generally require that any software which uses their APUIs be preregistered in advance. This places an extra burden on third party developers. OAuth implementations also commonly require that the integrating third party web site be preregistered in advance, so you can say goodbye to your application being compatible with staging servers which may be at another URL. It also makes software much less attractive when one party sells software to another, as every new client now needs to go through an application registration process for the URLs where they plan to set up the software they are purchasing. Therefore third party developers are less likely to be interested in selling software which integrates with Calendar, as there's hassle involved with every single sale.


In summary:
  • OAuth is not a standard, but a set of ideas, which does not allow third party developers to reuse knowledge, and places extra documentation burden on implementers.
  • OAuth as a whole has undefined security characteristics.
  • OAuth doesn't properly fulfill its primary security objectives.
  • OAuth doesn't work well outside social networking web site use-cases.
  • OAuth services are unattractive to enterprise organizations looking to integrate such services into their infrastructure.
  • OAuth services are less likely to have professional third parties sell software based upon them.


If you're looking to implement authorization for your API, I recommend sticking with well understood secure designs, such as HTTP Basic Authentication over SSL/TLS (or HTTP Digest Authentication).

In order to achieve a situation where users can securely authorize third party software, without giving over their personal credentials (passwords), I recommend that these services have a page where they can generate new credentials (keys) which the user can copy and paste. They can then name these keys themselves (avoiding application registration hassle), and set permissions upon them themselves. Since the user is the one initiating the key creation, and copying and pasting it themselves, they cannot fall prey to a man-in-the-middle attack where the third party software initiates the authorization process.

But remember the use-cases described here, and ensure that organizations have a way to access all user accounts company-wide, and without individual users being able to disable or limit that access.


If you want your service to be used by everyone out there, be well supported by third parties, and to have them create all kinds of interesting software with it, do not use OAuth.

Even the original social networking sites behind OAuth decided they really need other options for different use-cases, such as Twitter's xAuth, or Yahoo offering Direct OAuth, which turns the entire scheme into a more complicated version of HTTP Basic Authentication, with no added benefits. Perhaps the most damaging point against OAuth is that the original designer behind it decided to remove his name from the specification, and has washed his hands of it.

I find it amazing how blind many big players are these days to all the problems with OAuth. When I first heard that IBM, a major enterprise player, started offering services only accessible with typical, utterly crippled OAuth, I was overcome with disbelief. Yet at the same time, I hear that they're wondering why they're not seeing the sales they used to with other services they offer, or compared to the competition.

I'm even more amazed to see other big companies throwing away their currently working authentication systems for OAuth. Followed by them wondering why many third party developers are not upgrading to support the new authentication scheme, and clients jumping ship to inferior services.

It seems many developers and managers out there simply don't get it. If you know anyone like that, show them this article.

Saturday, December 1, 2012

Debian breaks OSS4

Several people have contacted me to tell me that the latest version of OSS4 in Debian Unstable 4.2-build2007-1+nmu1 introduces several audio issues such as garbled sound or kernel panics. I can confirm I have issues as well with this version.

Downgrading to 4.2-build2007-1 fixes the problem.

I recommend putting the following in /etc/apt/preferences:

Package: oss4-base oss4-dev oss4-dkms oss4-gtk
Pin: version 4.2-build2007-1+nmu1
Pin-Priority: -1
This will prevent Debian from trying to upgrade to that version.

If you already accidentally upgraded to it, older versions are still available.

Thursday, July 19, 2012

Creating portable Linux binaries

For some, the idea of creating a portable Linux binary is somewhat elusive.

In this article, we will be discussing how to create a Linux binary for a specific architecture, that you will have great success running on a large variety of Linux distros. This includes current releases, somewhat old ones, and hopefully far into the future.

A common problem facing those looking to deploy proprietary software on Linux, or for those trying to supply binaries to a very large user-base which will not compile your software themselves, is how to offer one binary that fits most normal scenarios.

There are generally four naive approaches to solving this problem.

  1. The developers set up a bunch of specific distros, and compile the software on each of them, and give out distro specific binaries. This makes sense at first, till you run into some trouble.
    • You have to juggle many live CDs or maintain a bunch of installed distros which is painful and time consuming.
    • You end up only offering support for the distros you have handy, and you will get quite a few users on a more exotic distro nagging you for support, or a different and incompatible version of a distro you're already supporting.
    • The compilers or other build utilities on some distros are too old for your modern software, and you need to build them elsewhere, or figure out how to back port modern software to that old distro.
    • Builds break when a user upgrades their system.
    • Users end up needing to install some non standard system libraries, increasing everyone's frustration.
  2. The developers just statically link the binaries. This isn't always a legal option due to some licenses that may be involved. Binaries which are fully statically linked also in many instances exhibit incorrect behavior (more on this later).
  3. The developers just compile the software on one system, and pray that it works for everyone else.
  4. Compile with a really really old distro and hope it works everywhere else. However this succumbs to the last three problems outlined in naive approach #1, and in many cases the binaries produced won't work with modern distros.

Now there are plenty of companies that supply those portable Linux binaries. You find on their website downloads for say Linux i386, AMD64, and PPC. Somehow that i386 binary manages to run on every i386 system you've tested, Red Hat, Debian, SUSE, Ubuntu, Gentoo, and both old and modern versions at that. What is their secret sauce?

Now let us dive into all the important information and techniques to accomplish this worthy goal.

The first thing you want to know is: what exactly is your binary linked to, anyway? For this, the handy ldd command comes in.

/tmp> ldd myapp
        linux-vdso.so.1 =>  (0x00007fff7a1ff000)
        libstdc++.so.6 => /usr/lib/x86_64-linux-gnu/libstdc++.so.6 (0x00007f1f8a765000)
        libm.so.6 => /lib/x86_64-linux-gnu/libm.so.6 (0x00007f1f8a4e3000)
        libgcc_s.so.1 => /lib/x86_64-linux-gnu/libgcc_s.so.1 (0x00007f1f8a2cc000)
        libc.so.6 => /lib/x86_64-linux-gnu/libc.so.6 (0x00007f1f89f45000)
        /lib64/ld-linux-x86-64.so.2 (0x00007f1f8aaa9000)
The lines which are directed to a file are system libraries that you need to worry about. In this case, there's libstdc++, libm, libgcc, and libc. libc and libm are both part of (E)GLIBC, the C library that most Linux applications will be using. libstdc++ is GCC's C++ library. libgcc is GCC's implementation of some programming constructs that your program may be using, such as exception handling, and things like that.

In general (E)GLIBC is broken up into many sub libraries that your program may be linked against. Other notable examples are libdl for Dynamic Loading, libpthread for threading, librt for various real time functions, and a few others.

Your application will not run on a system unless all these dependencies are found, and are compatible. Therefore, versions of these also come into play. In general a newer minor version of a library will work, but an older one will not.

In order to find version numbers, you want to use objdump. Here's an example of finding out what version of (E)GLIBC is needed:
/tmp> objdump -T myapp | grep GLIBC_
0000000000000000      DF *UND*  0000000000000000  GLIBC_2.2.5 ungetc
0000000000000000      DF *UND*  0000000000000000  GLIBC_2.3   __ctype_toupper_loc
0000000000000000      DF *UND*  0000000000000000  GLIBC_2.2.5 fputc
0000000000000000      DF *UND*  0000000000000000  GLIBC_2.2.5 free

In this case, 2.3 is the highest version number. Therefore this binary needs (E)GLIBC 2.3 or higher on the system. Note, the version numbers have nothing to do with the version installed on your system, rather (E)GLIBC marks each function with the minimum version that contains it.

Of course all this applies to other libraries as well, particularly libgcc and libstdc++.

Now that we know a little bit about what we're doing, I'm going to present the first bit of secret sauce.

If you're using C++, link with -static-libstdc++. This will ensure that libstdc++ is linked statically, but won't link every other lib statically like -static would. You want libstdc++ linked statically because it's safe to do so, because some systems may be using an older version (or have none at all, in the case of some servers), and because you want your binary to remain compatible if a new major version of libstdc++ comes out which is no longer backwards compatible. Note that even though libstdc++ is GPL'd, it also offers a linking exception that allows you to link against it, and even statically link it, in any application.

If you see that your binary needs libgcc, also use -static-libgcc for the same reasons given above. GCC is likewise GPL'd, and has the same linking exception. I once had the unfortunate scenario where I sold a client an application which used exceptions, without libgcc statically linked. On his old server, as long as everything went absolutely perfectly, the application ran fine, but if any issue occurred, instead of gracefully handling it, the application terminated immediately. Since his libgcc was too old, the application saw the throws, but none of the catches. Statically linking libgcc fixed this issue.

Now you might be thinking: hey, what about statically linking (E)GLIBC? Let me warn you that doing so is a bad idea. Some features in (E)GLIBC will only work if the statically linked (E)GLIBC is the exact same version as the (E)GLIBC installed on the system, making static linking pointless, if not downright problematic. (E)GLIBC's libdl is quite notable in this regard, as are several networking functions. (E)GLIBC is also licensed under the LGPL, which essentially means that if you give out the source to your application, then in most cases you can distribute statically linked binaries with it, but otherwise not. Also, since 99% of the functions are marked as requiring extremely old versions of (E)GLIBC, statically linking it is hardly necessary in most cases.

The next bit of the secret sauce is statically linking those non standard libs your application needs but nothing else.

You probably never learned in school how to selectively statically link the libraries you want, but it is indeed possible. Before the list of libraries you wish to statically link, place -Wl,-Bstatic, and afterwards, -Wl,-Bdynamic.

Say in my application I want to statically link libcurl and OpenSSL, but dynamically link zlib and the rest of my libs, such as the various parts of (E)GLIBC. I would use the following link flags:
gcc -o app *.o -static-libgcc -Wl,-Bstatic -lcurl -lssl -lcrypto -Wl,-Bdynamic -lz -ldl -lpthread -lrt

The next step is to ensure that your libraries pull in as few dependencies as possible. Here's the output from ldd on my system's copy of libcurl:
        linux-vdso.so.1 =>  (0x00007fffbadff000)
        => /usr/lib/x86_64-linux-gnu/ (0x00007f84410a4000)
        => /usr/lib/x86_64-linux-gnu/ (0x00007f8440e7b000)
        => /usr/lib/x86_64-linux-gnu/ (0x00007f8440c6b000)
        => /usr/lib/x86_64-linux-gnu/ (0x00007f8440a1a000)
        => /lib/x86_64-linux-gnu/ (0x00007f8440812000)
        => /usr/lib/x86_64-linux-gnu/ (0x00007f84405d2000)
        => /usr/lib/x86_64-linux-gnu/ (0x00007f8440374000)
        => /usr/lib/x86_64-linux-gnu/ (0x00007f843ff90000)
        => /usr/lib/x86_64-linux-gnu/ (0x00007f843fd75000)
        => /lib/x86_64-linux-gnu/ (0x00007f843fb5e000)
        => /lib/x86_64-linux-gnu/ (0x00007f843f7d7000)
        => /lib/x86_64-linux-gnu/ (0x00007f843f558000)
        => /lib/x86_64-linux-gnu/ (0x00007f843f342000)
        => /usr/lib/x86_64-linux-gnu/ (0x00007f843f127000)
        => /usr/lib/x86_64-linux-gnu/ (0x00007f843ee66000)
        => /lib/x86_64-linux-gnu/ (0x00007f843ec4a000)
        /lib64/ld-linux-x86-64.so.2 (0x00007f844157e000)
        => /usr/lib/x86_64-linux-gnu/ (0x00007f843e976000)
        => /usr/lib/x86_64-linux-gnu/ (0x00007f843e74c000)
        => /lib/x86_64-linux-gnu/ (0x00007f843e548000)
        => /usr/lib/x86_64-linux-gnu/ (0x00007f843e33f000)
        => /lib/x86_64-linux-gnu/ (0x00007f843e13a000)
        => /lib/x86_64-linux-gnu/ (0x00007f843df36000)
        => /lib/x86_64-linux-gnu/ (0x00007f843dd32000)
        => /usr/lib/x86_64-linux-gnu/ (0x00007f843db21000)
        => /usr/lib/x86_64-linux-gnu/ (0x00007f843d90f000)
This is quite unacceptable. Distros generally compile packages with everything enabled. Your application generally does not need everything a library has to offer. In the case of libcurl, you can compile it yourself, and disable the features you aren't using. In an example application, I only need HTTP and FTP support, so I could compile libcurl with very little enabled, and now have this:
        linux-vdso.so.1 =>  (0x00007fffbadff000)
        => /lib/x86_64-linux-gnu/ (0x00007f8440812000)
        => /usr/lib/x86_64-linux-gnu/ (0x00007f8440374000)
        => /usr/lib/x86_64-linux-gnu/ (0x00007f843ff90000)
        => /lib/x86_64-linux-gnu/ (0x00007f843fb5e000)
        => /lib/x86_64-linux-gnu/ (0x00007f843f7d7000)
        => /lib/x86_64-linux-gnu/ (0x00007f843f342000)
        => /lib/x86_64-linux-gnu/ (0x00007f843ec4a000)
        /lib64/ld-linux-x86-64.so.2 (0x00007f844157e000)
        => /lib/x86_64-linux-gnu/ (0x00007f843e13a000)
This is much more manageable. Refer to the documentation of your libraries in order to see how to compile them without the features you don't need.

If you're going to be compiling your own libraries, you probably want to set up a second system, virtual machine, or a chroot for building your customized library versions and applications, to ensure it doesn't conflict with your main system. Especially for the upcoming tip.

Secret sauce part 3, push your (E)GLIBC requirements down.

Let's look at an objdump on OpenSSL.
/tmp> objdump -T /usr/lib/ | grep GLIBC_
0000000000000000      DF *UND*  0000000000000000  GLIBC_2.2.5 chmod
0000000000000000      DF *UND*  0000000000000000  GLIBC_2.2.5 fileno
0000000000000000      DF *UND*  0000000000000000  GLIBC_2.2.5 __sysv_signal
0000000000000000      DF *UND*  0000000000000000  GLIBC_2.2.5 printf
0000000000000000      DF *UND*  0000000000000000  GLIBC_2.2.5 memset
0000000000000000      DF *UND*  0000000000000000  GLIBC_2.2.5 ftell
0000000000000000      DF *UND*  0000000000000000  GLIBC_2.2.5 getgid
0000000000000000      DF *UND*  0000000000000000  GLIBC_2.2.5 shutdown
0000000000000000      DF *UND*  0000000000000000  GLIBC_2.2.5 close
0000000000000000      DF *UND*  0000000000000000  GLIBC_2.2.5 syslog
0000000000000000      DF *UND*  0000000000000000  GLIBC_2.2.5 ioctl
0000000000000000      DF *UND*  0000000000000000  GLIBC_2.2.5 abort
0000000000000000      DF *UND*  0000000000000000  GLIBC_2.2.5 memchr
0000000000000000      DF *UND*  0000000000000000  GLIBC_2.2.5 gethostbyname
0000000000000000      DF *UND*  0000000000000000  GLIBC_2.2.5 fseek
0000000000000000      DF *UND*  0000000000000000  GLIBC_2.7   __isoc99_sscanf
0000000000000000      DF *UND*  0000000000000000  GLIBC_2.2.5 openlog
0000000000000000      DF *UND*  0000000000000000  GLIBC_2.2.5 exit
0000000000000000      DF *UND*  0000000000000000  GLIBC_2.2.5 strcasecmp
0000000000000000      DF *UND*  0000000000000000  GLIBC_2.2.5 gettimeofday
0000000000000000      DF *UND*  0000000000000000  GLIBC_2.2.5 setvbuf
0000000000000000      DF *UND*  0000000000000000  GLIBC_2.2.5 read
0000000000000000      DF *UND*  0000000000000000  GLIBC_2.2.5 strncmp
0000000000000000      DF *UND*  0000000000000000  GLIBC_2.2.5 malloc
0000000000000000      DF *UND*  0000000000000000  GLIBC_2.2.5 fopen
0000000000000000      DF *UND*  0000000000000000  GLIBC_2.2.5 setsockopt
0000000000000000      DF *UND*  0000000000000000  GLIBC_2.2.5 sysconf
0000000000000000      DF *UND*  0000000000000000  GLIBC_2.2.5 getpid
0000000000000000      DF *UND*  0000000000000000  GLIBC_2.2.5 fgets
0000000000000000      DF *UND*  0000000000000000  GLIBC_2.2.5 geteuid
0000000000000000      DF *UND*  0000000000000000  GLIBC_2.2.5 vfprintf
0000000000000000      DF *UND*  0000000000000000  GLIBC_2.2.5 closelog
0000000000000000      DF *UND*  0000000000000000  GLIBC_2.2.5 fputc
0000000000000000      DF *UND*  0000000000000000  GLIBC_2.2.5 times
0000000000000000      DF *UND*  0000000000000000  GLIBC_2.2.5 free
0000000000000000      DF *UND*  0000000000000000  GLIBC_2.2.5 strlen
0000000000000000      DF *UND*  0000000000000000  GLIBC_2.2.5 ferror
0000000000000000      DF *UND*  0000000000000000  GLIBC_2.2.5 opendir
0000000000000000      DF *UND*  0000000000000000  GLIBC_2.2.5 __xstat
0000000000000000      DF *UND*  0000000000000000  GLIBC_2.2.5 listen
0000000000000000      DF *UND*  0000000000000000  GLIBC_2.3   __ctype_b_loc
0000000000000000  w   DF *UND*  0000000000000000  GLIBC_2.2.5 __cxa_finalize
0000000000000000      DF *UND*  0000000000000000  GLIBC_2.2.5 readdir
0000000000000000      DF *UND*  0000000000000000  GLIBC_2.2.5 dlerror
0000000000000000      DF *UND*  0000000000000000  GLIBC_2.2.5 sprintf
0000000000000000      DO *UND*  0000000000000000  GLIBC_2.2.5 stdin
0000000000000000      DF *UND*  0000000000000000  GLIBC_2.2.5 strrchr
0000000000000000      DF *UND*  0000000000000000  GLIBC_2.2.5 dlclose
0000000000000000      DF *UND*  0000000000000000  GLIBC_2.2.5 poll
0000000000000000      DF *UND*  0000000000000000  GLIBC_2.2.5 getegid
0000000000000000      DF *UND*  0000000000000000  GLIBC_2.2.5 gmtime_r
0000000000000000      DF *UND*  0000000000000000  GLIBC_2.2.5 strerror
0000000000000000      DF *UND*  0000000000000000  GLIBC_2.2.5 sigaction
0000000000000000      DF *UND*  0000000000000000  GLIBC_2.2.5 strcat
0000000000000000      DF *UND*  0000000000000000  GLIBC_2.2.5 getsockopt
0000000000000000      DF *UND*  0000000000000000  GLIBC_2.2.5 fputs
0000000000000000      DF *UND*  0000000000000000  GLIBC_2.2.5 lseek
0000000000000000      DF *UND*  0000000000000000  GLIBC_2.2.5 strtol
0000000000000000      DF *UND*  0000000000000000  GLIBC_2.2.5 getsockname
0000000000000000      DF *UND*  0000000000000000  GLIBC_2.2.5 connect
0000000000000000      DF *UND*  0000000000000000  GLIBC_2.2.5 memcpy
0000000000000000      DF *UND*  0000000000000000  GLIBC_2.2.5 memmove
0000000000000000      DF *UND*  0000000000000000  GLIBC_2.2.5 strchr
0000000000000000      DF *UND*  0000000000000000  GLIBC_2.2.5 socket
0000000000000000      DF *UND*  0000000000000000  GLIBC_2.2.5 fread
0000000000000000      DF *UND*  0000000000000000  GLIBC_2.2.5 __fxstat
0000000000000000      DF *UND*  0000000000000000  GLIBC_2.2.5 getenv
0000000000000000      DF *UND*  0000000000000000  GLIBC_2.2.5 __errno_location
0000000000000000      DF *UND*  0000000000000000  GLIBC_2.2.5 qsort
0000000000000000      DF *UND*  0000000000000000  GLIBC_2.2.5 strncasecmp
0000000000000000      DF *UND*  0000000000000000  GLIBC_2.2.5 strcmp
0000000000000000      DF *UND*  0000000000000000  GLIBC_2.2.5 strcpy
0000000000000000      DF *UND*  0000000000000000  GLIBC_2.2.5 getuid
0000000000000000      DF *UND*  0000000000000000  GLIBC_2.3   __ctype_tolower_loc
0000000000000000      DF *UND*  0000000000000000  GLIBC_2.2.5 memcmp
0000000000000000      DF *UND*  0000000000000000  GLIBC_2.2.5 feof
0000000000000000      DF *UND*  0000000000000000  GLIBC_2.2.5 fclose
0000000000000000      DF *UND*  0000000000000000  GLIBC_2.2.5 dlopen
0000000000000000      DF *UND*  0000000000000000  GLIBC_2.2.5 recvfrom
0000000000000000      DF *UND*  0000000000000000  GLIBC_2.2.5 strncpy
0000000000000000      DF *UND*  0000000000000000  GLIBC_2.2.5 dlsym
0000000000000000      DF *UND*  0000000000000000  GLIBC_2.2.5 closedir
0000000000000000      DO *UND*  0000000000000000  GLIBC_2.2.5 stderr
0000000000000000      DF *UND*  0000000000000000  GLIBC_2.2.5 fopen64
0000000000000000      DF *UND*  0000000000000000  GLIBC_2.2.5 sendto
0000000000000000      DF *UND*  0000000000000000  GLIBC_2.2.5 bind
0000000000000000      DF *UND*  0000000000000000  GLIBC_2.2.5 fwrite
0000000000000000      DF *UND*  0000000000000000  GLIBC_2.2.5 realloc
0000000000000000      DF *UND*  0000000000000000  GLIBC_2.2.5 perror
0000000000000000      DF *UND*  0000000000000000  GLIBC_2.2.5 fprintf
0000000000000000      DF *UND*  0000000000000000  GLIBC_2.2.5 localtime
0000000000000000      DF *UND*  0000000000000000  GLIBC_2.2.5 write
0000000000000000      DF *UND*  0000000000000000  GLIBC_2.2.5 accept
0000000000000000      DF *UND*  0000000000000000  GLIBC_2.2.5 strtoul
0000000000000000      DF *UND*  0000000000000000  GLIBC_2.2.5 open
0000000000000000      DF *UND*  0000000000000000  GLIBC_2.2.5 time
0000000000000000      DF *UND*  0000000000000000  GLIBC_2.2.5 fflush
0000000000000000      DF *UND*  0000000000000000  GLIBC_2.2.5 getservbyname
It seems that OpenSSL would work on (E)GLIBC 2.3, except for one pesky function which needs 2.7+. This is a problem if I want to ship an application with this modern OpenSSL on say Red Hat Enterprise Linux 5 which comes with GLIBC 2.5, or say Debian Stable from ~4 years ago, which only has 2.4.

In this case OpenSSL is using a C99 version of sscanf(), but not actually by choice.

In /usr/include/stdio.h on (E)GLIBC 2.7+, you'll notice two blocks:

#if defined __USE_ISOC99 && !defined __USE_GNU \
    && (!defined __LDBL_COMPAT || !defined __REDIRECT) \
    && (defined __STRICT_ANSI__ || defined __USE_XOPEN2K)
# ifdef __REDIRECT
/* For strict ISO C99 or POSIX compliance disallow %as, %aS and %a[
   GNU extension which conflicts with valid %a followed by letter
   s, S or [.  */
extern int __REDIRECT (fscanf, (FILE *__restrict __stream,
        __const char *__restrict __format, ...),
           __isoc99_fscanf) __wur;
extern int __REDIRECT (scanf, (__const char *__restrict __format, ...),
           __isoc99_scanf) __wur;
extern int __REDIRECT_NTH (sscanf, (__const char *__restrict __s,
            __const char *__restrict __format, ...),
           __isoc99_sscanf);
# else
extern int __isoc99_fscanf (FILE *__restrict __stream,
          __const char *__restrict __format, ...) __wur;
extern int __isoc99_scanf (__const char *__restrict __format, ...) __wur;
extern int __isoc99_sscanf (__const char *__restrict __s,
          __const char *__restrict __format, ...) __THROW;
#  define fscanf __isoc99_fscanf
#  define scanf __isoc99_scanf
#  define sscanf __isoc99_sscanf
# endif


# if !defined __USE_GNU \
     && (!defined __LDBL_COMPAT || !defined __REDIRECT) \
     && (defined __STRICT_ANSI__ || defined __USE_XOPEN2K)
#  ifdef __REDIRECT
/* For strict ISO C99 or POSIX compliance disallow %as, %aS and %a[
   GNU extension which conflicts with valid %a followed by letter
   s, S or [.  */
extern int __REDIRECT (vfscanf,
           (FILE *__restrict __s,
      __const char *__restrict __format, _G_va_list __arg),
     __isoc99_vfscanf)
     __attribute__ ((__format__ (__scanf__, 2, 0))) __wur;
extern int __REDIRECT (vscanf, (__const char *__restrict __format,
        _G_va_list __arg), __isoc99_vscanf)
     __attribute__ ((__format__ (__scanf__, 1, 0))) __wur;
extern int __REDIRECT_NTH (vsscanf,
         (__const char *__restrict __s,
          __const char *__restrict __format,
          _G_va_list __arg), __isoc99_vsscanf)
     __attribute__ ((__format__ (__scanf__, 2, 0)));
#  else
extern int __isoc99_vfscanf (FILE *__restrict __s,
           __const char *__restrict __format,
           _G_va_list __arg) __wur;
extern int __isoc99_vscanf (__const char *__restrict __format,
          _G_va_list __arg) __wur;
extern int __isoc99_vsscanf (__const char *__restrict __s,
           __const char *__restrict __format,
           _G_va_list __arg) __THROW;
#   define vfscanf __isoc99_vfscanf
#   define vscanf __isoc99_vscanf
#   define vsscanf __isoc99_vsscanf
#  endif
# endif

These two blocks of code make fscanf(), scanf(), sscanf(), vfscanf(), vscanf(), and vsscanf() use special C99 versions. Since older applications were already compiled against the C89 versions, (E)GLIBC doesn't want to change how an existing function works and potentially break them. So instead, a new set of functions was created which only exists in (E)GLIBC 2.7+, and by default, (E)GLIBC redirects all calls to these functions to the proper C99 versions at compile time.

Now there are some defines you can set in your library code and application code to ensure it uses the old, more backwards-compatible versions, but getting the exact right combination of defines without breaking anything else can be tricky. It may also be tedious to modify a code-base you're not familiar with.

Therefore, I recommend just deleting these two blocks from your <stdio.h> on your build system. You want your build system to be able to build everything for backwards compatibility, right?

If you're recompiling libraries like OpenSSL which are designed for massive portability with all kinds of systems, odds are, they're not looking for C99 support in basic scanf() family functions anyway. If you do happen to need C99 scanf() support in your application, I recommend that you add it manually with a specialized lib, for maximum portability. You can easily find a bunch online.

The last scenario that you may encounter is that you happen to want to use a modern library function. For most libs you can just statically link them, but that won't work for (E)GLIBC. Since some functions depend on system support, or custom versions don't perform as well as the built in ones, you definitely want to use the built in ones if they're available. The question is, how do you do that when your binary has to run on older systems where those functions may not exist?

So for our final bit of secret sauce: dynamically load any modern functions that you want to use at runtime, and fall back to your own implementation, or disable some functionality, if they're not present.

Remember libdl that we mentioned above? It offers dlopen() for opening system libraries, and dlsym() for checking whether particular functions are present and retrieving a pointer to them.

I'm going to post a full example that you can look at and play with. It's a program which tries to figure out how big system pipes are, by seeing how much data we can stuff into a pipe before we're told that the pipe is full and the write would need to block.

Linux offers a function called pipe2() which has the crucial ability to create a pipe in non-blocking mode. If it doesn't exist, we can emulate it ourselves, but we prefer the built in one when available.

#ifndef __linux__
#error This program is specifically designed for Linux, even though it may work elsewhere
#endif

#include <stdio.h> //puts(), fputs(), printf(), fprintf(), stderr
#include <errno.h> //errno, perror(), EINTR, EAGAIN, EWOULDBLOCK
#include <fcntl.h> //fcntl(), F_SETFL, F_GETFL, O_NONBLOCK, F_SETFD, F_GETFD, FD_CLOEXEC, and for some: O_CLOEXEC
#include <dlfcn.h> //dlopen(), dlsym(), dlclose(), dlerror(), RTLD_LAZY
#include <unistd.h> //pipe() used in our implementation, write(), close()

//Lifted from: /usr/include/<arch>/bits/fcntl.h
#ifndef O_CLOEXEC
#define O_CLOEXEC       02000000        /* set close_on_exec */
#endif
//End lift

typedef int (*pipe2_t)(int [2], int);

//Implement the rather straight forward pipe2(), note: this function is of type: static pipe2_t
static int our_pipe2(int pipefd[2], int flags)
{
  int ret = pipe(pipefd);
  if (!ret) //Success, pipe created
  {
    //The built in pipe2() would not suffer from race conditions that the following code would succumb to in a threaded application
    if (flags & O_NONBLOCK)
    {
      fcntl(pipefd[0], F_SETFL, fcntl(pipefd[0], F_GETFL) | O_NONBLOCK);
      fcntl(pipefd[1], F_SETFL, fcntl(pipefd[1], F_GETFL) | O_NONBLOCK);
    }

    if (flags & O_CLOEXEC)
    {
      fcntl(pipefd[0], F_SETFD, fcntl(pipefd[0], F_GETFD) | FD_CLOEXEC);
      fcntl(pipefd[1], F_SETFD, fcntl(pipefd[1], F_GETFD) | FD_CLOEXEC);
    }
  }
  return(ret);
}

static pipe2_t pipe2 = our_pipe2; //pipe2() is initialized to our function

size_t pipe_size() //Manually determine the size of the system's pipe, for automatic, look up Linux specific F_GETPIPE_SZ
{
  //Create a union for using a pipe, so usage is a bit more logical
  union
  {
    int pipefd[2];
    struct
    {
      int read;
      int write;
    } side;
  } u;

  size_t amount = 0; //A pipe size of 0 signifies unknown

  if (!pipe2(u.pipefd, O_NONBLOCK)) //Note, here pipe2() is used
  {
    for (;;) //Write to a pipe in a loop, the final amount should be the size of the pipe
    {
      ssize_t w = write(u.side.write, &amount, sizeof(size_t)); //Write a size_t to the pipe

      if (w > 0) { amount += w; } //Success, add amount written and then loop
      else if (w == 0) //Pipe was closed, and we certainly didn't close it
      {
        perror("Pipe unexpectedly closed");
        amount = 0; //Reset to unknown, because an error occured
        break;
      }
      else /* Error occured trying to write */ if (errno != EINTR) //And it wasn't an interruption, so something that needs handling
      {
        if ((errno != EAGAIN) && (errno != EWOULDBLOCK)) //Failed to write to pipe - and it's nothing we'd fix
        {
          perror("Failed to write to pipe");
          amount = 0; //Reset to unknown, because an error occured
        }
        //Else, pipe is full, we're done!

        break; //In either case, we're done writing to the pipe
      }
      //Else If (errno == EINTR), we'd just loop and try again
    }

    close(u.side.read); //Clean up both ends of the pipe
    close(u.side.write);
  }
  else { perror("Failed to create pipe"); }

  return(amount);
}

int main(const int argc, const char *const *const argv)
{
  void *so = dlopen(0, RTLD_LAZY); //Open a handle to our global symbol scope, which includes the C library
  if (so)
  {
    void *sym = dlsym(so, "pipe2"); //Grab the handle to pipe2() if it exists
    if (sym) //Success!
    {
      pipe2 = (pipe2_t)sym; //Use the built in one instead of ours
      puts("Using system's pipe2().");
    }
    else { puts("Using our pipe2()."); }
  }
  else
  {
    puts("Using our pipe2().");
    fprintf(stderr, "Could not open C library: %s\n", dlerror());
  }

  //Here's the real work
  size_t a = pipe_size();
  if (a) { printf("Pipe size is: %zu\n", a); }
  else { fputs("Could not determine pipe size.\n", stderr); }

  if (so) { dlclose(so); }

  return(0);
}
Here's how to compile and run it:
/tmp> gcc -Wall -o pipe_test pipe_test.c -ldl
/tmp> ./pipe_test
Using system's pipe2().
Pipe size is: 65536
Now pipe2() was added to GLIBC in 2.9, yet according to objdump, this binary only needs (E)GLIBC 2.2.5+. Here's the output from an older system with GLIBC 2.7, using the exact same binary created on a newer system:
/tmp> ./pipe_test
Using our pipe2().
Pipe size is: 65536
Lastly, let me recap all the techniques we learned.
  • Use ldd and objdump to see version requirements of binaries and libraries.
  • Statically link compiler and language libraries, such as libgcc and libstdc++.
  • Statically link selected libraries, while dynamically linking others.
  • Compile selected libraries with as little needed functionality as possible.
  • Push (E)GLIBC version requirements down by being wary of functions which have changed over time, whose calls (E)GLIBC by default redirects to the newer versions in newly compiled programs.
  • Push (E)GLIBC version requirements down by not linking directly against new functions, instead detecting their presence at runtime and working around their absence.
Doing all this, you'll still need to make different builds for different operating systems, and different architectures like x86 and ARM, but at least you won't be forced to for all different distros and versions thereof.

One thing of note: it's possible to have Linux with a different C library, and in those cases, you may as well be dealing with a different operating system. You'll be hard pressed to make complex programs compiled against one C library run on a Linux system which uses a different C library, where the needed one is not present. Thankfully though, the mainstream desktop and server distros all use (E)GLIBC.

In any case, the techniques you've learned here can be applied to other setups too. This article focused on (E)GLIBC because of its popularity and its many gotchas, but many other libraries that you may use, particularly video and audio libraries, have similar issues as well.