Monday, December 21, 2009


Happy "Holidays" From Google



If you use GMail, you probably got this message recently:


Happy Holidays from Google

Hello,

As we near the end of the year, we wanted to take a moment to thank you for the time, energy, commitment, and trust you've shared with us in 2009.

With sharing in mind, this year we've decided to do something a little different. We hope you'll find it fits the spirit of the holiday season.

We're looking forward to working with you to build lasting success in 2010.

Happy Holidays,
Your Google Team


While on the surface it seems like a nice gesture, wouldn't it be nice if these big companies actually put some thought into what they wrote?

The use of terms or "codewords" like "Happy Holidays" or "holiday season" is meant to be inclusive of all the various winter holidays celebrated by different religions or cultural groups, without singling out any one of them in particular. It's primarily meant to include minorities that celebrate Kwanzaa and Hanukkah.

But this letter was sent on December 21, two days after Hanukkah had already ended. If they really wanted to be all inclusive, perhaps they should have sent it the first week of December, instead of waiting until just after Hanukkah was over, which comes across as antisemitic.

All these "codewords" used are actually born out of "Political Correctness", a practice designed to discriminate against your average white male, while not actually caring about the minorities you're trying to protect. Isn't it nice to see another big company show that they aim for Political Correctness, yet show they couldn't care less about those minorities?

On a similar note, a friend of mine tells me that he recently applied for a job at Google, and they sent him a form asking him to specify his race. I wonder why?

Saturday, December 5, 2009


Cryptic Linux Distro Updates



We're almost in 2010. I thought that by now, to anyone who basically knows how to use a computer, general upgrade prompts in Linux distros would make sense, and wouldn't seem as cryptic as "Abort, Retry, Ignore, Fail?".

Then you get something like this:



This is a new upgrade message in Debian/Ubuntu.
Here's the full text:

Various snmp software needs extracted MIBs from RFCs and IANA - which cannot be shipped - to be working as expected. These MIBs can be automatically fetched and extracted as part of installing this package.

This will take several minutes to complete, even with a fast internet connection.

Download and extract MIBs from RFCs and IANA?


Are all these acronyms really necessary? Should a user even be given such cryptic information? Wouldn't a prompt along the lines of "This package requires additional components to fully function, download them?" be better?

Even to a power user who is familiar with a term like RFC, does this message make any sense?

Seems like Linux distros still have a long way to go.

Tuesday, November 24, 2009


Malicious hackers are not out there



Security as it is today is an illusion. What? How could I say that? I'm not serious, am I?

Most people today do not understand what security is or is not about, as evidenced by so many works of modern fiction centering around a plot where the terrorists/foreign government/aliens "bug" a server, a cable, or a satellite. Today's technology is supposed to prevent attacks involving any layer in the middle being bugged. Besides not understanding what modern security is capable of, many who work with it do not understand what it is not capable of.

A quick scan of the source code in many projects will turn up code which fails even textbook-level security principles. I even see some major projects with code commented that it needs a more secure hash or nonce generator or something similar, which again could be found in modern textbooks.

The sheer number of online services and installable applications (forums, webmail, blogs, wikis, etc.) that have insecure logins is shocking. Nearly all of them take user login credentials in plain text, allowing anyone between the user's computer and the website's application to steal the passwords.

It is sad that nearly all sites use a custom login scheme that can be buggy and/or receive login credentials unencrypted, considering that HTTP - the protocol-level communication layer - supports secure logins. HTTP login is rarely used, though, because it lacks a simple way to log out (why?) and cannot be made to look pretty without using AJAX, which is why the vast majority of site creators avoid it.

The HTTP specification actually describes two methods of login for "HTTP 401", one called "Basic Authentication", and another called "Digest Authentication". The former transmits login credentials in plain text, while the latter transmits a hash of them as part of a challenge-response exchange. Most sites that avoid the worry of properly creating a custom login scheme and resort to HTTP 401 generally use Basic Authentication. Historically the reason is that most developers of HTTP servers and clients have been too stupid to figure out how to do it properly. Which is surprising, considering it is such a simple scheme. IIS and IE didn't have it done properly till relatively recently. Apache historically has had issues with it. Qt's network classes handled it improperly until recently. I'm also told Google Chrome currently has some issues with it.


However, even if one used Digest as the login mechanism on their website, it can easily be subject to a man-in-the-middle attack, because the HTTP spec still allows for the possibility of sending passwords in an unencrypted fashion.

The following diagram illustrates it:


Since the authentication method is requested by the server and not the client, the machine in the middle can change the request to the insecure variant.
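To make the downgrade concrete, here is a minimal client-side sketch (not tied to any particular HTTP library, and the header handling is deliberately simplified) of the kind of check a browser or client would need in order to refuse a Basic challenge that arrives over plain HTTP:

#include <algorithm>
#include <string>

// Sketch: decide whether to answer an HTTP 401 challenge at all.
// wwwAuthenticate is the value of the WWW-Authenticate header, e.g.
// "Digest realm=\"site\", nonce=\"...\"" or "Basic realm=\"site\"".
static bool shouldSendCredentials(std::string wwwAuthenticate, bool connectionIsEncrypted)
{
    std::transform(wwwAuthenticate.begin(), wwwAuthenticate.end(),
                   wwwAuthenticate.begin(), ::tolower);

    // Basic sends the password essentially as-is, so only allow it over an encrypted connection.
    if (wwwAuthenticate.compare(0, 5, "basic") == 0)
        return connectionIsEncrypted;

    // Digest never exposes the raw password, though the exchange itself can
    // still be tampered with unless the transport is protected.
    if (wwwAuthenticate.compare(0, 6, "digest") == 0)
        return true;

    return false; // Unknown scheme: send nothing.
}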

So of course the next level up is HTTPS, which does HTTP over SSL/TLS, which is supposed to provide end to end security, preventing man in the middle attacks. This level of security makes all those fiction stories fail in their plot. It also is supposed to keep us safe, and is used by websites for processing credit card information and other sensitive material.

However, most users just type "something.muffin" into their browser, instead of prefixing it with http:// or https://, which defaults to http://. That again means the server has to initiate the secure connection. Since this is again a system with both secure and insecure methods of communication, the same type of man-in-the-middle attack as above can be performed.

The following diagram illustrates it:


Web servers are generally the ones that initiate the redirection to an HTTPS page, and that redirection can be modified by the machine in the middle. Any URL within a page which begins with https:// can be rewritten. For example, https://something.muffin can be changed to http://something.muffin:443 by an attacker in the middle, who can then proceed with the attack described above.
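To give an idea of how little work that rewrite takes, here is a rough sketch of the substitution the machine in the middle performs on every page it relays (a naive string replacement; a real attack tool would parse the HTML properly):

#include <string>

// Sketch: rewrite every https:// link in a relayed page into a plain-HTTP URL
// pointing at port 443, so the victim's browser never starts a TLS handshake
// and the middleman can keep reading and altering the traffic.
static std::string stripHttpsLinks(std::string page)
{
    const std::string prefix = "https://";
    std::string::size_type pos = 0;
    while ((pos = page.find(prefix, pos)) != std::string::npos)
    {
        // Find where the host name ends so ":443" can be appended to it.
        std::string::size_type hostEnd = page.find_first_of("/\"' ", pos + prefix.size());
        if (hostEnd == std::string::npos)
            hostEnd = page.size();

        page.replace(pos, prefix.size(), "http://"); // one character shorter
        hostEnd -= 1;
        page.insert(hostEnd, ":443");
        pos = hostEnd + 4;
    }
    return page;
}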

Of course users should be looking for padlocks and green labels and similar in their browser, but how many do so? Since most sites people visit aren't running in secure environments, do you expect them to really notice when some page which is supposed to be secure isn't? Do you expect users to be savvy about security when most developers aren't?

The amount of data which should be transferred securely but isn't is mind boggling. I see websites create a security token over HTTPS, but then pass that token around over HTTP, allowing anyone in the middle to steal it. I see people e-mail each other passwords to accounts on machines they manage all the time. I see database administrators login to phpMyAdmin running on servers with their root passwords sent in plain text. People working on projects together frequently send each other login credentials over forums or IRC in plain text.

Anyone managing a hub somewhere on the vast internet should be able to log tons and tons of passwords. Once a password is gotten to someone's e-mail or forum account, that can then be scanned for even more passwords. Also, I see many users save root/admin passwords in plain text files on web servers; if one managed to get into such an account by nabbing its password, they quite often would also be able to gain root with a simple scan of the user's files. Even if not, once access is gained to a machine, privilege escalation is usually the norm as opposed to the exception, because server administrators quite often do not keep up with security updates, or are afraid to alter a server that they finally got working.

Careful pondering would show our entire infrastructure for communication is really a house of cards. It wouldn't be that hard for a small team with a bit of capital to set up free proxy servers around the world, offer free wi-fi at a bunch of hotspots, or start a small ISP. So the question we have to ask ourselves is: why are we still standing with everything in the shaky state it's in? I think the answer is simple, the malicious hackers really aren't out there. Sure there are hackers out there, and some of them do wreak a bit of havoc. But it seems no one is really interested in making trouble on a large scale.

Mostly the hackers you hear about are people in a competition, or doing research, or those "security hackers" who have gone legit and want to help you secure your business. It's funny how many times I've heard a story about how some bigwig at a company goes to some sort of computer expo, and runs across a table or booth of security "gurus". The bigwig asks how the security gurus can help his business, and the response is to ask whether the bigwig owns a website. Once the bigwig mentions the name of his site, one guru pulls out his laptop and shows the bigwig the site defaced in some way. The bigwig panics and immediately hires them to do a whole load of nothing. Little does he realize he was just man-in-the-middle'd.

Tuesday, November 10, 2009


We got excellent documentation!



Ever try to work with a library you've never dealt with before? How do you approach the task? Do you try to find another program which uses the library and cannibalize it? Get someone who already knows how to use it to teach you? Find a good example? Or just trudge your way through and get something that barely works?

I personally would like to have some good documentation which tells me exactly what I need to do to get the job done. Something which I can rely on to tell me everything I need to know, and to avoid any particular pitfalls.

Except most of the time documentation is written by people who would be better off in some other profession. Like terrorist interrogators. Or perhaps the Spanish Inquisition.

Yet when you talk to people about the documentation for their library, they act like the documentation is two stone tablets brought down from heaven with sacred commandments written on them. Perhaps it is. But in the same fashion, the documentation is just as mysterious to anyone who hasn't spent years studying the library to decipher its true meaning.

For many libraries, I have spent hours poring over their documentation, only to come up with like 5-10 lines of code to do what I needed to do. 10 hours for 10 lines of code? A bit much, I think. Why can't people write documentation for those not familiar with the library, so they can get all the basic tasks done, and provide a good reference for anything more advanced? Sometimes the documentation is so completely unhelpful that I have to resort to the source code, or scour the internet for something to help me. This is completely unacceptable.

Let's look at some of the various types of offenders.


Doxygen equals documentation.

This is the kind of documentation written by those obnoxious programmers who don't want to write any documentation at all. They run a script on their source code which creates a set of HTML pages with a list of all the files in the library, a list of functions and classes, and all nicely interlinked. It also pulls out the comments about each function and clearly displays it. Sure it makes it easy to jump back and forth in a browser between various internals of the source. But it really gives no insight on how to use the library. If the library is written really cleanly, and commented well, perhaps this helps, but usually those creating the library didn't put any more effort into it than they put into creating their documentation.


Really, honest, there's documentation!

Then there are those that try to convince you they have documentation. You have a set of text files, or an HTML file, or a PDF or whatever which tells you how amazing the library is, and tells you all the wonderful things the library is capable of. They'll even give you notable examples of programs using their library. You'll have a great comparison chart of why this library is better than its competitors. You'll even get some design documentation, rational, and tips on how you can expand the library further. Good luck finding a single sentence which actually tells you how to use it.


We got you the bare essentials right over here, or was it over there?

Then you have the documentation which can never give you any complete idea. Sure, just use this function and pass it these six arrays all filled out. Don't worry about what to put in them, those arrays are explained on another page. Oh yeah this array can be used for a trillion different things, depending on which function you use it with, so we'll just enumerate what you may want to use it for. You may get more information looking at these data types though. Before you know it, you're looking at 20 different pages trying to figure out how to bring together the information to use a single function.

I see your warrant, and I raise you a lawyer!

This kind of documentation seems to be written by those that don't actually want you to use their library and are all evasive about it. Every time you think the documentation is going to comply and actually tell you something useful, you're faced with something that isn't what you wanted. You'll get a bunch of small 4 line examples, each that do something, but no explanation as to what they're doing exactly. You'll even be told here and there some cryptic details about what a function supposedly does. Good luck figuring how to use anything.

I see your lawyer, and I'll bury you with an army of lawyers!

This is one of the worst offenses that big companies or organizations generally pull. You'll get "complete working examples", and a lot of them. The examples will be thousands of lines long and perform a million other things besides what the library itself does, let alone the function you just looked up. Good luck finding what you need amidst all the noise. The Deitel & Deitel line of How to Program books that many colleges and universities use plays the same game. Create enough information overload in the simplest of cases and force you to switch to a major in marketing.

I'm sorry your honor, I didn't realize I didn't turn over the last chapter.

This kind of documentation isn't so bad. You'll get some good notes on how to do all the basic stuff the library is capable of. But any function or class with any sort of complexity is completely missing, and you'll have to refer to the source code. But I guess the authors don't know how to put the trickier things into words, at least not like the easier stuff.


I think that about sums it up. There are some libraries out there with good documentation, but usually it's of one of the kinds described above. Anyone else feel the same way?

Friday, November 6, 2009


FatELF Dead?



A while back, someone came up with a project called FatELF. I won't go into the exact details of all it's trying to accomplish, but the basic idea is that just like Mac OS X has universal binaries using the Mach-O object format which can run on multiple architectures, the same should be possible with software for Linux and FreeBSD, which use the ELF object format.

The creators of FatELF cite many different reasons why FatELF is a good idea, which most of us probably disagree with. But I found it could solve a pretty crucial issue today.

The x86 line of processors, which is what everyone uses for their home PCs, recently switched from 32-bit to 64-bit. 64-bit x86, known as x86-64, is backwards compatible with the old architecture. However, programs written for the new one generally run faster.

x86-64 CPUs contain more registers than traditional x86-32 ones, so a CPU can juggle more data internally without offloading it to much slower RAM. Also, most distributions offered precompiled binaries designed for a very low common denominator, generally a 486 or the original Pentium. Programs compiled for these older processors can't take advantage of many of the improvements that have been made to the x86 line in the past 15 years. A distribution which targets the lowest common denominator for x86-64, on the other hand, is targeting a much newer architecture, where every chip already contains MMX, SSE, similar technologies, and other general enhancements.

Installing a distribution geared for x86-64 can mean a much better computing experience for the most part. Except certain programs unfortunately are not yet 64-bit ready, or are closed source and can't be easily recompiled. In the past year or two, a lot of popular proprietary software was ported by its companies to x86-64, but some of it which is important for business fails completely under x86-64, such as Cisco's WebEx.

x86-32 binaries can run on x86-64, provided all the libraries they need are available on the system. However, many distributions don't provide x86-32 libraries on their x86-64 platform, or they provide only a couple, or provide ones which simply don't work.

All these issues could be fixed if FatELF was supported by the operating system. A distribution could provide an x86-64 platform, with all the major libraries containing both 32 and 64 bit versions within. Things like GTK, Qt, cURL, SDL, libao, OpenAL, and so on. We wouldn't have to worry about one of these libraries conflicting when installing two variations, or simply missing from the system.

It would make it easier on those on an x86-64 platform, knowing they can run any binary they get elsewhere without a headache. It would also ease deployment issues for those that don't do anything special to take advantage of x86-64, and just want to pass out a single version of their software.

I as a developer have to have an x86-32 chroot on all my development systems to make sure I can produce 32 bit binaries properly, which is also a hassle. All too often I have to jump back and forth between a 32 bit shell to compile the code, and a 64 bit shell where I have the rest of my software needed to analyze it, and commit it.

But unfortunately, it now seems FatELF is dead, or on its way.

I wish we could find a fully working solution to the 32 on 64 bit problem that crops up today.

Thursday, November 5, 2009


They actually want bad code



So I was in this huge meeting yesterday, and I got the shock of my life.

We were discussing how we're going to go about creating and marketing a new program which will be deployed on the servers of our clients. When I suggested I be the one to take charge of the program design and creation, and handpick my team of the best programmers in the company to write the code, I was shot down. The reason? They don't want the program to be written correctly. They don't want the code written by people who know what they're doing.

That had me completely flabbergasted. I needed more details. I asked what exactly was wrong with the way I did things? With creating the program properly? Our chief executive in charge of marketing dependability and quick maintenance boiled it down for me.

The problems with me writing the code are as follows:
No matter which language(s) we choose to build the program with, whether it be C++, PHP, C#, or something else, I'm going to make sure we use the classes and functions provided by the language most fit for use in our program. Every single function will be as clear and minimalistic and self-contained as possible. And this is evil in terms of dependability and quick maintenance.

If for example we used C# with .NET and I found just the perfect class out of the few thousand provided to fit the job, and it turns out down the line some issue crops up, apparently, they can't complain to Microsoft. Microsoft will tell them no one uses that class, and it is probably buggy, and they'll put it on a todo list to be looked at several months down the line.

If I use any function or class in C++ or PHP outside of the most basic 10-20 ones that dime-a-dozen programmers learn right away, they won't be able to get someone outside our group of professionals to review and fix it.

Basically, they want the program written only using classes, functions, arrays, loops, and the least amount of standard library usage. Because a random programmer most likely will barely be familiar with anything contained within the standard library.

They would prefer reinventing built-in functions, and also having them written incorrectly, in terms of output correctness and running time, since it means a programmer will never need to look in a manual to be able to understand a piece of code and fix it. Which is important, apparently, as most can only figure out what is wrong with the logic directly in front of them, and then try to brute force correct output.

But it doesn't even stop at good code making good use of the language, instead of reinventing the wheel.

Quite often in our existing projects, I go to look at a bug report, and notice some function which works incorrectly, and in the process of fixing it, I condense the logic and make the code much better. Let me give an example.

This is very similar to an existing case we had. The code was as follows:

/*
This function creates a log on Sunday, Tuesday, Thursday
It takes as input an integer with a value of 1-7, 1 being Sunday.
*/
void logEveryOtherDay(int dayOfTheWeek)
{
    if (dayOfTheWeek == 1)
    {
        logger.open();
        logger.write("Sunday");
        logger.dumpData();
        logger.close();
    }
    else if (dayOfTheWeek == 3)
    {
        logger.open();
        logger.write("-------------");
        logger.write("Tuesday");
        logger.dumpData();
        logger.close();
    }
    else if (dayOfTheWeek == 5)
    {
        logger.open();
        logger.write("-------------");
        logger.write("Thursday");
        logger.dumpData();
        logger.close();
    }
}


The problem reported was that logs from Sunday were missing the ----- separator before them, and they'd want a log on Saturday too if it ran then. When fixing it, the code annoyed me, and I quickly cleaned it up to the following:


// This function takes an integer and returns true if it's odd, and false if even
static bool isOdd(int i) { return(i&1); }

static const char *daysOfTheWeek[] = {
    0, // Begin with nothing, as we number the days of the week 1-7
    "Sunday", "Monday", "Tuesday", "Wednesday", "Thursday", "Friday", "Saturday"
};

/*
This function creates a log on Sunday, Tuesday, Thursday, Saturday
It takes as input an integer with a value of 1-7, 1 being Sunday.
*/
void logEveryOtherDay(int dayOfTheWeek)
{
    if (isOdd(dayOfTheWeek)) // Logging isn't done on even days
    {
        logger.open();
        logger.write("-------------");
        logger.write(daysOfTheWeek[dayOfTheWeek]);
        logger.dumpData();
        logger.close();
    }
}


I think it should be obvious my code has much cleaner logic, and should be easy for any reasonable programmer to follow. I frequently do things like this. I even once went to look at a 2000 line function which had roughly a dozen bug reports against it, was a total mess, and ran really, really slowly. Instead of hunting for the cause of each issue in it, I decided to scrap it. I created 2 helper functions, one 3 lines, the other 5, and rewrote the body of the function from 2000 lines to roughly 40. Instead of many nested ifs and several loops, we now had a single if and else which did exactly what they needed to, and called one of the two helper functions as needed where the real looping was done. The new function was profiled to run an order of magnitude faster, and it passed all the test cases we designed, where the original failed a few. It now also contained 2 new features which were sorely lacking from the original. It was now also much easier to read it for correctness, as much less was going on in any section of the code.

But as this executive continued to tell me, what I did on these occasions is evil for an average programmer.

They can't comprehend a small amount of code doing so much. They can't understand what isOdd() does or is trying to do, unless they actually see its source. Its source, "return(i&1);", is just too confusing for them, because they don't know what "&1" means, nor can they comprehend how it can return true or false without containing an elaborate body of code. They can't just take the comment at face value that it does what it says it does. They are also frightened when they review different versions of a file to try to trace a bug and see that a ton of code just disappeared at some point, yet the commit log says it does more.

So to sum it up, they don't want me, or programmers like me, working on any code that is to be deployed on a client's server. When a client from Africa or South America calls us up with a problem, they don't want to fly one of our good programmers down there to look at it. They want to make sure they can hire someone on site in one of those places to go and look at the problem and fix it quickly. Which apparently can't happen when there's no guarantee of being able to hire a good programmer there on short notice, and other kinds of programmers can't deal with good code or standard library/class usage.

This mentality makes me very scared, although I guess it does explain to some extent why I find the innards of many rather large open source projects which are used commercially to be filled with tons of spaghetti logic, and written in a manner which suggests the author didn't really know what they were doing, nor should they be allowed to write code.

Anyone experience anything similar? Comments?

Wednesday, October 28, 2009


Who still watches Television?



So I know a couple of people who every night or so, sit down and watch Television. I find the very thought of it mind boggling. Why would anyone torture h[im/er]self so?

Television is all about locking yourself into someone else's timetable.
  • Each show comes on when your network decides it comes on.
  • You can't go back in case you missed something.
  • You can't rewatch a particular enjoyable scene.
  • You can't pause.
  • You can't stop it and continue it later.
  • You can't skip an annoying scene.
I find the idea of such a lack of control on the viewer's part excruciatingly painful.

People miss appointments or go to bed late because they just had to know what happens. Parents have to fight with their kids to get them to do homework, which would make them miss their favorite show.

People don't take proper care of themselves by not going to use the facilities when they need to, or getting a drink, or answering the phone. The list of problems goes on and on.

This problem was always apparent, and many people bought video cassette recorders, or other newer devices to get the same job done. But there were those that didn't, and instead preferred to torture themselves.

Okay, at first the technology may have been annoying (video cassettes), then it was too expensive. It's always another annoying peripheral in your house for the most part just taking up space. But why is this still going on today?

If you read this site often, it's quite likely you own your own computer at home and have a high speed internet connection. There's also a good chance if you bought a computer or new hard drive in the past few years, that you have gigabytes of free space that you have no idea what to do with.

If you own what I described above, why would you want to torture yourself so? Many stations are now putting their shows on their website which you can watch in mid-quality annoying flash. But ignoring that, nowadays, there are tons of people who record the show and then share that recording via Bit Torrent or file sharing websites.

Unlike traditional home recording techniques, quite often you don't even have to program your device to record for you anymore - when using a computer. Many "Online Television" sites provide RSS feeds that you can subscribe to which will automatically download your favorite shows as each episode comes out. You don't have to screw up setting it to record at 3:59, only to find out your clock was three minutes behind the network's, and you missed the beginning. Or instead of setting the end time to 5:01, you screwed up and selected 4:01, and recorded a grand total of two minutes of your hour-long show.

These sites generally have the shows up a couple of minutes after they air in the earliest timezone showing them, and the files are of good quality, yet small enough to be downloaded in just a couple of minutes. If you happen to live in a later timezone, many times you can be finished watching a show before it even airs where you live.

So who exactly is still torturing themselves? And why?

Before you ask who exactly would be doing the recordings if everyone only watched "Online Television", we have to ask ourselves why the companies producing these shows aren't using this new medium to distribute them themselves. If they had an RSS feed with all the new shows set up to be downloaded via BitTorrent (advertisements included), I'm sure many people would subscribe to them. They could even charge a few bucks a year to gain access to a private torrent, cutting out the middlemen (networks, cable companies) they use.

Sunday, October 25, 2009


Distributed HTTP



A couple years back, a friend of mine got into an area of research which was rather novel and interesting at the time. He created a website he hosted from his own computer, where one can read about the research, and download various data samples he made.

Fairly soon, it was apparent that he couldn't host any large data samples. So he found the best compression software available, and setup a BitTorrent tracker on his computer. When someone downloaded some data samples, they would be sharing the bandwidth load, allowing more interested parties to download at once without crushing my friend's connection. This was back at a time when BitTorrent was unheard of, but those interested in getting the data would make the effort to do so.

As time went on, his site popularity grew. He installed some forum software on his PC, so a community could begin discussing his findings. He also gave other people login credentials to his machine, so they can edit pages, and upload their own data.

The site evolved into something close to a wiki, where each project got its own set of pages describing it, and what was currently known on the topic. Each project got some images to visually provide an idea of what each data set covered before one downloaded the torrent file. Some experiments also started to include videos.

As the hits to his site kept increasing, my friend could no longer host his site on his own machine, and had to move his site to a commercial server which required payment on a monthly or yearly basis. While BitTorrent could cover the large data sets, it in no way provided a solution to hosting the various HTML pages and PNG images.

The site constantly gained popularity, and my friend was forced to keep upgrading to an increasingly more powerful server, where the hosting costs increased just as rapidly. Requests for donations, and ads on the server could only help offset costs to an extent.

I imagine other people and small communities have at times run into similar problems. I think it's time for a solution to be proposed.

Every major browser today caches files for each site it visits, so it doesn't have to rerequest the same images, scripts, and styles on each page, and conditionally requests pages, only if they haven't been updated since what was in the cache. I think this already solves one third of the problem.

A new URI scheme can be created, perhaps dhttp:// that would act just like normal HTTP with a couple of exceptions. The browser would have some configurable options for Distributed HTTP, such as up to how many MB per site will it cache, how many MB overall, how many simultaneous uploads will it be willing to provide per site, as well as overall, which port it will run on, and a duration as to how long it will do so. When the browser connects via dhttp://, it'll include some extra headers providing the user's desired settings on this matter. The HTTP server will be modified to keep track of which IP addresses connected to it, and downloaded which files recently.

When a request for a file comes into the DHTTP server, it can respond with a list of perhaps five IP addresses to choose from (if available), chosen based on an algorithm designed to round robin the available browsers connecting to the site, and the preferences chosen therein. The browser can then request via a normal HTTP request the same file from one of those IP addresses it received. The browser would need a miniature HTTP server built in which would understand that requests coming to it that seem to be for a DHTTP server should be replied to from its cache. It would also know not to share files that are in the cache which did not originate from a DHTTP server.
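As a rough illustration of the server side of this idea (everything here is hypothetical, since no DHTTP implementation exists; the structures and the five-peer limit just mirror the description above), the bookkeeping could be as simple as rotating a per-file list of recent downloaders:

#include <deque>
#include <map>
#include <string>
#include <vector>

// Hypothetical sketch of the DHTTP server's bookkeeping: for each static file,
// remember which browsers recently fetched it and are willing to re-serve it,
// and hand out up to five of them in round-robin order.
struct Peer
{
    std::string ip;
    int port;           // the mini HTTP server the browser advertised
    int remainingSlots; // how many uploads the browser said it would allow
};

class PeerTable
{
    std::map<std::string, std::deque<Peer> > peersByFile;

public:
    // Called whenever a browser finishes downloading a file and its headers
    // indicated it is willing to share that file from its cache.
    void recordDownload(const std::string &file, const Peer &peer)
    {
        peersByFile[file].push_back(peer);
    }

    // Called when a dhttp:// request arrives; an empty result means the
    // server should just serve the file itself.
    std::vector<Peer> choosePeers(const std::string &file)
    {
        std::vector<Peer> chosen;
        std::deque<Peer> &peers = peersByFile[file];
        size_t candidates = peers.size();
        for (size_t i = 0; i < candidates && chosen.size() < 5; ++i)
        {
            Peer p = peers.front();
            peers.pop_front();
            if (p.remainingSlots <= 0)
                continue; // exhausted (or leech) peer, drop it from the rotation
            --p.remainingSlots;
            chosen.push_back(p);
            peers.push_back(p); // rotate to the back: simple round robin
        }
        return chosen;
    }
};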

If requests to each of those IP addresses have timed out, or responded with 404, then the browser can rerequest that file from the DHTTP server set with a timeout or unavailable header for each of those IP addresses, in which case the DHTTP server will respond with the requested file directly.

The HTTP server should also know to keep track of when files are updated, so it knows not to refer a visitor to an IP address which contains an old file. This forwarding concept should also be disabled in cases of private data (login information), or dynamic pages. However for static (or dynamic which is only generated periodically) public data, all requests should be satisfiable by this described method.

Thought would have to be put into how to handle "leech" browsers which never share from their cache, or just always request from the DHTTP server with timeout or unavailable headers sent.

I think something implemented along these lines can help those smaller communities that host sites on their own machines, and would like to remain self hosting, or would like to alleviate hosting costs on commercial servers.

Thoughts?

Friday, October 23, 2009


Blogger Spam



If you remember, the other day I had a bit of a meltdown in terms of all the spam I saw piling up over here.

I only have ~30 articles here, yet I had over 300 comments which were spam, and it is quite an annoying task to go delete them one by one. Especially when a week later, I'll have to go delete them one by one yet again.

Instead of just throwing my hands up in the air, I found it was time to get insane - I went to check out Blogger's API. Looking it over, I found it's really easy to log in, and just about everything else after that gets annoying.

Blogger provides a way to get a list of articles, create new articles, delete articles, and also manage their comments. But the support is kind of limited if you want to specify what kind of data you want to retrieve.

At first, I thought about analyzing each comment for spam, but I didn't want to run the risk of false positives, and figured my best bet for now is just to identify spammers. I identified 25 different spam accounts.

However, Blogger only offers deleting comments by the comment ID, and then, only one by one. The only way to retrieve the comment ID is to retrieve the comments for a particular article, which includes the comments themselves and a bunch of other data. All this data is in a rather large XML file.

It would be rather easy to delete comments if Blogger provided a function like deleteCommentsOf(userId, blogId), or getCommentIdsOf(userId, blogId), or something similar. But no, one needs 4 steps just to get an XML file which contains the comment IDs along with a lot of other unnecessary data. This has to be repeated for each article.
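Just to make the shape of that work clear, here is a hypothetical sketch of the loop this forces you into (the helper functions are stand-ins for wrapper code around network requests and XML parsing; they are not anything Blogger itself provides):

#include <string>
#include <vector>

// Hypothetical types and helpers standing in for wrapper code; Blogger itself
// only hands you per-article comment feeds and one-by-one deletion.
struct Comment { std::string id; std::string authorId; };

static std::vector<std::string> fetchArticleIds(const std::string &blogId)
{ return std::vector<std::string>(); } // stub: real version fetches and parses the article feed

static std::vector<Comment> fetchComments(const std::string &blogId, const std::string &articleId)
{ return std::vector<Comment>(); }     // stub: real version fetches and parses the comment feed

static void deleteComment(const std::string &blogId, const std::string &articleId,
                          const std::string &commentId)
{ }                                    // stub: real version issues one delete request

// Delete every comment left by one spammer, the long way around: walk every
// article, pull its full comment feed, filter by author, delete one by one.
static void deleteCommentsOf(const std::string &spammerId, const std::string &blogId)
{
    std::vector<std::string> articles = fetchArticleIds(blogId);
    for (size_t i = 0; i < articles.size(); ++i)
    {
        std::vector<Comment> comments = fetchComments(blogId, articles[i]);
        for (size_t j = 0; j < comments.size(); ++j)
        {
            if (comments[j].authorId == spammerId)
                deleteComment(blogId, articles[i], comments[j].id);
        }
    }
}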

It seems Blogger's API is really only geared towards providing various types of news feeds of a blog, and minimal remote management to allow others to create an interface for one to interact with blogger on a basic level. Nothing Blogger provides is geared towards en masse management.

Blogger also has the nice undocumented caveat that when retrieving a list of articles for a site, it includes all draft articles not published yet, if the requester is currently logged in.

But no matter, I create APIs wrapped around network requests and parsing data for a living. So using the libraries I created and use at work for this kind of thing, 200 lines later (including plenty of comments and whitespace) I had an API which allows me to delete all comments from a particular user on a Blogger site. I armed an application using my new API with the 25 users I identified, and a few minutes later, presto, they're all gone.

As of the time of this posting, there should be no spam in any of the articles here. I will have to rerun my application periodically, as well as update it with the user IDs of new spam accounts, but it shouldn't be a big deal any more.

Remember the old programming dictum: Annoyance+Laziness = Great Software. It surely beats deleting things by hand every couple of days.

Monday, October 19, 2009


Why online services suck



Does anyone other than me think online services suck?

The thing that annoys me the most is language settings. Online service designers one day had this great idea to look up the geographic location of the IP address the user visited their site from, and use it to automatically set the language to the native one of that country. While this sounds nice in theory, most people only know their mother tongue, and also go on vacation now and then, or visit some other country for business purposes.

So here I am, on business in a foreign country, and I connect my laptop into the Ethernet jack in my hotel room which comes with free Internet access, so I can check my e-mail. What's the first thing I notice? The entire interface is no longer in English. Even worse is that the various menu items and buttons are moved around in this other language.

Even Google, known for being ahead of the curve when it comes to web services can't help but make the same mistakes. I'm sitting here looking at the menu on top of Blogger, wondering which one is login.

For Google this is a worse offense compared to other service providers, as I already was logged into their main site.

Google keeps their cookies set for all eternity (well, until the next time rollover disaster), and they know I always used Google in English. Now it sees me connecting from a different country than usual and thinks I want my language settings switched? Even after I set it to English on their main page, I have to figure out how to set it to English again on Blogger and YouTube?

What's really sad about all this is that every web browser sends each website, as part of its request, a "user agent" string, which tells the web server the name of the browser, a version number, operating system details, and language information, along with an Accept-Language header listing the user's preferred languages. My browser is currently sending: "Mozilla/5.0 (X11; U; Linux x86_64; en-US; rv:1.9.0.11)". Notice the en-US? That tells the site the browser is in English, for the United States. If I downloaded a different version of Firefox, or installed a language package and switched Firefox to a different language, it would tell the web server that I did so. If one uses Windows in another language, Internet Explorer will also tell the web server the language Windows/Internet Explorer is in.

Why are these service providers ignoring browser information, and instead solely looking at geographical information? People travel all the time these days. Let us also not forget those in restrictive countries who use foreign proxy servers to access the internet.
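Honoring the browser doesn't take much. Here is a minimal server-side sketch (the GeoIP lookup is a hypothetical stand-in for whatever database a service already uses; Accept-Language itself is standard HTTP) that only falls back to geography when the browser declares nothing:

#include <string>

// Hypothetical stand-in for whatever GeoIP database the service already uses.
static std::string countryDefaultLanguage(const std::string &clientIp)
{
    (void)clientIp;
    return "en-US"; // stub
}

// Pick the interface language: trust what the browser declares first, and only
// fall back to geography when the browser says nothing at all.
static std::string pickLanguage(const std::string &acceptLanguage, const std::string &clientIp)
{
    if (!acceptLanguage.empty())
    {
        // Accept-Language lists preferences like "en-US,en;q=0.8,de;q=0.5";
        // the first entry is the user's top choice.
        std::string::size_type end = acceptLanguage.find_first_of(",;");
        return acceptLanguage.substr(0, end);
    }
    return countryDefaultLanguage(clientIp);
}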


However, common issues such as annoying language support are hardly the end of the problems. In terms of online communication, virtually all services suffer from variations of spam. Again, where is Google here? Every time I go read comments on Blogger, I see nothing but spam posts. Even when I go to clean up my own site, the spam just fills up again a few days later.

Where's the flag as spam button? Where's the flag this user as solely a spammer button?

Sure Google as a site manager lets me block all comments on my site till I personally review them to see if they're spam, but in today's need for hi-speed communication is that really an option when you may have a hot topic on hand? Why can't readers flag posts on their own?

In terms of management, why doesn't Blogger's site management features include a list where I can check off posts and hit one mass delete, instead of having to click delete and "Yes I'm sure" on each and every spam post? Why can't I delete all posts from user X and ban that user from ever posting on my site again?


Okay, maybe this isn't so much an article about why online services suck, but more about language and spam complaints, and mostly at Google for the moment. Jet-lag, and getting your e-mail interface in gibberish, does wonders for a friendly post. I'll try to come up with something better for my next article.

Sunday, June 21, 2009


How end users can utilize multicore processors



In recent years, the average desktop/workstation computer has gone from single core to multiple cores. Those of us who do video encoding, compression, or run a lot of processes at once are absolutely loving it. Where does everyone else fit in?


Why multiple cores?
How many operations a particular CPU core can do at once has been increasing over the years. If we compare what we can do today with what we were able to do a decade ago, we see we've come a long way. Our CPU cores were once doubling in speed every 18 months. However, in recent times, pushing the CPU farther and farther in how much it can do at once has been steadily getting closer and closer to the theoretical maximum. There's only so much we can do to bring the various components that make up a CPU closer together on the silicon, or make them work better together, with today's technology, and without making the chip catch fire. Therefore, we went to the next logical step: put more than one CPU on each CPU slab we stick in our motherboards.


Are CPUs currently fast enough?
CPUs have gotten so fast in recent years, that for normal every day usage, they're fast enough. Whether I'm browsing the web, writing an e-mail, doing some math, painting a pretty picture, listening to music, watching a video, doing my taxes, or most other common tasks, nothing about the machine's speed disappoints me. I've found 2 GHz to be fast enough. Unlike the old days, I'm not sitting in front of a machine wishing it could go faster, or subconsciously reaching for my remote to hit the fast forward button while watching a program load or complete an operation. Of course increased memory availability played a role in this too. In any event, computers made in the past 5 years or so have been fast enough for most people.


So what can multiple cores do for me?
Well, for certain applications, where large processes can be broken up into other smaller independent processes, the processes can be completed faster. For video encoding, a video frame can be broken up into quadrants, each one handled individually. For compression, a file can be broken up into chunks, and each compressed separately. You can now also run a lot of processes at once. You can have an HTTP Server, an application server, and a database server all running on a single machine without any one of them slowing the others down. Even for home users, you can run more background processes, such as your virus scanner while you're working on other projects. For programmers like myself, it's great, I can compile multiple programs at once, or have a program compiling in the background, while still doing other stuff, such as burning a DVD, listening to music, and reading CNN, with everything being really fast and responsive, and without my DVD drive spitting out a coaster. Also, if you like running multiple operating systems at once using VirtualBox or something similar, you can assign each operating system you're currently using its own CPU.

So what can't multiple cores do?
Multiple cores can't make single threaded applications work faster. If all you're doing is playing your average game, or writing a letter or something similar, you'll have one core being used to its maximum, while the others are just sitting there doing nothing.

Why aren't more applications multi-threaded?
This is simply a matter of there being nothing to do to make them multi-threaded. In a program where every single operation is based off of the result of the previous operation, there is no way to break it up into two components, to run each in a different thread, and by extension, each in a different CPU core. Even if there are a couple of occasional segments that can be broken up, in many cases it may not be worth the overhead of doing so. Multi-threading only works well when there are large segments, each containing many operations, that can be broken up. Multi-threading fails if the two threads have to constantly sync results between them.
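To illustrate the kind of workload that does split cleanly, here is a minimal sketch (using C++11 threads, which is newer than anything discussed in this post) where each thread works on its own half of the data and nothing needs to be synced until the very end:

#include <iostream>
#include <numeric>
#include <thread>
#include <vector>

int main()
{
    std::vector<int> data(1000000, 1);
    long long sumA = 0, sumB = 0;

    // Each thread sums its own half of the vector; no shared state is touched
    // until both threads are joined, so the cores never wait on each other.
    std::thread a([&] { sumA = std::accumulate(data.begin(), data.begin() + data.size() / 2, 0LL); });
    std::thread b([&] { sumB = std::accumulate(data.begin() + data.size() / 2, data.end(), 0LL); });
    a.join();
    b.join();

    std::cout << sumA + sumB << std::endl;
    return 0;
}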

So why should the average user bother with 4 or 8 core processors?
This is an excellent question. Why should a business or average home user waste money on these higher end CPUs? Let me call your attention to a few other points about modern computers.

Modern desktop computers even at home and work generally come with:
  • 6 audio jacks in the rear, 2 in the front
  • 8 or more USB ports
  • A video card with two DVI connectors
  • A motherboard which supports 2 video cards
Now of course there are cases where you want 7.1 sound, lots of microphones and other devices plugged in, your cameras, gamepads, and printers (hey, printers belong attached to your network switch!), multiple screens, or lots of video cards working together on video the way multiple CPU cores work together in the cases I highlighted above.

Now if you realize what you have, it all seems a little too convenient.
4 cores - 4 users.
8 audio jacks - Speakers + Microphone per user for 4 users.
8 USB ports - Keyboard+Mouse per user for 4 users.
2 video cards with 2 DVI each - 4 screens.

It almost seems like the average machine you can buy for $500-$600 is asking you to use it for 4 users!

Now the great thing is, even average integrated sound cards allow each jack to receive its own programming, and plugging something into one jack doesn't force mute another. On many models, even a jack's primary use as input or output is really left up to the software, and only the average drivers force it to be one or the other.

You can buy extension cords to keep your "virtual" computers further away from each other. You can get powered USB hubs to provide as many USB ports as you want to each user, or get keyboards which offer additional USB ports on them so users can plug in their own devices such as memory sticks.

Now look back at the average home user. Who at home with only a single computer doesn't get the wife or kids nagging that they want to do something? Who at home or at work wouldn't like to cut costs a bit? You already are going to have to buy several screens, speakers, keyboards, and mice. Now just buy one computer, maybe spend $100-$200 more on it than you wanted to, and perhaps another $50 on extension cords, and now you don't have to buy another 1-3 computers, which would add on $400-$2000.

Imagine even if you're a power user who does a lot of intensive projects that you need that really powerful computer for. How often are you really encoding those videos? Can you just have them queued up to be done at night while everyone is sleeping?

You can now also spend a little extra on that processor and video card to keep your son happy playing all those new games, while you get a lot more power out of your computer during normal hours while he's doing homework, and you're doing your taxes, all on the same machine, and still end up saving money. You also now only have to run that virus scanner on a single machine in the background, instead of several.

So now the question is, can we already do this? And how well can we do it?
There are some articles you can read on how to set it up, but it seems a lot more of a hassle than one would like.

It'd be really nice to have a special multiseat-optimized distro ready to be used in such a manner out of the box. Or perhaps a distro such as Ubuntu could provide a special mode for it. Maybe even GNOME or KDE should have an admin setting where they can detect your current setup and offer an option to turn the multiple virtual desktops they have into an environment suitable for multiple users with just a single click.

Of course this would probably need a lot more work done in the sound area to provide a virtual sound system to each user, and to make sure the underlying drivers can work with each audio jack independently. It would also mean they'd have to understand how to sandbox each particular virtual desktop now residing on each screen to the inputs in front of it.

Thoughts?

Thursday, June 18, 2009


State of sound in Linux not so sorry after all



About two years ago, I wrote an article titled the "The Sorry State of Sound in Linux", hoping to get some sound issues in Linux fixed. Now two years later a lot has changed, and it's time to take another look at the state of sound in Linux today.


A quick summary of the last article for those that didn't read it:
  • Sound in Linux has an interesting history, and historically lacked sound mixing on cards that were more software-driven than hardware-driven.
  • Many sound servers were created to solve the mixing issue.
  • Many libraries were created to solve multiple back-end issues.
  • ALSA replaced OSS version 3 in the Kernel source, attempting to fix existing issues.
  • There was a closed source OSS update which was superb.
  • Linux distributions have been removing OSS support from applications in favor of ALSA.
  • The average sound developer prefers a simple API.
  • Portability is a good thing.
  • Users are having issues in certain scenarios.


Now much has changed, namely:
  • OSS is now free and open source once again.
  • PulseAudio has become widespread.
  • Existing libraries have been improved.
  • New Linux Distributions have been released, and some existing ones have attempted an overhaul of their entire sound stack to improve users' experience.
  • People read the last article, and have more knowledge than before, and in some cases, have become more opinionated than before.
  • I personally have looked much closer at the issue to provide even more relevant information.


Let's take a closer look at the pros and cons of OSS and ALSA as they are, not five years ago, not last year, not last month, but as they are today.

First off, ALSA.
ALSA consists of three components. First part is drivers in the Kernel with an API exposed for the other two components to communicate with. Second part is a sound developer API to allow developers to create programs which communicate with ALSA. Third part is a sound mixing component which can be placed between the other two to allow multiple programs using the ALSA API to output sound simultaneously.

To help make sense of the above, here is a diagram:


Note, the diagrams presented in this article are made by myself, a very bad artist, and I don't plan to win any awards for them. Also they may not be 100% absolutely accurate down to the last detail, but accurate enough to give the average user an idea of what is going on behind the scenes.

A sound developer who wishes to output sound in their application can take any of the following routes with ALSA:
  • Output using ALSA API directly to ALSA's Kernel API (when sound mixing is disabled)
  • Output using ALSA API to sound mixer, which outputs to ALSA's Kernel API (when sound mixing is enabled)
  • Output using OSS version 3 API directly to ALSA's Kernel API
  • Output using a wrapper API which outputs using any of the above 3 methods


As can be seen, ALSA is quite flexible, has sound mixing which OSSv3 lacked, but still provides legacy OSSv3 support for older programs. It also offers the option of disabling sound mixing in cases where the sound mixing reduced quality in any way, or introduced latency which the end user may not want at a particular time.

Two points should be clear, ALSA has optional sound mixing outside the Kernel, and the path ALSA's OSS legacy API takes lacks sound mixing.

An obvious con should be seen here: ALSA, which was initially designed to fix the sound mixing issue at a lower and more direct level than a sound server, doesn't provide mixing for "older" programs.

Obvious pros are that ALSA is free, open source, has sound mixing, can work with multiple sound cards (all of which OSS lacked during much of version 3's lifespan), is included as part of the Kernel source, and tries to cater to old and new programs alike.

The less obvious cons are that ALSA is Linux only, it doesn't exist on FreeBSD or Solaris, or Mac OS X or Windows. Also, the average developer finds ALSA's native API too hard to work with, but that is debatable.


Now let's take a look at OSS today. OSS is currently at version 4, and is a completely different beast than OSSv3 was.
Where OSSv3 went closed source, OSSv4 is open source today, under the GPL, 3-clause BSD, and CDDL.
While the decade-old OSSv3 was included in the Linux Kernel source, the new, greatly improved OSSv4 is not, and thus may be a bit harder for the average user to try out. Older OSSv3 lacked sound mixing and support for multiple sound cards; OSSv4 does not. Most people who discuss OSS, or try OSS to see how it stacks up against ALSA, are unfortunately referring to or testing the one that is a decade old, providing a distortion of the facts as they are today.

Here's a diagram of OSSv4:
A sound developer wishing to output sound has the following routes on OSSv4:
  • Output using OSS API right into the Kernel with sound mixing
  • Output using ALSA API to the OSS API with sound mixing
  • Output using a wrapper API to any of the above methods


Unlike in ALSA, when using OSSv4, the end user always has sound mixing. Also because sound mixing is running in the Kernel itself, it doesn't suffer from the latency ALSA generally has.

Although OSSv4 does offer their own ALSA emulation layer, it's pretty bad, and I haven't found a single ALSA program which is able to output via it properly. However, this isn't an issue, since as mentioned above, ALSA's own sound developer API can output to OSS, providing perfect compatibility with ALSA applications today. You can read more about how to set that up in one of my recent articles.

ALSA's own library is able to do this, because it's actually structured as follows:

As you can see, it can output to either OSS or ALSA Kernel back-ends (other back-ends too which will be discussed lower down).

Since both OSS and ALSA based programs can use an OSS or ALSA Kernel back-end, the differences between the two are quite subtle (note, we're not discussing OSSv3 here), and boil down to what I know from research and testing; they are not immediately obvious.

  • OSS always has sound mixing, ALSA does not.
  • OSS sound mixing is of higher quality than ALSA's, due to OSS using more precise math in its sound mixing.
  • OSS has less latency compared to ALSA when mixing sound, due to everything running within the Linux Kernel.
  • OSS offers per-application volume control, ALSA does not.
  • ALSA can have the operating system go into suspend mode while sound is playing and come out of it with sound still playing; OSS, on the other hand, needs the application to restart sound.
  • OSS is the only option for certain sound cards, as ALSA drivers for a particular card are either really bad or nonexistent.
  • ALSA is the only option for certain sound cards, as OSS drivers for a particular card are either really bad or nonexistent.
  • ALSA is included in Linux itself and is easy to get ahold of, OSS (v4) is not.

Now the question is where does the average user fall in the above categories? If the user has a sound card which only works (well) with one or the other, then obviously they should use the one that works properly. Of course a user may want to try both to see if one performs better than the other one.

If the user really needs a program to output sound right up until Linux goes into suspend mode, and then continue where it left off when resuming, then ALSA is (currently) the only option. I personally don't find this to be a problem, and furthermore I doubt a large percentage of users even use suspend in Linux. Suspend in general isn't great in Linux, since some rogue piece of hardware like a network or video card frequently screws it up.

If the user doesn't want a hassle, ALSA also seems the obvious choice, as it's shipped directly with the Linux Kernel, so it's much easier for the user to run a modern ALSA than a modern OSS. However, it should be up to the Linux distribution to handle these situations; to the end user, switching from one to the other should be seamless and transparent. More on this later.

Yet we also see that, thanks to better sound mixing and lower latency when mixing is involved, OSS is the better choice as long as none of the above issues are present. The better mixing is generally only noticed at higher volume levels or in rare cases, and the latency I'm referring to is generally only a problem if you play heavy-duty games; it's not a problem if you just want to listen to some music or watch a video.


But wait this is all about the back-end, what about the whole developer API issue?

Many people like to point fingers at the various APIs (I myself did too to some extent in my previous article). But they really don't get it. First off, this is how your average sound wrapper API works:

The program outputs sound using a wrapper such as OpenAL, SDL, or libao; the sound then goes to the appropriate high-level or low-level back-end, and the user doesn't have to worry about it.

Since the back-ends can be the various operating systems' own sound APIs, they allow a developer to write a program which has sound on Windows, Mac OS X, Linux, and more, pretty easily.
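To make that concrete, here's a minimal sketch of what such a program looks like; this is my own illustration rather than code from any of the projects mentioned, and it assumes libao and its development headers are installed. The program hands libao one second of 16-bit samples; whether they end up going out via OSS, ALSA, or something else entirely is decided by libao's configuration, not by the program:

#include <ao/ao.h>
#include <math.h>
#include <stdlib.h>
#include <string.h>

int main(void)
{
    ao_initialize();

    /* 16-bit, stereo, 44.1 kHz, native byte order. */
    ao_sample_format fmt;
    memset(&fmt, 0, sizeof(fmt));
    fmt.bits = 16;
    fmt.channels = 2;
    fmt.rate = 44100;
    fmt.byte_format = AO_FMT_NATIVE;

    /* Open whatever back-end libao is configured to use by default. */
    ao_device *dev = ao_open_live(ao_default_driver_id(), &fmt, NULL);
    if (dev == NULL)
        return 1;

    /* One second of a 440 Hz tone. */
    int frames = fmt.rate;
    short *buf = malloc(frames * 2 * sizeof(short));
    for (int i = 0; i < frames; i++) {
        short s = (short)(0.3 * 32767.0 * sin(2.0 * M_PI * 440.0 * i / fmt.rate));
        buf[2 * i]     = s;  /* left  */
        buf[2 * i + 1] = s;  /* right */
    }
    ao_play(dev, (char *)buf, frames * 2 * sizeof(short));

    free(buf);
    ao_close(dev);
    ao_shutdown();
    return 0;
}

Something like gcc tone.c -lao -lm should build it. The point is how little of the code has anything to do with which sound system actually sits underneath.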

Some, like Adobe, like to claim that this is some kind of problem and makes it impossible to output sound properly in Linux. Nothing could be further from the truth; such graphs are very misleading. OpenAL, SDL, libao, GStreamer, NAS, Allegro, and more all exist on Windows too, and I don't see anyone complaining there.

A similar diagram could be drawn for Windows, with the same wrappers sitting on top of the native Windows sound APIs. It would be by no means complete either, as there's also XAudio, other wrapper libs, and even some Windows-only sound libraries whose names I've forgotten.

This by no means bothers anybody, and should not be made an issue.

In terms of usage, the libraries stack up as follows:
OpenAL - Powerful, tricky to use, great for "3D audio". I personally was able to get a lot done by following a couple of examples, and only spent an hour or two adding sound to an application.
SDL - Simplistic, uses a callback API (see the sketch after this list), decent if it fits your program's design. I personally was able to add sound to an application in half an hour with SDL, although I don't think it fits every workload.
libao - Very simplistic, incredibly easy to use, although problematic if your application can't afford to block on sound output. I added sound to a multitude of applications using libao in a matter of minutes. I just find it a bit more annoying if you need to give your program its own sound thread, so again it depends on the workload.
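Since "callback API" may be an unfamiliar term, here's a rough sketch of what SDL sound looks like, again purely illustrative and written against the SDL 1.2 API of the time: you hand SDL a function, and SDL calls it from its own audio thread whenever the chosen back-end wants more samples.

#include <SDL/SDL.h>
#include <math.h>
#include <string.h>

static double phase = 0.0;

/* SDL calls this whenever it needs len more bytes of audio. */
static void fill_audio(void *userdata, Uint8 *stream, int len)
{
    Sint16 *out = (Sint16 *)stream;
    int frames = len / (2 * sizeof(Sint16)); /* 16-bit stereo */

    (void)userdata;
    for (int i = 0; i < frames; i++) {
        Sint16 s = (Sint16)(0.3 * 32767.0 * sin(phase));
        out[2 * i]     = s;  /* left  */
        out[2 * i + 1] = s;  /* right */
        phase += 2.0 * M_PI * 440.0 / 44100.0;
    }
}

int main(void)
{
    SDL_AudioSpec want;

    if (SDL_Init(SDL_INIT_AUDIO) != 0)
        return 1;

    memset(&want, 0, sizeof(want));
    want.freq = 44100;
    want.format = AUDIO_S16SYS;
    want.channels = 2;
    want.samples = 1024;        /* buffer size in sample frames */
    want.callback = fill_audio;

    if (SDL_OpenAudio(&want, NULL) != 0)
        return 1;

    SDL_PauseAudio(0);          /* start calling the callback */
    SDL_Delay(2000);            /* let the tone play for two seconds */

    SDL_CloseAudio();
    SDL_Quit();
    return 0;
}

Which back-end SDL actually talks to can typically be steered with the SDL_AUDIODRIVER environment variable (for example dsp for OSS or alsa for ALSA), which is handy when experimenting with the combinations listed further down.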

I haven't played with the other sound wrappers, so I can't comment on them, but the same ideas are played out with each and every one.

Then of course there are the actual OSS and ALSA APIs on Linux. Why would anyone use them when there are lovely wrappers that are more portable and tailored to particular workloads? In the average case there is indeed no reason to use OSS or ALSA's API directly to output sound. In some cases, though, a wrapper API adds latency you may not want, and you may not need any of the advantages a wrapper provides.

Here's a breakdown of how OSS and ALSA's APIs stack up.
OSSv3 - Easy to use (see the sketch after this list), most developers I spoke to like it, exists on every UNIX but Mac OS X. I added sound to applications using OSSv3 in 10 minutes.
OSSv4 - Mostly backwards compatible with v3, even easier to use, exists on every UNIX except Mac OS X (and except Linux when the ALSA back-end is in use), and has sound re-sampling and AC3 decoding out of the box. I added sound to several applications using OSSv4 in 10 minutes each.
ALSA - Hard to use, most developers I spoke to dislike it, poorly documented, not available anywhere but Linux. Some developers however prefer it, as they feel it gives them more flexibility than the OSS API. I personally spent 3 hours trying to make heads or tails of the documentation and add sound to an application. Then I found sound only worked on the machine I was developing on, and had to spend another hour going over the docs and tweaking my code to get it working on both machines I was testing on at the time. Finally, I released my application with the ALSA back-end, only to find several people complaining about no sound, and started receiving patches from several developers. Many of those patches fixed sound on their machines, but broke sound on one of mine. Here we are a year later and, after many hours wasted by several developers, my application now seems to output sound decently via ALSA on all machines tested, but I sure don't trust it. We as developers don't need these kinds of issues. Of course, you're free to disagree, and even cite examples of how you figured out the documentation, added sound quickly, and had it work flawlessly everywhere for everyone who tested your application. I must just be stupid.
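To show why I call the OSS API easy to use, here's roughly what the classic pattern looks like. This is a sketch rather than production code: error checking on the ioctls is omitted, and a real program should verify the values the driver actually accepted. You open /dev/dsp, issue a few ioctls for the sample format, channel count, and rate, and then simply write() your samples:

#include <fcntl.h>
#include <math.h>
#include <sys/ioctl.h>
#include <sys/soundcard.h>
#include <unistd.h>

int main(void)
{
    /* The sound device is just a file. */
    int fd = open("/dev/dsp", O_WRONLY);
    if (fd < 0)
        return 1;

    int fmt = AFMT_S16_LE;   /* 16-bit little endian samples */
    int channels = 2;        /* stereo                       */
    int speed = 44100;       /* 44.1 kHz                     */
    ioctl(fd, SNDCTL_DSP_SETFMT, &fmt);
    ioctl(fd, SNDCTL_DSP_CHANNELS, &channels);
    ioctl(fd, SNDCTL_DSP_SPEED, &speed);

    /* One second of a 440 Hz tone, written like any other file data. */
    static short buf[44100 * 2];
    for (int i = 0; i < 44100; i++) {
        short s = (short)(0.3 * 32767.0 * sin(2.0 * M_PI * 440.0 * i / 44100.0));
        buf[2 * i]     = s;
        buf[2 * i + 1] = s;
    }
    write(fd, buf, sizeof(buf));
    close(fd);
    return 0;
}

That same handful of calls is what both OSSv3 and OSSv4 programs boil down to, which is a big part of why adding sound this way takes minutes rather than hours.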

I previously thought the OSS vs. ALSA API issue was significant to end users, insofar as what they're locked into, but really it only matters to developers. The main issue, though, is that if I want to take advantage of all the extra features OSSv4's API has to offer (and I do), I have to use the OSS back-end. Users don't have to care about this, unless they use programs which take advantage of these features, and there are few of those.

However regarding wrapper APIs, I did find a few interesting results when testing them in a variety of programs.
App -> libao -> OSS API -> OSS Back-end - Good sound, low latency.
App -> libao -> OSS API -> ALSA Back-end - Good sound, minor latency.
App -> libao -> ALSA API -> OSS Back-end - Good sound, low latency.
App -> libao -> ALSA API -> ALSA Back-end - Bad sound, horrible latency.
App -> SDL -> OSS API -> OSS Back-end - Good sound, really low latency.
App -> SDL -> OSS API -> ALSA Back-end - Good sound, minor latency.
App -> SDL -> ALSA API -> OSS Back-end - Good sound, low latency.
App -> SDL -> ALSA API -> ALSA Back-end - Good sound, minor latency.
App -> OpenAL -> OSS API -> OSS Back-end - Great sound, really low latency.
App -> OpenAL -> OSS API -> ALSA Back-end - Adequate sound, bad latency.
App -> OpenAL -> ALSA API -> OSS Back-end - Bad sound, bad latency.
App -> OpenAL -> ALSA API -> ALSA Back-end - Adequate sound, bad latency.
App -> OSS API -> OSS Back-end - Great sound, really low latency.
App -> OSS API -> ALSA Back-end - Good sound, minor latency.
App -> ALSA API -> OSS Back-end - Great sound, low latency.
App -> ALSA API -> ALSA Back-end - Good sound, bad latency.

If you're having a hard time trying to wrap your head around the above chart, here's a summary:
  • OSS back-end always has good sound, except when using OpenAL->ALSA to output to it.
  • ALSA generally sounds better when using the OSS API, and has lower latency (generally because that avoids any sound mixing as per an earlier diagram).
  • OSS related technology is generally the way to go for best sound.


But wait, where do sound servers fit in?

Sound servers were initially created to deal with problems caused by OSSv3 which are now non-existent, namely sound mixing. The sound server stack today is simply another layer: the application outputs to a sound server such as aRts or ESD, which in turn outputs to ALSA or OSS.

As should be obvious, these sound servers today do nothing except add latency, and should be done away with. KDE 4 has moved away from the aRts sound server, and instead uses a wrapper API known as Phonon, which can deal with a variety of back-ends (some of which can themselves go through a particular sound server if need be).

However as mentioned above, ALSA's mixing is not of the same high quality as OSS's is, and ALSA also lacks some nice features such as per application volume control.

Now one could turn off ALSA's low quality mixer, or have an application do its own volume control internally by modifying the sound wave it's outputting, but these choices aren't friendly towards users or developers.
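For the curious, "modifying the sound wave it's outputting" just means the application scales its own samples before handing them to the sound system. A naive sketch, assuming 16-bit signed samples, might look like this:

#include <stddef.h>
#include <stdint.h>

/* Naive per-application volume control: scale every signed 16-bit sample,
 * clamping so that loud input doesn't wrap around and distort. */
static void apply_volume(int16_t *samples, size_t count, float volume)
{
    for (size_t i = 0; i < count; i++) {
        float v = samples[i] * volume;
        if (v > 32767.0f)
            v = 32767.0f;
        else if (v < -32768.0f)
            v = -32768.0f;
        samples[i] = (int16_t)v;
    }
}

It works, but every application reimplementing this (and growing its own volume slider) is exactly the sort of duplication that proper per-application volume control in the sound system itself, as OSSv4 provides, avoids.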

Seeing this, Fedora and Ubuntu have both stepped in with a so-called state-of-the-art sound server known as PulseAudio.

Remember how ALSA's library can output to other back-ends besides its own Kernel back-end? PulseAudio is one of them: programs written using ALSA's API can output to PulseAudio, and so use PulseAudio's higher quality sound mixer seamlessly, without requiring any modification of old programs. PulseAudio is also able to send sound to another PulseAudio server on the network to output sound remotely.

PulseAudio's own stack, however, is very complex, and a 100% accurate breakdown of it would be even more complex.

Thanks to PulseAudio being so advanced, most of the wrapper APIs can output to it, and Fedora and Ubuntu ship with all of that set up for the end user. It can in some cases also receive sound written for another sound server, such as ESD, without requiring ESD to run on top of it. But it also means that many programs now go through many layers before they reach the sound card.

Some have hailed PulseAudio as the new voodoo that will be our savior: sound written to any particular API can be output via it, and it has great mixing to boot.

Except many users who play games, for example, are crying that this adds a TREMENDOUS amount of latency, very noticeable even in not-so-high-end games. Users don't like hearing enemies explode a full 3 seconds after they saw the explosion on screen. Don't let anyone kid you: there's no way a sound server, especially one with this level of bloat and complexity, can ever work with anything approaching the low latency acceptable for games.

Compare the insanity that is PulseAudio with the much flatter OSSv4 stack described earlier. Which do you think is the better sound stack, considering that their sound mixing, per application volume control, compatibility with applications, and other features are on par?

And yes, let's not forget the applications. I'm frequently told that some application is written to use a particular API, and that therefore either OSS or ALSA needs to be the back-end it uses. However, as explained above, either API can be used on either back-end. If set up right, you don't have to lack sound in newer versions of Flash when using the OSS back-end.

So where are we today exactly?
The biggest issue I find is that the distributions simply aren't set up to make the choice easy for users. Debian and derivatives provide a Linux sound base package to select whether you want OSS or ALSA as your back-end, except it really doesn't do anything. Here's what we actually need from such a package:
  • On selecting OSS, it should install the latest OSS package, as well as ALSA's ALSA API->OSS back-end interface, and set it up.
  • At a minimum, configure an installed OpenAL to use the OSS back-end, and preferably SDL, libao, and other wrapper libraries as well.
  • Recognize the setting when installing a new application or wrapper library and configure that to use OSS as well.
  • Do all the above in reverse when selecting ALSA instead.

Such a setup would allow users to easily switch between them if their sound card only worked with the one which wasn't the distribution's default. It would also allow users to objectively test which one works better for them, if they care to, and to use the best possible setup they can. Users should be given this capability. I personally believe OSS is superior, but we should leave the choice up to the user if they don't like whichever is the system default.

Now I repeatedly hear the claim: "But, but, OSS was taken out of the Linux Kernel source, it's never going to be merged back in!"

Let's analyze that objectively. Does it matter what is included in the default Linux Kernel? Can we not use VirtualBox instead of KVM, even though KVM is part of the Linux Kernel and VirtualBox isn't? Can we not use KDE or GNOME, even though neither of them is part of the Linux Kernel?

What matters in the end is what the distributions support, not what's built in. Who cares what's built in? The only difference is that the Kernel developers themselves won't maintain anything not officially part of the Kernel, but that's precisely the job the various distributions fill: ensuring their Kernel modules and related packages work shortly after each new Kernel comes out.

Anyways, a few closing points.

I believe OSS is the superior solution to ALSA, although your mileage may vary. It'd be nice if OSS and ALSA just shared all their drivers, so there'd be no issue where one supports a particular sound card but not the other.

OSS should get suspend support and anything else it lacks in comparison to ALSA, even if insignificant. Here's a hint: why doesn't Ubuntu hire the OSS author, who is currently looking for a job, and make it friendlier for the end user in these last few cases? Also, throw some people at improving the existing volume control widgets to work better with the new OSSv4, and maybe get stuff like HAL to recognize OSSv4 out of the box.

Problems should be fixed directly, not in a roundabout manner as is done with PulseAudio; that garbage needs to go. If users need remote sound (and few do), one should simply be able to map /dev/dsp over NFS and output everything to OSS that way, achieving network transparency at the file level as UNIX was designed for (everything is a file), instead of all the non-UNIX hacks in place today in regard to sound.

The distributions really need to get their act together. In recent times Draco Linux has come out, which is OSS only, and Arch Linux seems to treat OSSv4 as a full-fledged citizen, giving end users a choice; although I'm told they're both bad in the ALSA compatibility department, not setting it up properly for the end user, and in the case of Arch Linux, requiring the user to modify the config files of each application or library that uses sound.

OSS is portable thanks to its OS abstraction API, making it relevant to the UNIX world as a whole, unlike ALSA. FreeBSD, however, uses its own take on OSS to avoid the abstraction API, but it's still mostly compatible, and one can install the official OSSv4 on FreeBSD if one so desires.

Sound in Linux really doesn't have to be in such a sorry state after all; the distributions just have to get their act together and stop with all the finger pointing, propaganda, and FUD going around, which is only relevant to ancient versions of OSS, if not downright irrelevant or untrue. Let's stop the madness being perpetrated by the likes of Adobe, the PulseAudio propaganda machine, and whoever else is out there. Let's be objective and use the best solutions instead of settling for mediocrity or hack upon hack.

Monday, May 25, 2009


Perfect sound with OSS version 4



So I happened to be watching some packages get upgraded in Debian during a dist-upgrade, and something caught my eye: the package "libasound2-plugins". I wondered what kind of plugins it provided, so I asked APT to show me what it was. Here's what came up:


Package: libasound2-plugins
Priority: optional
Section: libs
Installed-Size: 488
Maintainer: Debian ALSA Maintainers
Architecture: amd64
Source: alsa-plugins
Version: 1.0.19-2
Depends: libasound2 (>> 1.0.18), libc6 (>= 2.2.5), libjack0 (>= 0.116.1), libpulse0 (>= 0.9.14), libsamplerate0
Filename: pool/main/a/alsa-plugins/libasound2-plugins_1.0.19-2_amd64.deb
Size: 119566
MD5sum: 89efb281a3695d8c0f0d3c153ff8041a
SHA1: fdd93b68ec0b8e6de0b67b3437b9f8c86c04b449
SHA256: 7eb5b023373db00ca1b65765720a99654a0b63be741a5f5db2516a8881048aa6
Description: ALSA library additional plugins
This package contains plugins for the ALSA library that are
not included in the main libasound2 package.
.
The following plugins are included, among other:
- a52: S16 to A52 stream converter
- jack: play or capture via JACK
- oss: run native ALSA apps on OSS drivers
- pulse: play or capture via Pulse Audio
- lavcrate, samplerate and speexrate: rate converters
- upmix and vdownmix: convert from/to 2 and 4/6 channel streams
.
ALSA is the Advanced Linux Sound Architecture.
Enhances: libasound2
Homepage: http://www.alsa-project.org/
Tag: devel::library, role::plugin, works-with::audio


Now something jumped out at me: "run native ALSA apps on OSS drivers"?
If you read my sound article, you know I'm an advocate of OSSv4, since it seems superior where it matters.

So I looked into the documentation for the Debian (as well as Ubuntu) package "libasound2-plugins" on how this ALSA over OSS works exactly.

I edited /etc/asound.conf, and changed it to the following:

pcm.!default {
    # Route the default ALSA PCM device through ALSA's OSS plugin,
    # so every ALSA application ends up outputting to /dev/dsp.
    type oss
    device /dev/dsp
}

ctl.!default {
    # Do the same for the default control (mixer) device.
    type oss
    device /dev/mixer
}

And presto, every ALSA application started properly outputting sound for me. No more need to fiddle with the configuration of each sound layer to make it use OSS, since the distros don't offer any automatic way to configure them.

I could never get Flash on 64-bit to have sound before, even though each new OSS release says they "fixed it". Now it works for me.

I tested the following with ALSA:
MPlayer (-ao alsa)
Firefox, flashplugin-nonfree, Homestar Runner
ZSNES (-ad alsa)
bsnes (defaults)

Oh and in case you're wondering, mixing is working perfectly. I tried running four instances of MPlayer, two set to use ALSA, the other two set to output using OSS, and I was able to hear all four at once.

Now it's great to set up each application and sound layer individually to use OSS, so there's less overhead. But just making this one simple change means you don't have to do that for each application where the distro defaulted to ALSA, or suffer incompatibility when a particular application is ALSA only.


Note that depending on how you installed OSS and which version, it may have tried forcing ALSA programs to use a buggy ALSA emulation library, which is incomplete and not bug-for-bug compatible with the real ALSA. If that happened to you, here's how to switch back to the real ALSA library, which is 100% ALSA compatible, because it is the real ALSA.

First check where everything is pointing with the following command:

ls -la /usr/lib/libasound.*

I get the following:

-rw-r--r-- 1 root root 1858002 2009-03-04 11:09 /usr/lib/libasound.a
-rw-r--r-- 1 root root 840 2009-03-04 11:09 /usr/lib/libasound.la
lrwxrwxrwx 1 root root 18 2009-03-06 03:35 /usr/lib/libasound.so -> libasound.so.2.0.0
lrwxrwxrwx 1 root root 18 2009-03-06 03:35 /usr/lib/libasound.so.2 -> libasound.so.2.0.0
-rw-r--r-- 1 root root 935272 2009-03-04 11:09 /usr/lib/libasound.so.2.0.0

Now as you can see libasound.so and libasound.so.2 both point to libasound.so.2.0.0. The bad emulation is called libsalsa. So if instead of seeing "-> libasound..." you see "-> libsalsa..." there, you'll want to correct the links.

You can correct the links with the following commands as root:

cd /usr/lib/
rm libasound.so libasound.so.2              # remove the symlinks pointing at libsalsa
ln -s libasound.so.2.0.0 libasound.so       # recreate them pointing at the real ALSA library
ln -s libasound.so.2.0.0 libasound.so.2

If you're using Ubuntu and don't know how to switch to root, try sudo su prior to the steps above.

If you'd like to configure as many applications as possible to use OSS directly to avoid any unneeded overhead, see the documentation here and here, which provides a lot of useful information. However, if you're happy with your current setup, there's no need to go through the hassle of configuring each additional application as long as you've set up ALSA to use OSS.

Enjoy your sound!

Ancient coding ideas finally in English - Part 2





  1. What is the best way to write it? Whatever is best for the program itself and best for the programmers.
    Be just as careful with minor code as with major code, as you don't know in the end which will be more important.
    Consider what you lose when not writing the code properly against its gains, and consider the benefits of a poor implementation against what it loses.
    Focus on three things and you will avoid code repetition: Know what other code exists, others will review your code, the code will exist for a long time.


  2. It's best to write code for the customer's demands; they will overlook its negative qualities.
    Code that's not written for those buying it will be for naught, and improving it will lead to code repetition.
    Those writing for a community should write the code for its own sake; those that came before you will help you.
    You will end up getting credit for the work that gets added on as if you yourself did it.


  3. Be wary of standards bodies or other organizations, since they only recruit people for their own agenda.
    They will act like they love you when it is to their advantage, but they will not stand by you when you need it.


  4. Desire what the community wants, and the community will want what you desire.
    Do what they want instead of what you want, and other programmers will desire what you desire.


  5. Don't alienate the community.
    Don't trust code till the code is about to be recycled.
    Don't judge an implementation till you try to implement it yourself.
    Don't do something that is unaccepted hoping it will eventually be accepted.
    Don't plan to only write it properly later, maybe you won't.


  6. An idiot doesn't care about code repetition.
    One who is unlearned will never be a hero.
    One who is embarrassed will never learn.
    One who is always angry and demanding can't teach.
    One who solely focuses on making money will never be more than an idiot.
    Wherever there is no one else to write the code, you write it.


  7. The master who invented the above statement once looked at a hack being recycled, and stated:
    Since this hack replaced an older hack, it itself got replaced, and the hack replacing it will also be replaced.


  8. One who increases code size increases bugs.
    One who increases features increases worry.
    One who increases threads increases overhead.
    One who increases processes increases communication layers.
    One who increases the amount of code they solely enjoy increases black magic in the code.
    One who increases usefulness of the code increases its lifespan.
    One who increases the amount of thought put into writing the code increases its intelligence.
    One who increases his own agenda only does so for himself.
    One who increases usefulness for the sake of the community will earn for himself everlasting gratitude.


  9. If you write a lot of code, don't view yourself as so special, because it is for this reason you became a programmer.


  10. The master who invented the above statement had five students.
    The first was someone who never overlooked a detail.
    The second always looked to the source of the issue.
    The third was a hero.
    The fourth always avoided code repetition.
    The fifth was always increasing his own understanding and knowledge of techniques.


  11. The master said of his first student, if we weighed him against everyone else out there, his abilities would outweigh them all.
    Another master said, if the fifth student was weighed against the other four, his abilities would outweigh them all.


  12. The master asked his students: What is the best trait for becoming a good programmer?
    The first answered: One who carefully checks his code.
    The second answered: One who has a good friend to bounce ideas off of.
    The third answered: One who sees the needs of those around him.
    The fourth answered: One who anticipates future needs.
    The fifth answered: One who desires to write the best code he can.
    The master stated, the fifth answered best, as his answer includes all the others.

    The master then asked his students: What should a good programmer avoid?
    The first answered: Ignoring what is going on in the code.
    The second answered: Idiot friends.
    The third answered: A weak community.
    The fourth answered: Allocating resources without freeing them.
    The fifth answered: Becoming complacent in his understanding of what is best.
    The master stated, the fifth answered best, as one who becomes complacent will end up with what everyone else answered.


  13. Stick up for your fellow programmers as you would for yourself.
    Don't get angry easily.
    Fix the code before the problem is apparent.
    Enjoy the fire of clever code, but be careful lest you be burned by it.


  14. Bad analyzing, no desire for good code, and hating your community will all cause one's code to be thrown away.


  15. Managing resources you allocate should be as important to you as managing the resources you already have.
    Perfect your programming abilities, as this is what a programmer is.
    All your code should be written for its own sake.


  16. Be meticulous in learning about what you need to accomplish, and the tools necessary to do so.
    Don't make the theory of prime importance.
    Don't underestimate yourself, and don't think your work will only be minor.


  17. Have a good answer ready for those who may find issues with your code.
    Understand your customer.
    Understand your employer.


  18. The day may be short, but there's a lot of work to be done.
    Programmers are lazy, even though their programs do much.
    The boss is demanding.


  19. You don't have to do all the work by yourself, you don't have to finish every last bit of it.
    You however can't leave the code in disarray.
    If you write a lot of good code, you'll be properly compensated for it.
    Believe that those who employ you will compensate you for your effort.
    Know that those who put in the effort will be compensated greatly in years to come, even if not initially.

Sunday, May 24, 2009


Ancient coding ideas finally in English - Part 1





  1. Each great programmer learned programming from the master before him.
    Be precise in writing code.
    Teach programming to others, as you'll understand how things work better yourself when you're forced to explain it.
    Put safety in your code, don't just look at what minimally works, protect yourself from careless mistakes.


  2. Good programs depend on 3 things:
    The code.
    The hardware.
    The presentation.


  3. Don't write a program to get bare minimum done, and be over with it.
    Rather do it for the sake of the program itself, and try to get the most out of the program.


  4. Your code should be welcome to other good programmers.
    You should pay attention to minor details of their code.
    Pay attention to their ideas.


  5. Your code should be open for improvements.
    Let even simple programmers review it for mistakes.
    Be minimalistic on communication layers in your code.
    [The previous] refers to communicating with your own code; this applies even more so when your code has to communicate across a network or with other external dependencies.
    Excessive external dependencies or communication across a network only hurts your program, and worsens code quality, and you'll end up with an unmanageable program.


  6. Declare over yourself a coding standard.
    Buy yourself friends to bounce ideas off of.
    Accept every new idea for consideration, no matter how ridiculous it might seem at first glance.


  7. Stay away from bad libraries.
    Don't statically link to them.
    Don't give up when it's a disaster out there.


  8. Don't make yourself a sole reviewer of large amounts of code.
    When presented with two implementations, assume both of them are complete garbage until convinced otherwise.
    When having to decide between two implementations you are presented with, examine the pros and cons of the situation, and what improvements can be done, and be happy when proponents of each were able to walk away being able to accept the pros and cons and reach the best implementation.


  9. Review code in depth, and have ever changing methods to do so, otherwise a severe bug can slip by you.


  10. Love writing code.
    Hate bureaucracy.
    Don't give in to status quo.


  11. Be careful in what you say to others, or you may create a rift in your community, the outcome of the current work will fail, and your program will be for naught.


  12. Love peace and run after it.
    Love the other programmers, and show them how to write better code.
    One who only seeks to advance his own self will end up destroying himself.
    One who doesn't learn what's new will end up not being able to write code with what he currently knows.
    One who doesn't learn at all shouldn't be writing code.
    One who doesn't use libraries the way they're supposed to be used will be lost in the end.


  13. If you don't write the code, who is going to write it for you?
    If you're always writing all the libraries yourself, what's the point?
    If you don't write it today, then when exactly?


  14. Take programming seriously.
    Optimize the little bits which do the most.
    Welcome all patches with a smile (even if you don't commit them all).


  15. Make for yourself a system, and stay away from assumptions, learn about it if you're not sure.
    Don't write a program based on rough estimates.


  16. I've spent a lot of time with those who know only theory. Don't argue with them, just pay attention.
    The theory isn't the main thing, but the practical. If you dwell on the theory, you'll end up being repetitious with your code.


  17. On three things a good program depends:
    Meeting Requirements.
    Correctness.
    Peace with the community.