
Wednesday, December 14, 2011


Progression and Regression of Desktop User Interfaces



As this Gregorian year comes to a close, with various new interfaces out now, and some new ones on the horizon, I decided to recap my personal experiences with user interfaces on the desktop, and see what the future will bring.

When I was younger, there were a few desktop environments floating around, and I'd seen a couple of them at school or at a friend's house. But the first one I had on my own personal computer, and really played around with, was Windows 3.

Windows 3 came with something called Program Manager. Here's what it looked like:




The basic idea was that you had a "start screen", where your various applications were grouped by their general category. Within each of these "groups", you had shortcuts to the specific applications. Certain apps, like the calculator, you only used in a small window, but most serious apps were used maximized. If you wanted to switch to another running app, you either pressed the alt+tab keyboard shortcut to cycle through them, or you minimized everything, at which point you saw a screen listing all the currently running applications.

Microsoft also shipped a very nice file manager, known as "File Manager", which alone made Windows worth using. It was rather primitive though, and various companies released add-ons for it to greatly enhance its abilities. I particularly loved Wiz File Manager Pro.



Various companies also made add-ons for Program Manager, such as ones that allowed nested groups, shortcuts directly on the main screen outside of a group, or changing the icons of selected groups. Microsoft would probably have built some of these features in had it continued development of Program Manager.

Now not everyone used Windows all the time back then; many only started it up when they wanted something from it. I personally did everything in DOS unless I wanted to use a particular Windows app, such as file management or painting, or copying and pasting stuff around. Using Windows all the time could be annoying, as it slowed down some DOS apps, or made some of them not start at all due to lack of memory and other issues.

In the summer of 1995, Microsoft released Windows 4 to the world. It came bundled with MS-DOS 7, and provided a whole new user experience.


Now the back-most window, the "desktop", no longer contained a list of the running programs (in explorer.exe mode); instead you could put your own shortcuts and groups, and groups of groups, there. Running programs would now appear at the bottom of the screen in a "taskbar". The taskbar also contained a clock, and a "start menu" to launch applications. Some always-running applications which were meant to be background tasks appeared as tiny icons next to the clock in an area known as a "tray".

This was a major step forward in usability. Now, no matter which application you were currently using, you could see all the ones that were currently running on the edge of your screen. You could also easily click on one of them to switch to it. You didn't need to minimize all to see them anymore. Thanks to the start menu, you could also easily launch all your existing programs without needing to minimize all back down to Program Manager. The clock always being visible was also a nice touch.

Now when this came out, I could appreciate these improvements, but at the same time, I also hated it. A lot of us were using 640x480 resolutions on 14" screens back then. Having something steal screen space was extremely annoying. Also, with how little memory systems had at the time (4 or 8 MB of RAM), you generally weren't running more than 2 or 3 applications at once and could not really appreciate the benefits of an always present taskbar. Some people played with taskbar auto-hide because of this.

The start menu was also a tad ridiculous. Lots of clicks were needed to get anywhere. The default appearance also had too many useless things on it.

Did anyone actually use help? Did most people launch things from documents? Microsoft released a nice collection of utilities called "PowerToys", which contained "TweakUI", which you could use to make things work closer to how you wanted.

The way programs installed themselves into the start menu was quite messy though. Software installers would not organize their shortcuts into the groups that Windows came with; each installed its apps into its own unique group. Having 50 submenus pop out was rather unwieldy, so I personally organized each app after I installed it, grouping into "System Tools", "Games", "Internet Related Applications", and so on. It was annoying to do all this manually though, as when removing an app, you had to remove its group manually. On upgrades, one would also have to readjust things each time.

Windows 4 also came with the well known Windows Explorer file manager to replace the older one. It was across the board better than the vanilla version of File Manager that shipped with Windows 3.

I personally started dual booting MS-DOS 6.22 + Windows for Workgroups 3.11 and Windows 95 (and later tri-booted with OS/2 Warp). Basically I used DOS and Windows for pretty much everything, and Windows 95 for those apps that required it, although I managed to get most apps to work with Windows 3 using Win32s.

As I got a larger screen and more RAM though, I finally started to appreciate what Windows 4 offered, and started to use it almost exclusively. I still exited Windows into DOS 7 though for games that needed to use more RAM, or ran quicker that way on our dinky processors from back then.

Then Windows 4.10 / Internet Explorer 4 came out, which offered a couple of improvements. First was "quick launch", which allowed you to put shortcuts directly on your taskbar. You could also make more than one taskbar and put all your shortcuts on it. I personally loved this feature: I put one taskbar on top of my screen and loaded it with shortcuts to all my common applications, and had one on the bottom for classical use. Now I only had to dive into the start menu for applications I rarely used.

It also offered a feature called "active desktop" which made the background of the desktop not just an image, but a web page. I initially loved the feature, as I edited my own page, and stuck in an input line which I would use to launch a web browser to my favorite search engine at the time (which changed monthly) with my text already searched for. After a while active desktop got annoying though, as every time IE crashed, it disabled it, and you had to deal with extra error messages, and go turn it on manually.

By default this new version also made every Windows Explorer window have this huge sidebar stealing your precious screen space. Thankfully though, you could turn it off.

All in all though, as our CPUs got faster, RAM became cheaper, and large screens more available, this interface was simply fantastic. I stopped booting into other OSs, or exiting Windows itself.

Then Windows 5 came out for consumers, and UI wise, there weren't really any significant changes. The default look used these oversized bars and buttons on each window, but one could easily turn that off. The start menu got a bit of a rearrangement to now feature your most used programs up front, and various actions got pushed off to the side. Since I already put my most used programs on my quick launch on top, this start menu was a complete waste of space. Thankfully, it could also be turned off. I still kept using Windows 98 though, as I didn't see any need for this new Windows XP, and it was just a memory hog in comparison at the time.

What was more interesting to me however was that at work, all our machines ran Linux with GNOME and KDE. When I first started working there, they made me take a course on how to use Emacs, as every programmer needs a text editor. I was greatly annoyed by the thing however: where was my shift highlight with shift+delete and shift+insert, or ctrl+x and ctrl+v cut and paste? Thankfully I soon found gedit and kedit, which were like edit/notepad/wordpad but for Linux.

Now I personally don't use a lot of windowsy type software often. My primary usage of a desktop consists of using a text editor, calculator, file manager, console/terminal/prompt, hex editor, paint, cd/dvd burner, and web browser. Only rarely do I launch anything else. Perhaps a PDF, CHM, or DJVU reader when I need to read something.

After using Linux with GNOME/KDE at work for a while, I started to bring more of my work home with me and wanted these things installed on my own personal computer. So dual booting Windows 98 + Linux was the way to go. I started trying to tweak my desktop a bit, and found that KDE was way more capable than GNOME, as were most of their similar apps that I was using. KDE basically offered me everything that I loved about Windows 98, but on steroids. KWrite/KATE was notepad/wordpad but on steroids. The syntax highlighting was absolutely superb. KCalc was a fine calc.exe replacement. Konqueror made Windows Explorer seem like a joke in comparison.

Konqueror offered network transparency, and thumbnails of everything rather quickly, even of text files (no pesky thumbs.db either!). It also had an info list view which was downright amazing. This is a must have feature: too often documents are distributed under meaningless file names. With this, and many other small features, nothing else I've seen has even come close to Konqueror in terms of file management.

Konsole was just like my handy DOS Prompt, except with tab support, better maximizing, and copy and paste support. KHexEdit was simply better than anything I had for free on Windows. KolourPaint is mspaint.exe with support for way more image formats. K3b was also head and shoulders above Easy CD Creator or Nero Burning ROM. For web browsers, I was already using Mozilla anyway on Windows, and had the same option on Linux too.

For the basic UI, not only did KDE have everything I liked about Windows, it came with an organized start menu, which also popped out directly, instead of from a "programs" submenu.

The taskbar was also enhanced so that I could stick various applets on it. I could stick volume control directly on it. Not a button which popped out a slider; the sliders themselves could appear on the taskbar. Now, for example, I could easily adjust my microphone volume directly, without popping up or clicking on anything extra. There was an eyedropper which you could just push to find out the HTML color of anything appearing on the screen - great for web developers. Another thing which I absolutely love: I can see all my removable devices listed directly on my taskbar. If I have a disk in my drive, I see an icon for that drive appearing directly on my taskbar, and I can browse it, burn to it, eject it, whatever. With this, everything I need is basically at my fingertips.

Before long I found myself using Linux/KDE all the time. On newer machines I started to dual boot Windows XP with Linux/KDE, so I could play the occasional Windows game when I wanted to, but for real work, I'd be using Linux.

Then KDE 4 came out, and basically half the stuff I loved about KDE was removed. No longer is it Windows on steroids. Now, KDE 4.0 was not intended for public consumption. Somehow all the distros except for Debian seemed to miss that. Everyone wants to blame KDE for miscommunicating this, but it's quite clear 4.0 was only for developers if you watched the KDE 4 release videos. Any responsible package maintainer with a brain in their skull should've also realized that 4.0 was not ready for prime time. Yet it seems the people managing most distros are idiots that just need the "latest" version of everything, ignoring whether it's better or not, or even stable.

At the time, all the affected users were upset, and many started switching to GNOME. I don't know why anyone who truly loved the power KDE gave you would do that. If KDE 3 > GNOME 2 > KDE 4, why did people migrate to GNOME 2 when they could just not "upgrade" from KDE 3? It seems to me that people never really appreciated what KDE offers in the first place if they bothered moving to GNOME instead of keeping what was awesome.

Nowadays people tell me that KDE 4 has feature parity with KDE 3, but I have no idea what they're talking about. The Konqueror info list feature that I described above still doesn't seem to exist in KDE 4. You can no longer have applets directly on your taskbar. Now I have to click a button to pop up a list of my devices, and only then can I do something with them. No way to quickly glance to see what is currently plugged in. Konsole's tabs now stretch to the full width of your screen for some pointless reason. If you want to switch between tabs with your mouse, prepare for carpal tunnel syndrome. Who thought that icons should grow if they can? It's exactly like those idiotic theoretical professors who spout that CPU usage must be maximized at all times, and therefore use algorithms that schedule things for maximum power draw, despite that in normal cases performance does not improve by using these algorithms. I'd rather pack in more data if possible, having multiple columns of information instead of just huge icons.

KHexEdit has also been completely destroyed. No longer is the column count locked to hex (16). I can't imagine anyone who seriously uses a hex editor designed the new version. For some reason, KDE now also has to act like your mouse only has one button, and right click context menus vanished from all over the place. It's like the entire KDE development team got invaded by Mac and GNOME users who are too stupid to deal with anything other than a big button in the middle of the screen.

Over in the Windows world, Windows 6 (with 6.1 comically being consumerized as "Windows 7") came out with a bit of a revamp. The new start menu seems to fail basic quality assurance tests for anything other than the default settings. Try to set things to offer a classic start menu, and this is what you get:


If you use extra large icons for your grandparents, you also find that half of the Control Panel is now inaccessible. Ever since Bill Gates left, it seems Windows is going down the drain.

But the problems are hardly exclusive to KDE and Windows; GNOME 3 is also a major step back according to what most people tell me. Many of these users are now migrating to XFCE. If you like GNOME 2, what are you migrating to something else for? And what is it with people trying to fix what isn't broken? If you want to offer an alternate interface, great, but why break or remove the one you already have?

Now a new version of Windows is coming out with a new interface called "Metro". They should really be calling it "Retro". It's Windows 3 Program Manager with a bunch of those third party add-ons, with a more modern look to it. Gone is the Windows 4+ taskbar that let you see what was running and easily switch applications with the mouse. Now you'll need to press the equivalent of a minimize all to get to the actual desktop, and another kind of minimize to get back to Program Manager (or "start screen" as they're now calling it) to launch something else.

So say goodbye to all the usability and productivity advantages Windows 4 offered us, they want to take us back to the "dark ages" of modern computing. Sure a taskbar-less interface makes sense on handheld devices with tiny screens or low resolution, but on modern 19"+ screens? The old Windows desktop+taskbar in the upcoming version of Windows is now just another app in their Metro UI. So "Metro" apps won't appear on the classic taskbar, and "classic" applications won't appear on the Metro desktop where running applications are listed.

I'm amazed at how self destructive the entire market became over the last few years. I'm not even sure who to blame, but someone has to do something about it. It's nice to see that a small group of people took KDE 3.5, and are continuing to develop it, but they're rebranding everything with crosses everywhere and calling it "Trinity" desktop. Just what we need, to bring religious issues now into desktop environments. What next? The political desktop?

Thursday, June 3, 2010


Undocumented APIs



This topic has been kicked around all over for at least a decade. It's kind of hard to read a computer programming magazine, third party documentation, technology article site, third party developer mailing list or the like without running into it.

Inevitably someone always mentions how some Operating System has undocumented APIs, and the creators of the OS are using those undocumented APIs, but telling third party developers not to use them. There will always be the rant about unfair competition, hypocrisy, and other evils.

Large targets are Microsoft and Apple, for creating undocumented APIs that are used by Internet Explorer, Microsoft Office, Safari, iPhone development, and other in-house applications, to the detriment of Mozilla, OpenOffice.org, third parties, and hobbyist developer teams.

If you wrote one of those articles/rants/complaints, or agreed with them, I'm asking you to turn in your Geek Membership card now. You're too stupid, too idiotic, too dumb, too nearsighted, and too vilifying (add other derogatory adjectives you can think of) to be part of our club. You have no business writing code or even discussing programming.

Operating Systems and other applications that communicate via APIs provide public interfaces that are supposed to be stable. If you find a function in official public documentation, that generally signifies it is safe to use, and that the company will support it in versions for the foreseeable future. If a function is undocumented, it's because the company doesn't think it will support that function long term.

Now new versions which provide even the exact same API inevitably break things. It is so much worse for APIs which you're not supposed to be using in the first place. Yet people who complain about undocumented APIs, also complain when the new version removes a function they were using. They blame the company for purposely removing features they knew competition or hobbyists were using. That's where the real hypocrisy is.

As much as you hate a new version breaking your existing software, so does Microsoft, Apple, and others. The popularity of their products is tied to what third party applications exist. Having popular third party applications break on new versions is a bad thing. That's why they don't document APIs they plan on changing.

Now why do they use them then? Simple, you can write better software when you intimately understand the platform you're developing for, and write the most direct code possible. When issues occur because of their use, it's easy for the in-house teams to test their own software for compatibility with upcoming versions and fix them. If Microsoft sees a new version of Windows break Internet Explorer, they can easily patch the Internet Explorer provided with the upcoming version, or provide a new version altogether. But don't expect them to go patching third party applications, especially ones which they don't have the source to. They may have done so in the past, and introduced compatibility hacks, but this really isn't their problem, and you should be grateful when they go out of their way to do so.

So quit your complaining about undocumented APIs, or please go around introducing yourself as "The Dumb One".

Sunday, October 25, 2009


Distributed HTTP



A couple years back, a friend of mine got into an area of research which was rather novel and interesting at the time. He created a website hosted from his own computer, where one could read about the research and download various data samples he made.

Fairly soon, it was apparent that he couldn't host any large data samples. So he found the best compression software available, and set up a BitTorrent tracker on his computer. When someone downloaded some data samples, they would share the bandwidth load, allowing more interested parties to download at once without crushing my friend's connection. This was back at a time when BitTorrent was unheard of, but those interested in getting the data would make the effort to do so.

As time went on, his site's popularity grew. He installed some forum software on his PC, so a community could begin discussing his findings. He also gave other people login credentials to his machine, so they could edit pages and upload their own data.

The site evolved into something close to a wiki, where each project got its own set of pages describing it, and what was currently known on the topic. Each project got some images to visually provide an idea of what each data set covered before one downloaded the torrent file. Some experiments also started to include videos.

As the hits to his site kept increasing, my friend could no longer host his site on his own machine, and had to move his site to a commercial server which required payment on a monthly or yearly basis. While BitTorrent could cover the large data sets, it in no way provided a solution to hosting the various HTML pages and PNG images.

The site constantly gained popularity, and my friend was forced to keep upgrading to an increasingly more powerful server, where the hosting costs increased just as rapidly. Requests for donations, and ads on the server could only help offset costs to an extent.

I imagine other people and small communities have at times run into similar problems. I think it's time for a solution to be proposed.

Every major browser today caches files for each site it visits, so it doesn't have to re-request the same images, scripts, and styles on each page, and it requests pages conditionally, so the server only sends the full content if the page has changed since the cached copy. I think this already solves one third of the problem.
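
As a quick illustration, a conditional request and its response might look something like this (the host and path here are made up):

GET /samples/index.html HTTP/1.1
Host: research.example.org
If-Modified-Since: Sat, 10 Oct 2009 20:00:00 GMT

HTTP/1.1 304 Not Modified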

A new URI scheme can be created, perhaps dhttp://, that would act just like normal HTTP with a couple of exceptions. The browser would have some configurable options for Distributed HTTP, such as how many MB it will cache per site, how many MB overall, how many simultaneous uploads it is willing to provide per site as well as overall, which port it will serve on, and for how long it will do so. When the browser connects via dhttp://, it'll include some extra headers providing the user's desired settings on this matter. The HTTP server will be modified to keep track of which IP addresses connected to it, and which files they downloaded recently.

When a request for a file comes into the DHTTP server, it can respond with a list of perhaps five IP addresses to choose from (if available), chosen by an algorithm designed to round robin across the available browsers connecting to the site, respecting the preferences each of them declared. The browser can then request the same file from one of those IP addresses with a normal HTTP request. The browser would need a miniature HTTP server built in which understands that requests coming to it that appear to be for a DHTTP server should be answered from its cache. It would also know not to share files in its cache which did not originate from a DHTTP server.
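
To make the idea concrete, here is a rough sketch of what such an exchange could look like. None of this is an existing protocol; the header names, addresses, and values are placeholders I'm inventing purely for illustration:

The browser asks the DHTTP server for a file, advertising its sharing settings:
GET /samples/run42/data.html HTTP/1.1
Host: research.example.org
X-DHTTP-Share-Port: 8642
X-DHTTP-Max-Uploads: 3
X-DHTTP-Cache-Limit-MB: 50

The DHTTP server, instead of sending the file, refers the browser to recent downloaders:
HTTP/1.1 200 OK
X-DHTTP-Peers: 198.51.100.7:8642, 203.0.113.20:8642, 192.0.2.33:8642

The browser then issues a plain HTTP GET for the same path to one of those peers, which answers it out of its cache.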

If the requests to each of those IP addresses time out, or respond with 404, then the browser can re-request that file from the DHTTP server with a timeout or unavailable header set for each of those IP addresses, in which case the DHTTP server will respond with the requested file directly.

The HTTP server should also know to keep track of when files are updated, so it knows not to refer a visitor to an IP address which contains an old file. This forwarding concept should also be disabled in cases of private data (login information), or dynamic pages. However for static (or dynamic which is only generated periodically) public data, all requests should be satisfiable by this described method.

Thought would have to be put into how to handle "leach" browsers which never share from their cache, or just always request from the DHTTP server with timeout or unavailable headers sent.

I think something implemented along these lines can help those smaller communities that host sites on their own machines, and would like to remain self hosting, or would like to alleviate hosting costs on commercial servers.

Thoughts?

Monday, May 25, 2009


Perfect sound with OSS version 4



So I happened to be keeping an eye on some packages being upgraded in Debian during a dist-upgrade, and one caught my attention: the package "libasound2-plugins". I wondered what kind of plugins it provided, so I asked APT to show me what it was. Here's what came up:


Package: libasound2-plugins
Priority: optional
Section: libs
Installed-Size: 488
Maintainer: Debian ALSA Maintainers
Architecture: amd64
Source: alsa-plugins
Version: 1.0.19-2
Depends: libasound2 (>> 1.0.18), libc6 (>= 2.2.5), libjack0 (>= 0.116.1), libpulse0 (>= 0.9.14), libsamplerate0
Filename: pool/main/a/alsa-plugins/libasound2-plugins_1.0.19-2_amd64.deb
Size: 119566
MD5sum: 89efb281a3695d8c0f0d3c153ff8041a
SHA1: fdd93b68ec0b8e6de0b67b3437b9f8c86c04b449
SHA256: 7eb5b023373db00ca1b65765720a99654a0b63be741a5f5db2516a8881048aa6
Description: ALSA library additional plugins
This package contains plugins for the ALSA library that are
not included in the main libasound2 package.
.
The following plugins are included, among other:
- a52: S16 to A52 stream converter
- jack: play or capture via JACK
- oss: run native ALSA apps on OSS drivers
- pulse: play or capture via Pulse Audio
- lavcrate, samplerate and speexrate: rate converters
- upmix and vdownmix: convert from/to 2 and 4/6 channel streams
.
ALSA is the Advanced Linux Sound Architecture.
Enhances: libasound2
Homepage: http://www.alsa-project.org/
Tag: devel::library, role::plugin, works-with::audio


Now something jumped out at me: run native ALSA apps on OSS drivers?
If you read my sound article, you know I'm an advocate of OSSv4, since it seems superior where it matters.

So I looked into the documentation for the Debian (as well as Ubuntu) package "libasound2-plugins" on how this ALSA over OSS works exactly.

I edited /etc/asound.conf, and changed it to the following:

pcm.!default {
    type oss
    device /dev/dsp
}

ctl.!default {
    type oss
    device /dev/mixer
}

And presto, every ALSA application now started properly outputting sound for me. No more needing to constantly fiddle with the configuration of each sound layer to get it to use OSS, since the distros don't offer any automatic way to configure that.

I could never get flash on 64 bit with sound before, even though each new OSS release says they "fixed it". Now it does work for me.

I tested the following with ALSA:
MPlayer (-ao alsa)
Firefox, flashplugin-nonfree, Homestar Runner
ZSNES (-ad alsa)
bsnes (defaults)

Oh and in case you're wondering, mixing is working perfectly. I tried running four instances of MPlayer, two set to use ALSA, the other two set to output using OSS, and I was able to hear all four at once.
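
If you want to reproduce that test, something along these lines should do it (the file names are just whatever audio you have lying around):

mplayer -ao alsa first.ogg &
mplayer -ao alsa second.ogg &
mplayer -ao oss third.ogg &
mplayer -ao oss fourth.ogg &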

Now it's still nice to set up each application and sound layer individually to use OSS, so there's less overhead. But just making this one simple change means you don't have to do that for each application where the distro defaulted to ALSA, or suffer incompatibility when a particular application is ALSA only.


Note that depending on how you installed OSS and which version, it may have tried forcing ALSA programs to use a buggy ALSA emulation library, which is incomplete, and not bug for bug compatible with the real ALSA. If that happened to you, here's how to use the real ALSA libraries, which are 100% ALSA compatible, since they are 100% the real ALSA.

First check where everything is pointing with the following command: ls -la /usr/lib/libasound.*
I get the following:

-rw-r--r-- 1 root root 1858002 2009-03-04 11:09 /usr/lib/libasound.a
-rw-r--r-- 1 root root 840 2009-03-04 11:09 /usr/lib/libasound.la
lrwxrwxrwx 1 root root 18 2009-03-06 03:35 /usr/lib/libasound.so -> libasound.so.2.0.0
lrwxrwxrwx 1 root root 18 2009-03-06 03:35 /usr/lib/libasound.so.2 -> libasound.so.2.0.0
-rw-r--r-- 1 root root 935272 2009-03-04 11:09 /usr/lib/libasound.so.2.0.0

Now as you can see libasound.so and libasound.so.2 both point to libasound.so.2.0.0. The bad emulation is called libsalsa. So if instead of seeing "-> libasound..." you see "-> libsalsa..." there, you'll want to correct the links.

You can correct with the following commands as root:

cd /usr/lib/
rm libasound.so libasound.so.2
ln -s libasound.so.2.0.0 libasound.so
ln -s libasound.so.2.0.0 libasound.so.2

If you're using Ubuntu and don't know how to switch to root, try sudo su prior to the steps above.

If you'd like to try to configure as many applications as possible to use OSS directly to avoid any unneeded overhead, see the documentation here and here which provide a lot of useful information. However if you're happy with your current setup, the hassle to configure each additional application isn't needed as long as you setup ALSA to use OSS.

Enjoy your sound!

Tuesday, May 12, 2009


Will Linux ever be mainstream?



Different sites and communities constantly discuss the possibility of Linux becoming mainstream and when that will finally take place. Often reasons are laid out where Linux is lacking. Most of those reasons don't seem to be in touch with reality. This will be an attempt to go over some of them, cut the fluff from the fact, and perhaps touch on a few areas that have not been gone over yet.

One could argue with today's routers that Linux is already mainstream, but let us focus more on full blown computer Linux, which runs on servers, workstations, and home computers.

When it comes to servers, the question really is who isn't running Linux? Practically every medium sized or larger company runs Linux on a couple of their servers. What makes Linux so compelling that many companies have at least one if not many Linux servers?

Servers are a very different breed of computer than the workstation or home computer. "Desktop Linux" as it's known is the type of OS for average everyday Joe. Joe is the kind of guy who wants to sit down and do a few specific tasks. He expects those tasks to be easy to do, and be mostly the same on every computer. He doesn't expect anything about the 'tasks' to scare him. He accepts the program may crash or go haywire in the middle, at which time it's just new cup of coffee time. Except Desktop Linux isn't for every day Joe ... yet.

Servers on the other hand are designed primarily for functionality. They have to have maximum up time. It doesn't matter if the server is hard to understand, and work with, and only two guys in the whole office can make heads or tails out of it. It's okay that the company needs to hire two guys with PhDs, who are complete recluses, and never attend a single office party.

Windows servers are primarily used by those that need special Windows functionality at the office, such as ActiveDirectory, or Exchange so everyone has integrated Outlook. Some even use Windows as HTTP Servers and the like. Windows is known less for just working and more for being great at those specialized tasks, or for servers which don't need those two PhD recluses to manage them. Even guys who have never written a piece of code in their entire life can manage a Windows server - usually. Microsoft always tries to press this latter point home with all their "Get the Facts" campaigns.

The real fact is though that companies on their servers need functionality, reliability, and countability. While larger companies would prefer to replace every man with a machine which is guaranteed to last forever and not require a single ounce of maintenance, they would rather rely on personnel than hardware. Sure, when I'm a really small business, I'd rather have a server I can manage myself and have a clue what I'm doing, but if I had the money, I'd rather have expert geeky Greg who I can count on to keep our hardware setup afloat. Even when geeky Greg is a bit more expensive than laid-back Larry, I'm happier knowing that I have the best people on the job.

Windows servers, while great in their niches, are also a pain in the neck in more generalized applications. We have a Windows HTTP/FTP server at work. One day it downloaded security patches from Microsoft, and suddenly HTTP/FTP stopped working entirely. Our expert laid-back Larry spent a few hours looking at the machine trying to find out what changed, and mostly had to resort to using Google as opposed to any knowledge of how Windows works. Finally he saw on some site that Microsoft had changed some firewall settings to be extra restrictive, and managed to fix the problem.

Another time, part of the server got hacked into, and we had to reinstall most of it. For some reason, a subsection of our site just refused to work, apparently a permission problem somewhere. On Linux/Apache, permission problems are either a setting in Apache or on the file system, easy to find. Windows on the other hand, with its oh-so-much-better fine grained permission support, seems to have dozens if not hundreds of places to look for security settings. This took our Larry close to two weeks to fix.

Yet another time, a server application which we wrote in-house ran flawlessly on both Linux and Windows XP. However, when we installed it on our Windows Server 2003 server, it inexplicably didn't work. It's no wonder companies use Linux servers for many server tasks. There's also a decent amount of server applications a company can purchase from Red Hat, IBM, Oracle, and a couple of other companies. Linux on the server clearly rocks, even various statistical sites agree.

Now let us move on to the workstation and home computer segment, where we'll see a very different picture.

On the workstation, two features are key, manageability, and usability. Companies like to make sure that they can install new programs across the board, that they can easily update across the board, and change settings on every single machine in the office from one location. Granted on Linux one can log in as root to any machine and do what they want, but how many applications are there that allow me to automate management remotely? For example, apt-get (and its derivatives) are known as one of the best package managers for Desktop Linux, yet they don't have any way to send a call to update to every single machine on a network. Sure using NFS I can have an ActiveDirectory like setup where any user can log into any machine and get their settings and files, but how exactly do I push changes to the software on the machines themselves? Every place I asked this question to seems to have their own customized setup.

One place SSHs into every single machine individually, and then pastes some huge command into the terminal. Another upgrades one machine, mirrors the hard drive, then goes to each machine in turn and re-images the installed hard disk. One place which employs a decent number of programmers wrote a series of scripts which every night download a file from a server and execute it. Another, also with an excellent programming task force, wrote their own SSH based application which logs into every machine on the network and runs whichever commands the admin puts in on all of them, allowing real time pushing of updates to all the machines at once.
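
To give an idea of how ad hoc these solutions are, the SSH approach usually boils down to little more than a loop like the following (the host list file and the commands are whatever that particular office happens to need):

for host in $(cat office-machines.txt); do
    ssh root@"$host" 'apt-get update && apt-get -y dist-upgrade'
done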

Is it any wonder that a large company is scared to have Linux on all their machines, or that it really is expensive to maintain? We keep touting/hearing how amazing X is because of its client/server setup, or these days in regards to PulseAudio; let us start hearing it for massive remote management. And remember not to limit this just to installing packages; we need to be able to change system files remotely and simultaneously, with a method which becomes standard.

The other aspect is of course usability, and by usability I mean being able to use the kind of software the company needs. Now for some companies, documents, spreadsheets, and web browsers are the extent of the applications they need, and for that we're already there. Unless of course they also need 100% compatibility with the office suites used by other companies.

What about specialized niches though? That's where real companies have their major work done. These companies are using software to manage medical history, other clientèle meta-data, stocks (both monetary and in-store), and multitudes of other specialized fields. All these applications more or less connect to some server somewhere and do database manipulation. We're really talking about webapps in desktop form. Why is every last one of these 3rd party applications only written for Windows?

The reasons are probably threefold. If these applications ran in any standard browser, we'd really be exposing more functionality to the user than should be exposed. Do you want the user to hit stop or the close button in the corner of their browser in the middle of a transaction? Sure, the database should be robust and atomic enough to handle these kinds of situations, but do we want to spoon-feed these situations to the users? We also certainly don't want general system upgrades which install a newer version of the browser to break one of the key applications being used by the company. To solve this problem requires a custom browser, bringing us back to square one when it comes to making this a desktop application.

The next reason is known as catch-22. Why should a generic company making an application bother with anything other than the most popular OS by a landslide? We need way more Desktop Linux users for a company to bother, but if the companies don't bother, it's unlikely that Desktop users will switch to Linux. Also, as I've said before, portability isn't difficult in most cases, but most won't bother unless we enlighten them.

Lastly, many of these applications are old, or at least most of their code base is. There's just no incentive to rewrite them. And when one of these applications is made in-house, it'll be made for what the rest of the company is already running.

To get Linux onto the workstation then, we need the following to take place:
  • Creation of standardized massive management tools
  • Near perfect interoperability of office suites
  • Get ports of Linux office suites to be mainstream on Windows too
  • Get work oriented applications on Windows to be written portably
  • Make Linux more popular on the Desktop in all aspects
We have to stop being scared of Open Source on closed source Operating Systems. If half the offices out there used OpenOffice on Windows, they wouldn't mind running OpenOffice on Linux, and they wouldn't have any interoperability issues that they don't already have.

We also need to make portability excellence more the norm. These companies could benefit a lot from using Qt for example. Qt has great SQL support. Qt contains a web browser so webapps can be made without providing anything unnecessary in the interface. Qt also has great easy to use client/server support, with SSL to boot. Also, Qt applications are probably the easiest type to make multilingual, and the language can be changed on the fly, which is very important for apps used world wide, or for companies looking to save money by hiring immigrants. Lastly, Qt is easier to use than the Win32 API for these relatively basic applications. If they used 100% Qt, the majority of the time, the program would work on Linux with just a simple recompile.

For the above to happen we really need a major Qt push in the developer community. The fight between GTK, wxWidgets, and Qt is going to be hurting us here. Sure, initially Qt was a lot more closed, and we needed GTK to push Qt in the right direction. But today, Qt is LGPL, offers support/maintenance contracts, and is a good 5-10 years ahead of GTK in breadth of features supplied. Even if you like GTK better for whatever reason, it really can't stand up objectively to Qt from the big business perspective. We need developers to get behind our best development libraries. We also need to get schools to teach the libraries we use as part of the mainstream curriculum. Fracturing the community on this point is only going to hurt us in the long run.

Lastly, we come to Linux on the home computer. What do we need on a home computer exactly? They're used for personal finances, homework, surfing the web, multimedia, creativity, and most importantly, gaming.

Are the finance applications available for Linux good enough? I really have no idea, perhaps someone can enlighten me in the comments. We'll get back to this point shortly.

For homework, I'd say Linux is there already. We have Google and Wikipedia available via the world wide web. Dictionaries and thesauruses are available too. We've got calculators and documents; nothing is really missing.

For surfing the web we're definitely there, no questions asked.

Multimedia we're also there aside from a few annoyances. I'll discuss this more below.

For creativity, I'm not sure where we are. Several years back, it seems all the kids used to love making greeting cards, posters, and the like using programs such as The Print Shop Deluxe or Print Artist. Do we have any decent equivalents on Linux?

Thing is, a company would have to be completely insane to port popular home publishing software to Linux. First there's all the reasons mentioned above regarding catch-22 and the like. Then there's nutjobs like Richard Stallman out there who will crucify the company attempting to port their software to Linux. For starters, see this article which says:
Some of the most important projects on our list are replacement projects. These projects are important because they address areas where users are continually being seduced into using non-free software by the lack of an adequate free replacement.


Notice how they're trying to crush Skype for example. Basically any time a company will port their application to Linux, and it becomes popular enough on Desktop Linux, you'll have these nutjobs calling for the destruction of said program by completely reimplementing it and giving it away for free. And reimplement it they do, even if not as effectively, but adequate enough to dissuade anyone from ever buying the application. Then the free application gets ported to Windows too, effectively destroying the company's business model and generally the company itself. Don't believe they'll take it that far? Look how far they went to stop Qt/KDE. Remember all those old office suites and related applications available for Linux a decade ago? How many of them are still around or in business? When free versions of voice chatting are available on all platforms, and can even interface with standard telephones, do you think Skype will still be around?

Basically, trying to port a popular application to Linux is a great way to get yourself a death sentence. If for example Adobe ever ported Photoshop to Linux, there'd be such a massive upsurge in getting the GIMP or a clone to have a sane interface, and get in some of those last features, Photoshop would probably be dead in a year.

And unless some of these applications are ported to Linux, we'll probably never see niche applications as good as their Windows counterparts. Many programmers just don't care enough to develop these to the extent needed, and some only do so when they feel it's part of a holy war. Thus giving us a whole new dimension to the catch-22.

Finally, we come to gaming. Is Linux good enough for companies to develop for? At first glance, you'd think a resounding yes. A deeper look reveals otherwise. First off, there's good ol' video. For the big games today, it's all about graphics. How many video cards provide full modern OpenGL support on Linux? The problem is basically as follows. X Windows is a system designed way back when with all sorts of cool ideas in mind, but its current driver API is simply not enough to take full advantage of accelerated OpenGL. You can easily search online and find tons of results on why X is really bad, but it really stands out when it comes to video.

NVidia has for several years now put out "evil drivers" which get the job done, and provide fantastic OpenGL support on top of Linux. The drivers though are viewed as evil, since they bypass the bottom 1/3 of X and talk straight to the Kernel, and don't fully follow the X driver API. And of course, they're also closed source. All the other drivers today for the most part communicate with the system via the X API, especially the open sourced ones. Yet they'll never measure up, because X prevents them from measuring up. But they'll continue to stick to what little X does provide. NVidia keeps citing that they can't open source their drivers because they'll lose their competitive advantage. Many have questioned this, as for the most part the basic principles are the same on all cards, so what is so secret in their drivers? In reality, if they open sourced their drivers, the core functionality would probably be merged into X as a new driver API, allowing ATI and Intel to compete on equal footing, losing them their competitive advantage. It's not the card per se they're trying to hide, but the actual driver API that would allow all cards to take full advantage of themselves, bypassing any stupidity in X. At the very least, ATI or Intel could grab a lot of that code and make it easier for themselves to write a driver that bypasses X yet still works well with it.

When it comes down to it, as tiny as the market share is that Linux already has, it becomes even smaller if you want to release an application that needs good video support. On the other hand, those same video cards work just fine in Windows.

Next comes sound, which I have discussed before. The main sound issue for games is latency, and ALSA (the default in Linux) is really bad in that regard. This gets compounded when sound has to run through a sound server on its way to the drivers that talk to the sound card. For playing music, ALSA seems just fine to everybody, you don't notice or care that the sound starts or stops a moment or two after you press the button. For videos as well, it's generally a non-issue. In most video formats, the video takes longer to decode than it does to process sound, so they're ready at the same time. It also doesn't have to be synced for input. So everything seems fine. In the worst case scenario, you just tell your video player to alter the video/audio sync slightly, and everything is great.

When it comes to games, it's an entirely different ballpark. For the game not to appear laggy, the video has to be synced to the input. You want the gun to fire immediately after the user presses the button, without a lag. Once the bullet hits the enemy and the user sees the enemy explode, you want them to hear that enemy explode. The audio has to be synched to the video. Players will not accept having the sound a second or two late. Now means now. There's no room for all the extra overhead that is currently required.

I find it mind boggling that Ubuntu, a distribution designed for average Joe, decided to make the entire system routed through PulseAudio, and see it as a good thing. The main advantage of PulseAudio is that it has a client/server architecture so that sound generated on one machine can be output on another. How many home users know of this feature, let alone have a reason to use it? The whole system makes sound lag like crazy.

I once wrote a game with a few other developers which uses SDL or libao to output sound. Users way back when used to enjoy it. Nowadays with ALSA, and especially with PulseAudio which SDL and libao default to outputting to in Ubuntu, users keep complaining that the sound lags two or more seconds behind the video. It's amazing this somehow became the default system setup.

Next is input. This one is easy right? Linux surely supports input. Now let me ask you this, how many KDE or GNOME games have you seen that allow you to control them via a Joystick/Gamepad? The answer is quite simply, none of them do. Neither Qt nor GTK provide any input support other than keyboard or mouse. That's right, our premier application framework libraries don't even support one of the most popular inventions of the 80s and 90s for PC gamers.

Basically, here you'll be making a game and using your library to handle both keyboard and mouse support, but when you want to add joystick support, you'll have to switch to a different library, and possibly somehow merge a completely different event loop into the main one your program uses for everything else. Isn't it so much easier on Windows, where they provide a unified input API which is part of the rest of the API you're already using?

Modern games tend to include a lot of sound, and more often than not, video as well. It'd be nice to be able to use standard formats for these, right? The various APIs out there, especially Phonon (part of Qt/KDE), are great at playing sound or video for you. But which formats should you be putting your media in? Which formats are you ensured will be available on the system you're deploying on? Basically all these libraries have multiple backends whose support can differ drastically, and the most popular formats, such as those based on MPEG standards, don't come standard on most Linux distributions, thanks to them being "non free". Next you'll think: fine, let us just ship the game with uncompressed media. This actually works fine for audio, but is a mess when it comes to video. Try making a pure uncompressed AVI and running it in Xine, MPlayer, and anything else that can be used as a Phonon back end. No two video players can agree on what the uncompressed AVI format is. Some display the picture upside down, some have different visions of which byte signifies red, blue, and green, and so on.

For all of these reasons, the game market, currently the largest in home software, has difficulty designing and properly deploying games on Linux. The only companies which have managed to do it in the past are those that made major games for DOS back in the day, when there also were no good APIs or solutions for doing anything.

Now that we wrapped it all up from the actual applications side of things, let us have a look at actual usability for the home user.

We're taken back to average Joe, who wants to set up his machine. He's not sure what to do. But he hears there are great Ubuntu forums where he can ask for help. He goes and asks, and gets a response similar to the following:

Open a terminal, then type:
sudo /etc/init.d/d restart
ln -s /alt/override /bin/appstart
cd /etc/app
sudo nano b.conf

Preload=yes
ctrl+x
yes

Does anyone realize how intimidating this is? Even if average Joe was really Windows Power User Joe, does he really feel safe entering commands with which he is unfamiliar?

In the Windows world, we'd tell such a user to open up Windows Explorer, navigate to certain directories, copy files, edit files with notepad and the like. Is it really so hard to tell a user to open up Nautilus or Dolphin or whatever their file manager is, and navigate to a certain location and edit a file with gedit/kwrite?

Sure, it is faster to paste a few quick commands into the terminal, but we're turning away potential users. The home user should never be told he has to open a terminal. In 98% of the cases he really doesn't and what he wants/needs can be done via the GUI. Let us start helping these users appropriately.

Next is the myth about compiling. I saw an article written recently that Linux sucks because users have to compile their own software. I haven't compiled any software on Linux in years, except for those applications that I work on myself. Who in the world is still perpetuating this myth?

It's actually sad to see some distributions out there that force users to recompile stuff. No, I'm not talking about Gentoo, but Red Hat actually. We have a server running Red Hat at work, and we needed mod_rewrite added to it the other day. Guess what? We had to recompile the server to add that module. On Debian based distros one just runs "a2enmod rewrite", and presto, the module is enabled. Why the heck are distros forcing these archaic design principles on us?

Then there's just the overall confusion, which many others point out. Do I use KDE or GNOME? Pidgin or Kopete? Firefox or Konqueror? X-Chat or Konversation? VLC or MPlayer? RPM or DEB? The question is, is this a problem? So what if we have a lot of choices.

The issue arises when the choice breaks down the field. When deploying applications this can get especially nightmarish. We need to really focus more on providing just the best solution, and improving the best solution if it's lacking in an area, as opposed to having multiple versions of everything. OSS vs. ALSA, RPM vs. DEB, and a bunch of others which are base to the system shouldn't really be around these days.

The other end of the spectrum is less important to providing a coherent system for deploying on. But it does confuse some users. When I want to help someone, do I just assume they use Krusader as a file manager? Should I try to be generic about file managers? Should I have them install Krusader so I can help them? This theme is played over in many variations on most Linux help forums.
"Oh yes, go open that in gedit."
"Gedit? What's Gedit?"
"Are you running KDE or GNOME?"
"GNOME"
"Are you sure?"
"Wait, are GNOME and XFCE the same thing?"

What's really bad though is when users understand there are multiple applications, but can't find one to satisfy them. It's easy when the choices are simple or advanced; you choose the one more suited to your tastes for that kind of application. But it gets really annoying when one of those apps tries to be like the other. Do we need two apps that behave exactly the same but are different? If you started different, and you each have your own communities, then stay different. We don't need variety when there is no real difference. KDE 4 trying to be more like GNOME is just retarded. Trying to steal GNOME's user base by making a design which appeals more to GNOME users but has a couple of flashy features isn't a way to grow your user base, it's just a way to swap one for another.

Nintendo in the past couple of years was faced with losing much of their user base to Sony. For example, back in the late 90s, all the cool RPGs for which Nintendo was known had their sequels moved to Sony hardware. However, instead of trying to win back old gamers, they took an entirely different approach. Nintendo realized the largest market of gamers weren't those on the other systems, but those that weren't on any system. The largest market available for targeting is generally those users not yet in the market, unless the market in question is already ubiquitous.

That said, redesigning an existing program to target those who are currently non-users can sometimes alienate loyal users, depending on what sort of changes are necessary, so unless pulling in the non-users is guaranteed, one should be careful with this strategy. A project whose user base doesn't pay is more likely to have success with such a drastic change, as it isn't financially dependent on its users either way. Balance is required, so that many new users are acquired while a minimal number of existing users are alienated.

To get Linux on home computers the following needs to take place:
  • We need to stop fighting every company that ports a decent product to Linux
  • We should write good programs even if there is nothing else to compete with on Linux
  • We shouldn't leave programs as adequate
  • We need a real solution to the X fiasco
  • We need a real solution to the sound mixing/latency fiasco, and clunky APIs and more sound servers isn't it
  • We need to offer tools to the gaming market and try to capture it
  • Support has to be geared towards the users, not the developers
  • Stop the myths, and prevent new users installing distros that perpetuate them
  • Stop competition between almost identical programs
  • Let programs that are similar but very different go their own ways
  • Bring in users that don't use anything else
  • Keep as many old users as possible and not alienate them
Linux being so versatile is great, and hopefully it will break into new markets. As I said before, many routers use Linux. To be popular on the desktop though, it either has to reach users that currently don't have a desktop at all, or manage to steal users from other desktops while keeping the ones it has. Becoming Windows isn't the answer, as others like to point out, but competing toe to toe in all existing areas while obtaining new ones is. In some areas, we may just have to throw away existing users to an extent (possibly eliminate X), if we want to grab everyone else out there.

Speaking of versatility, has everyone seen this? Linux-grown technology does, in many ways, have the potential to beat Windows and the rest in a big way.

Does everyone remember Duke Nukem Forever? Supposedly the best game ever, because of its unprecedented level of interactivity with the game world, such as being able to walk up to a soda machine, put in some money, press the buttons, and buy a drink. With Qt, we could provide a game with panels throughout it where the player can control many things in the game world, with a rich set of controls that developers can easily add. Imagine playing a game where you go check out the enemy's computer system; the system seems perfectly normal, and you can bring up videos of the plans they're working on. Or you notice a desktop with a web browser, where you can actually log in and check your e-mail within the game itself, making the experience more real. Or the real clincher: you know those games where, plot-wise, you break into the enemy factory and reprogram the robots or missiles or whatever? With Qt, the "code" you're reprogramming could be actual JavaScript used in the game. If made simple enough, it could feel realistic and give a lot of flexibility to players who want to tamper with the enemy's designs. We have the potential to provide unprecedented levels of gameplay. If only we could get companies to put out games using Qt+OpenGL+Phonon, which they probably won't even consider until Qt has joystick support. Even then, we still need to promote Qt more, which will make it easier to get companies to port games to Linux...
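To make that last idea concrete, here's a minimal sketch of the sort of thing I mean. It assumes Qt's QtScript module (QScriptEngine, available since Qt 4.3), and the setTarget function is a hypothetical stand-in for whatever object the game would actually expose to the player's script:

    // Minimal sketch (requires "QT += script" in the .pro file).
    // "setTarget" is a made-up game API; in a real game it would poke at the
    // robot, missile, or whatever object the player is "reprogramming".
    #include <QCoreApplication>
    #include <QScriptEngine>
    #include <QScriptContext>
    #include <QScriptValue>
    #include <QDebug>

    static QScriptValue setTarget(QScriptContext *context, QScriptEngine *engine)
    {
        qDebug() << "Robot now targets:" << context->argument(0).toString();
        return engine->undefinedValue();
    }

    int main(int argc, char *argv[])
    {
        QCoreApplication app(argc, argv);

        QScriptEngine engine;
        // Expose the native function to the player's JavaScript as a global.
        engine.globalObject().setProperty("setTarget",
                                          engine.newFunction(setTarget));

        // In a real game, this string would come from an in-game terminal panel
        // (e.g. a QPlainTextEdit embedded in the level's UI).
        QString playerScript = "setTarget('the enemy barracks');";
        engine.evaluate(playerScript);

        return 0;
    }

Everything the player types is plain JavaScript evaluated by the engine, so the "reprogramming" puzzle can be as shallow or as deep as the designer wants.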

I think Ubuntu has some good ideas for winning over home users, but it could be improved in many ways. Ultimately, we need a lot of change in how we market to home users. There's so much that needs to be fixed, and in some areas we're not even close.

Feel free to post your comments, arguments, and disagreements.

Tuesday, January 29, 2008


Mozilla is the new Microsoft



So who here loves Mozilla and their browsers such as Firefox?

I do too, but am I the only one who finds Mozilla turning more into Microsoft every day?

To start off, let's point fingers at the memory issue. Notice how Windows' memory requirements have gone through the roof? Yet comparable desktop environments and windowing systems from other sources seem just as advanced as Microsoft's latest offering and use only a fraction of the RAM. I'm told KDE 4 runs comfortably with only 512 MB of RAM, and you can even get away with 256 MB, while a friend of mine running Vista started with 1.5 GB and said he had to add another 512 MB so the system didn't get all sluggish on him. Does Vista offer anything special that KDE 4 doesn't?

Compare that to Firefox these days. I open up Firefox with two tabs, one on a pretty ordinary text page with a single image on it, the other on a page with some tables and a form, and it uses over 400 MB of RAM. When you mention this to Mozilla, they of course tell you it's a feature, not a bug (sound familiar?). What feature exactly needs all that RAM? I'm told some nonsense about how it caches the pages you previously visited, so they load quickly when you press back. Considering I just opened the browser, which pages is it caching exactly?

To further compound my bafflement, I even disabled that feature in about:config, and yet Firefox still eats a ton of RAM when I open it. I used to run Netscape Navigator on a machine with 16 MB of RAM, with two whole browser windows open, and it didn't eat a fraction of the RAM Firefox does now (and this was before Windows had virtual memory). Heck, even other web browsers such as Konqueror and Opera don't use that much.
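For what it's worth, the back/forward page caching they describe appears to be governed by the browser.sessionhistory.max_total_viewers preference. Assuming that's the feature in question, it can be capped to zero cached pages from a user.js file like so:

    // Limit the back/forward page cache ("bfcache") to zero cached pages.
    // (Same effect as setting the preference through about:config.)
    user_pref("browser.sessionhistory.max_total_viewers", 0);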

Well, now you'll tell me: at least Firefox started the whole standards revolution, right? At least it's better than Internet Explorer, right?

While it's true that it's better than IE, I wouldn't even consider IE a browser, since it can't get basic standards right. If you had a program that edited text files, and when it opened a file the paragraphs came out in the wrong order, with some missing entirely, would you still call it a text editor? Yet I'm starting to see Gecko (the engine powering Firefox and SeaMonkey) go down the same road. Try setting a background on a radio button with CSS in Firefox: it completely ignores it. Even IE, the non-browser, can do CSS backgrounds for radio buttons (although it only changes the background color outside the radio button itself, within its rendering square). A small test page demonstrating both this and the next complaint is sketched below.

This gets worse when you try to couple some JavaScript with your pages in Firefox. The onchange event for select boxes, when navigating with the keyboard, only fires in Firefox once the user presses Enter; every other browser fires it as soon as the visible selection changes, such as when the user presses up or down. Now, perhaps you'll tell me that's undefined behavior, so Firefox is free to implement it differently from everyone else (sound like Microsoft at all?), but then apply this to how it handles dynamic text. Say you take the example above and convert the select box's onchange to an onkeydown, so it fires on every up/down press without the user needing to hit Enter. Firefox fails completely here too, because it doesn't repaint text modified by JavaScript until the page is clicked or another key is pressed. So if your onkeydown handler changes some text on the page, say to echo the select box's current value, you won't see the change until you press another key, and then the page shows the previous value from the select box instead of the current one. Sounds like a real stellar browser, doesn't it?
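If you want to reproduce these two complaints yourself, here's a rough, self-contained test page along the lines of what I described above (none of it is taken from any real site). Load it in a couple of browsers and watch when the radio button's background shows up and when the echoed value actually updates:

    <!DOCTYPE html>
    <html>
    <head>
      <style>
        /* Complaint #1: a CSS background on a radio button. */
        input[type="radio"] { background-color: yellow; }
      </style>
    </head>
    <body>
      <p><input type="radio" name="r" checked> Does this radio button get a yellow background?</p>

      <!-- Complaint #2: echo the select box's value via both onchange and
           onkeydown, as described above. (onkeydown runs before the selection
           moves in most browsers, so the echoed value may lag by one entry;
           it still shows whether the page repaints at all.) -->
      <select id="choices" size="3" onchange="echoChoice()" onkeydown="echoChoice()">
        <option>alpha</option>
        <option>beta</option>
        <option>gamma</option>
      </select>
      <p>Current choice: <span id="echo">(none)</span></p>

      <script>
        function echoChoice() {
          var box = document.getElementById("choices");
          document.getElementById("echo").innerHTML =
              box.options[box.selectedIndex].text;
        }
      </script>
    </body>
    </html>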

Then there's the issue of how Mozilla is trying to make their browser the only platform out there. Why do most people today still use Windows, even when there's a better OS for the average person? It all boils down to the applications. Anyone asked to switch wants to know: can I play my favorite games on this other OS? Does my digital camera's photo-sorting software work on it? What about all the other applications I use on a regular basis? As Microsoft's chief primate keeps yelling, it's all about the developers. People will be stuck using whatever runs their favorite apps.

Now, of course, we know the internet is in many ways becoming the new platform, which is why Microsoft has tried to crush or control it all these years. So as long as your browser is standards compliant and people develop to the standards, we should be okay, no?

Unfortunately, it's not like that. GMail doesn't work in Konqueror unless Konqueror changes its user agent to report itself as Gecko, meaning many sites lock out browsers simply because they weren't tested with them, not because anything actually fails to work. A similar problem exists with IIS serving gibberish to browsers that report themselves as Opera. And Microsoft wants nothing more than to keep it that way.

Recently Microsoft announced that the upcoming IE 8 will only go into standards-compliant mode when websites include an IE8-specific tag in their code. Most people discussing this feature have totally missed the main problem. It's not about having to add an extra tag; it's that the default mode for any whacko out there writing a webpage is to render in IE 6/IE 7 super-bug-filled, non-standards-compliant mode, so webpages will continue not to become standardized. Of course Microsoft realizes this; they'll do everything they can to make sure the large majority of pages churned out by the clueless masses only work on their platform.
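For context, the opt-in Microsoft described is a meta tag along these lines, if I'm reading their announcement correctly:

    <!-- Opt a page into IE8's standards mode; without it, IE8 is to fall back
         to IE7-style rendering by default. -->
    <meta http-equiv="X-UA-Compatible" content="IE=8" />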

Now, I'm not saying Mozilla is making the web buggy the way Microsoft is trying to (though I bet they'd like to if they could get away with it); rather, they're trying to get people dependent on their particular web browser. How so? It's all about the extensions!
Mozilla invented its own extension language and format (XUL and friends), and ranted and raved about how everyone should be writing extensions for its browser, as though it were the de facto standard of the free world.

How many of you using Firefox feel you need all the extras people write for it? How often do you sit down at someone else's Firefox and feel naked without the features you've tailored your own browser to at home or at work?

Now, I could go on and on about all kinds of extensions, but everyone has their own tastes, so it would be pointless; I'll just point out a couple of examples.

Some of the extensions I use regularly in Firefox, such as automatic translation of pages in languages I don't read or speak, or user-agent spoofing, are built right into Konqueror. Many others aren't available anywhere else, or exist only for the particular browser they were written for. I'm talking about the GMail/Yahoo/whatever-else e-mail notifiers that tell you when you've received new mail, or ForecastFox, which gives me mostly accurate weather for the week for just about anywhere in the world.

Now, the extensions I just mentioned are perhaps not anything to get addicted to, but what about ad blocking? Of course, other browsers also have ad blocking available in one form or another, but how many of them let you install something like the "Filterset.G Updater", so your blacklists are automatically updated by others to hide all the new kinds of ads that come out, such as the "Download Firefox" ad that appears on the right of this page?

Not only that, we can hardly forget the plethora of developer-related tools one can get for Firefox: the JavaScript Console, the DOM Inspector, extensions such as the Web Developer Toolbar, and who knows what else.

Am I the only one who now feels locked into Firefox, unable to switch because these browser-specific add-ons don't run anywhere else, even while realizing what a wreck Firefox is, especially on machines with little RAM available?

Furthermore, you know how Microsoft always forces their desires upon you? Take this picture of Bill Gates, for example:


Notice how his own personal desktop uses the default Windows XP background? Surely a man as technical as Bill Gates could figure out how to change it to something he enjoys, just like everyone else has.
Or perhaps Bill Gates never changed it, because the default Windows XP background was simply Bill Gates' personal background at the time? Of course, leave it to a guy like him to force his personal desires on everyone else.

Now let's see how Mozilla does the exact same thing. Ever wonder where Firefox's logo came from? Who would think an image of a fox conquering the world makes a good logo for a browser?

Check out a picture of Mozilla's former CEO, Mitchell Baker:


Now, what do you think came first: the Firefox logo, or her ridiculous haircut?
Still don't get what I mean? See this:


If you think you can point out how Mozilla isn't the Microsoft of the internet platform, please let me know; I'd love to hear how you came to your false conclusions.