Tuesday, May 12, 2009


Will Linux ever be mainstream?



Various sites and communities constantly discuss the possibility of Linux becoming mainstream and when that mainstreaming will take place. Reasons are often laid out for where Linux is lacking, and most of them don't seem to be in touch with reality. This will be an attempt to go over some of those reasons, separate the fluff from the fact, and perhaps touch on a few areas that have not been covered yet.

One could argue that with today's routers Linux is already mainstream, but let us focus on full-blown computer Linux, which runs on servers, workstations, and home computers.

When it comes to servers, the question really is: who isn't running Linux? Practically every medium-sized or larger company runs Linux on at least a couple of its servers. What makes Linux so compelling that so many companies have at least one, if not many, Linux servers?

Servers are a very different breed of computer from the workstation or home computer. "Desktop Linux," as it's known, is the type of OS for the average everyday Joe. Joe is the kind of guy who wants to sit down and do a few specific tasks. He expects those tasks to be easy to do, and to be mostly the same on every computer. He doesn't expect anything about the tasks to scare him. He accepts that a program may crash or go haywire in the middle, at which point it's simply time for a new cup of coffee. Except Desktop Linux isn't for everyday Joe ... yet.

Servers, on the other hand, are designed primarily for functionality. They have to have maximum uptime. It doesn't matter if the server is hard to understand and work with, and only two guys in the whole office can make heads or tails of it. It's okay that the company needs to hire two guys with PhDs who are complete recluses and never attend a single office party.

Windows servers are primarily used by those that need special Windows functionality at the office, such as Active Directory, or Exchange so everyone has integrated Outlook. Some even use Windows for HTTP servers and the like. Windows is known less for simply working than for being great at those specialized tasks, or for servers which don't need those two PhD recluses to manage. Even guys who have never written a piece of code in their entire lives can manage a Windows server - usually. Microsoft always tries to press this latter point home with all their "Get the Facts" campaigns.

The real fact, though, is that companies need functionality, reliability, and accountability from their servers. While larger companies might prefer to replace every man with a machine guaranteed to last forever without a single ounce of maintenance, in practice they would rather rely on personnel than on hardware. Sure, if I ran a really small business, I'd rather have a server I could manage myself and have a clue what I was doing, but if I had the money, I'd rather have expert geeky Greg whom I can count on to keep our hardware setup afloat. Even when geeky Greg is a bit more expensive than laid-back Larry, I'm happier knowing that I have the best people on the job.

Windows servers, while great in their niches, are also a pain in the neck in more generalized applications. We have a Windows HTTP/FTP server at work. One day it downloaded security patches from Microsoft, and suddenly HTTP and FTP stopped working entirely. Our expert laid-back Larry spent a few hours looking at the machine trying to find out what had changed, and mostly had to resort to Google rather than any knowledge of how Windows works. Finally he saw on some site that Microsoft had changed some firewall settings to be extra restrictive, and managed to fix the problem.

Another time, part of the server got hacked into, and we had to reinstall most of it. For some reason, a subsection of our site just refused to work, apparently due to a permission problem somewhere. On Linux/Apache, permission problems come down to either a setting in Apache or one on the file system - easy to find. Windows, on the other hand, with its oh-so-much-better fine-grained permission support, seems to have dozens if not hundreds of places to look for security settings. This took our Larry close to two weeks to fix.

Yet another time, a server application which we wrote in-house ran flawlessly on both Linux and Windows XP. However, when we installed it on our Windows Server 2003 machine, it inexplicably didn't work. It's no wonder companies use Linux for many server tasks. There's also a decent number of server applications a company can purchase from Red Hat, IBM, Oracle, and a couple of other companies. Linux on the server clearly rocks, and even various statistical sites agree.

Now let us move on to the workstation and home computer segment, where we'll see a very different picture.

On the workstation, two features are key: manageability and usability. Companies like to make sure that they can install new programs across the board, easily update across the board, and change settings on every single machine in the office from one location. Granted, on Linux one can log in as root on any machine and do what one wants, but how many applications are there that allow me to automate management remotely? For example, apt-get (and its derivatives) is known as one of the best package managers for Desktop Linux, yet it has no way to send an update command to every single machine on a network. Sure, using NFS I can have an Active Directory-like setup where any user can log into any machine and get their settings and files, but how exactly do I push changes to the software on the machines themselves? Every place I've asked this question seems to have its own customized setup.

One place SSHs into every single machine individually and then pastes some huge command into the terminal. Another upgrades one machine, mirrors the hard drive, then goes to each machine in turn and re-images the installed hard disk. One place which employs a decent number of programmers wrote a series of scripts which every night download a file from a server and execute it. Another, also with an excellent programming staff, wrote its own SSH-based application which logs into every machine on the network and runs whichever commands the admin puts in on all of them, allowing real-time pushing of updates to all the machines at once.
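That last SSH-based approach is simple enough to sketch. A minimal version might look like the following - the host names, the root login, and the apt-get command here are purely illustrative, and the real thing assumes key-based SSH authentication is already deployed to every workstation:

```shell
#!/bin/sh
# Hypothetical list of workstations to manage; in practice this would
# come from a file or a directory service rather than being hard-coded.
hosts="ws01 ws02 ws03"

# The command the admin wants pushed to every machine at once.
cmd="apt-get update && apt-get -y upgrade"

# Dry run by default: print what would be executed. Set RUN=yes to
# actually push the command over SSH.
for h in $hosts; do
  if [ "$RUN" = "yes" ]; then
    # BatchMode fails fast instead of prompting for a password,
    # so one dead machine doesn't hang the whole run.
    ssh -o BatchMode=yes "root@$h" "$cmd" || echo "failed: $h" >&2
  else
    echo "would run on $h: $cmd"
  fi
done
```

The point isn't this particular script - it's that every shop ends up writing its own version of it, because no standard tool exists.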

Is it any wonder that a large company is scared to put Linux on all its machines, or that it really is expensive to maintain? We keep touting how amazing X is because of its client/server setup, or these days PulseAudio for the same reason; let us start hearing the same for massive remote management. And remember not to limit this just to installing packages: we need to be able to change system files remotely and simultaneously, with a method which becomes standard.

The other aspect is of course usability, and by usability I mean being able to use the kind of software the company needs. For some companies, documents, spreadsheets, and web browsers are the extent of the applications they need, and for that we're already there. Unless, of course, they also need 100% compatibility with the office suites used by other companies.

What about specialized niches, though? That's where real companies have their major work done. These companies use software to manage medical history, other client metadata, stocks (both monetary and in-store), and multitudes of other specialized fields. All these applications more or less connect to some server somewhere and do database manipulation. We're really talking about webapps in desktop form. So why is every last one of these third-party applications written only for Windows?

The reasons are probably threefold. If these applications ran in any standard browser, we'd be exposing more functionality to the user than we should. Do you want the user to hit stop, or the close button in the corner of their browser, in the middle of a transaction? Sure, the database should be robust and atomic enough to handle these kinds of situations, but do we want to spoon-feed these situations to the users? We also certainly don't want general system upgrades which install a newer version of the browser to break one of the key applications the company uses. Solving this requires a custom browser, bringing us back to square one when it comes to making this a desktop application.

The next reason is the classic catch-22. Why should a generic company making an application bother with anything other than the most popular OS by a landslide? We need far more Desktop Linux users for a company to bother, but if the companies don't bother, it's unlikely that desktop users will switch to Linux. Also, as I've said before, portability isn't difficult in most cases, but most won't bother unless we enlighten them.

Lastly, many of these applications are old, or at least most of their code base is. There's just no incentive to rewrite them. And when one of these applications is made in-house, it'll be made for what the rest of the company is already running.

To get Linux onto the workstation then, we need the following to take place:
  • Creation of standardized massive management tools
  • Near perfect interoperability of office suites
  • Get ports of Linux office suites to be mainstream on Windows too
  • Get work oriented applications on Windows to be written portably
  • Make Linux more popular on the Desktop in all aspects
We have to stop being scared of open source on closed-source operating systems. If half the offices out there used OpenOffice on Windows, they wouldn't mind running OpenOffice on Linux, and they wouldn't have any interoperability issues that they don't already have.

We also need to make portability excellence more the norm. These companies could benefit a lot from using Qt, for example. Qt has great SQL support. Qt contains a web browser, so webapps can be made without exposing anything unnecessary in the interface. Qt also has great, easy-to-use client/server support, with SSL to boot. Qt applications are probably the easiest type to make multilingual, and the language can be changed on the fly, which is very important for apps used worldwide, or for companies looking to save money by hiring immigrants. Lastly, Qt is easier to use than the Win32 API for these relatively basic applications. If they used 100% Qt, the majority of the time the program would work on Linux with just a simple recompile.

For the above to happen we really need a major Qt push in the developer community. The fight between GTK, wxWidgets, and Qt is hurting us here. Sure, initially Qt was a lot more closed, and we needed GTK to push Qt in the right direction. But today Qt is LGPL, offers support/maintenance contracts, and is a good 5-10 years ahead of GTK in the breadth of features supplied. Even if you like GTK better for whatever reason, it really can't stand up objectively to Qt from the big-business perspective. We need developers to get behind our best development libraries. We also need to get schools to teach the libraries we use as part of the mainstream curriculum. Fracturing the community on this point is only going to hurt us in the long run.

Lastly, we come to Linux on the home computer. What do we need on a home computer exactly? They're used for personal finances, homework, surfing the web, multimedia, creativity, and most importantly, gaming.

Are the finance applications available for Linux good enough? I really have no idea, perhaps someone can enlighten me in the comments. We'll get back to this point shortly.

For homework, I'd say Linux is there already. We have Google and Wikipedia available via the World Wide Web. Dictionaries and thesauruses are available too. We've got calculators and documents; nothing is really missing.

For surfing the web we're definitely there, no questions asked.

Multimedia we're also there aside from a few annoyances. I'll discuss this more below.

For creativity, I'm not sure where we are. Several years back, it seems all the kids used to love making greeting cards, posters, and the like using programs such as The Print Shop Deluxe or Print Artist. Do we have any decent equivalents on Linux?

Thing is, a company would have to be completely insane to port popular home publishing software to Linux. First there are all the reasons mentioned above regarding the catch-22 and the like. Then there are nutjobs like Richard Stallman out there who will crucify any company attempting to port their software to Linux. For starters, see this article, which says:
Some of the most important projects on our list are replacement projects. These projects are important because they address areas where users are continually being seduced into using non-free software by the lack of an adequate free replacement.


Notice how they're trying to crush Skype, for example. Basically, any time a company ports its application to Linux and it becomes popular enough on Desktop Linux, you'll have these nutjobs calling for the destruction of said program by completely reimplementing it and giving it away for free. And reimplement it they do - even if not as effectively, then adequately enough to dissuade anyone from ever buying the application. Then the free application gets ported to Windows too, effectively destroying the company's business model and generally the company itself. Don't believe they'll take it that far? Look how far they went to stop Qt/KDE. Remember all those old office suites and related applications available for Linux a decade ago? How many of them are still around or in business? When free versions of voice chatting are available on all platforms, and can even interface with standard telephones, do you think Skype will still be around?

Basically, trying to port a popular application to Linux is a great way to get yourself a death sentence. If for example Adobe ever ported Photoshop to Linux, there'd be such a massive upsurge in getting the GIMP or a clone to have a sane interface, and get in some of those last features, Photoshop would probably be dead in a year.

And unless some of these applications are ported to Linux, we'll probably never see niche applications as good as their Windows counterparts. Many programmers just don't care enough to develop these to the extent needed, and some only do so when they feel it's part of a holy war - giving us a whole new dimension to the catch-22.

Finally, we come to gaming. Is Linux good enough for companies to develop for? At first glance, you'd think a resounding yes. A deeper look reveals otherwise. First off, there's good ol' video. For the big games today, it's all about graphics. How many video cards provide full modern OpenGL support on Linux? The problem is basically as follows: the X Window System was designed way back when, with all sorts of cool ideas in mind, but its current driver API is simply not enough to take full advantage of accelerated OpenGL. You can easily search online and find tons of results on why X is really bad, but it really stands out when it comes to video.

NVidia has for several years now put out "evil drivers" which get the job done and provide fantastic OpenGL support on top of Linux. The drivers are viewed as evil because they bypass the bottom third of X and talk straight to the kernel, not fully following the X driver API. And of course, they're also closed source. All the other drivers today, especially the open source ones, for the most part communicate with the system via the X API. Yet they'll never measure up, because X prevents them from measuring up - and they'll continue to stick to what little X does provide. NVidia keeps citing that they can't open source their drivers because they'd lose their competitive advantage. Many have questioned this, since for the most part the basic principles are the same on all cards - what is so secret in their drivers? In reality, if they open sourced their drivers, the core functionality would probably be merged into X as a new driver API, allowing ATI and Intel to compete on equal footing, and that is the competitive advantage NVidia would lose. It's not the card per se they're trying to hide, but a driver design that would allow all cards to take full advantage of themselves by bypassing any stupidity in X. At the very least, ATI or Intel could grab a lot of that code and make it easier for themselves to write a driver that bypasses X yet still works well with it.

When it comes down to it, as tiny as the market share is that Linux already has, it becomes even smaller if you want to release an application that needs good video support. On the other hand, those same video cards work just fine in Windows.

Next comes sound, which I have discussed before. The main sound issue for games is latency, and ALSA (the default in Linux) is really bad in that regard. This gets compounded when sound has to run through a sound server on its way to the drivers that talk to the sound card. For playing music, ALSA seems just fine to everybody; you don't notice or care that the sound starts or stops a moment or two after you press the button. For videos as well, it's generally a non-issue. In most video formats, the video takes longer to decode than the sound does, so they're ready at the same time, and nothing has to be synced to input. So everything seems fine. In the worst-case scenario, you just tell your video player to alter the video/audio sync slightly, and everything is great.

When it comes to games, it's an entirely different ballgame. For the game not to appear laggy, the video has to be synced to the input. You want the gun to fire immediately after the user presses the button, without a lag. Once the bullet hits the enemy and the user sees the enemy explode, you want them to hear that enemy explode. The audio has to be synced to the video. Players will not accept the sound coming a second or two late. Now means now. There's no room for all the extra overhead that is currently required.

I find it mind-boggling that Ubuntu, a distribution designed for the average Joe, decided to route the entire system's sound through PulseAudio, and sees it as a good thing. The main advantage of PulseAudio is its client/server architecture, which lets sound generated on one machine be output on another. How many home users know of this feature, let alone have a reason to use it? The whole setup makes sound lag like crazy.

I once wrote a game with a few other developers which uses SDL or libao to output sound. Users way back when used to enjoy it. Nowadays with ALSA, and especially with PulseAudio - which SDL and libao default to outputting to in Ubuntu - users keep complaining that the sound lags two or more seconds behind the video. It's amazing this somehow became the default system setup.
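A workaround sometimes suggested at the time was to point SDL straight at ALSA instead of the sound server. The game binary name below is hypothetical; the SDL_AUDIODRIVER environment variable itself is real and read by SDL 1.2 at startup:

```shell
# Tell SDL 1.2 to open its ALSA backend directly instead of routing
# audio through PulseAudio; SDL reads this variable when it starts up.
export SDL_AUDIODRIVER=alsa

# ./mygame    # hypothetical SDL game binary, now outputting straight to ALSA
```

This sidesteps the sound server's added latency, though of course it's a per-user band-aid, not a fix for the default setup.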

Next is input. This one is easy, right? Linux surely supports input. Now let me ask you this: how many KDE or GNOME games have you seen that allow you to control them via a joystick or gamepad? The answer is, quite simply, none of them do. Neither Qt nor GTK provides any input support other than keyboard and mouse. That's right: our premier application framework libraries don't even support one of the most popular inventions of the 80s and 90s for PC gamers.

Basically, you'll be making a game using your framework library to handle keyboard and mouse support, and the moment you want to add joystick support, you'll have to switch to a different library and possibly merge a completely different event loop into the main one your program uses for everything else. Isn't it so much easier on Windows, where they provide a unified input API as part of the rest of the API you're already using?

Modern games tend to include a lot of sound, and more often than not, video as well. It'd be nice to be able to use standard formats for these, right? The various APIs out there, especially Phonon (part of Qt/KDE), are great at playing sound or video for you. But which formats should you put your media in? Which formats are you assured will be available on the system you're deploying to? Basically, all these libraries have multiple backends whose support can differ drastically, and the most popular formats, such as those based on the MPEG standards, don't come standard on most Linux distributions, thanks to being "non-free". Next you'll think: fine, let us just ship the game with uncompressed media. This actually works fine for audio, but is a mess when it comes to video. Try making a pure uncompressed AVI and running it in Xine, MPlayer, and anything else that can be used as a Phonon backend. No two video players can agree on what the uncompressed AVI format is. Some display the picture upside down, some have different visions of which byte signifies red, blue, and green, and so on.

For all of these reasons, the game market, currently the largest in home software, has difficulty designing and properly deploying games on Linux. The only companies which have managed to do it in the past are those that made major games for DOS back in the day, when there also were no good APIs or solutions for doing anything.

Now that we've wrapped up the applications side of things, let us have a look at actual usability for the home user.

We're taken back to average Joe, who wants to set up his machine. He's not sure what to do. But he hears there are great Ubuntu forums where he can ask for help. He goes and asks, and gets a response similar to the following:

Open a terminal, then type:
sudo /etc/init.d/d restart
ln -s /alt/override /bin/appstart
cd /etc/app
sudo nano b.conf

Then inside the editor add the line "Preload=yes", press ctrl+x, and answer yes to save.

Does anyone realize how intimidating this is? Even if average Joe were really Windows Power User Joe, would he really feel safe entering commands with which he is unfamiliar?

In the Windows world, we'd tell such a user to open up Windows Explorer, navigate to certain directories, copy files, edit files with Notepad, and the like. Is it really so hard to tell a user to open up Nautilus or Dolphin or whatever their file manager is, navigate to a certain location, and edit a file with gedit/kwrite?

Sure, it is faster to paste a few quick commands into the terminal, but we're turning away potential users. The home user should never be told he has to open a terminal. In 98% of cases he really doesn't, and what he wants or needs can be done via the GUI. Let us start helping these users appropriately.

Next is the myth about compiling. I saw an article written recently that Linux sucks because users have to compile their own software. I haven't compiled any software on Linux in years, except for those applications that I work on myself. Who in the world is still perpetuating this myth?

It's actually sad to see some distributions out there that force users to recompile things. No, I'm not talking about Gentoo, but Red Hat, actually. We have a server running Red Hat at work, and we needed mod_rewrite added to it the other day. Guess what? We had to recompile Apache to add that module. On Debian-based distros one just runs "a2enmod rewrite", and presto, the module is enabled. Why the heck are distros forcing these archaic design principles on us?
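There's no deep magic behind that convenience either, which makes the omission all the stranger. Conceptually, a2enmod is little more than symlink bookkeeping. The sketch below imitates the idea in throwaway demo directories rather than the real /etc/apache2 tree, so treat it as an illustration of the mechanism, not the actual tool:

```shell
# Demo directories standing in for /etc/apache2/mods-available and
# /etc/apache2/mods-enabled; don't run this against a real server config.
avail=./mods-available
enabled=./mods-enabled
mkdir -p "$avail" "$enabled"

# A module ships a .load file in mods-available...
echo "LoadModule rewrite_module modules/mod_rewrite.so" > "$avail/rewrite.load"

# ...and "enabling" it is just symlinking that file into mods-enabled,
# which Apache's main config includes at startup.
ln -sf "../mods-available/rewrite.load" "$enabled/rewrite.load"
```

On a real Debian system, reloading Apache after the symlink appears is all it takes - no compiler in sight.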

Then there's just the overall confusion, which many others point out. Do I use KDE or GNOME? Pidgin or Kopete? Firefox or Konqueror? X-Chat or Konversation? VLC or MPlayer? RPM or DEB? The question is, is this a problem? So what if we have a lot of choices.

The issue arises when the choice breaks down the field. When deploying applications this can get especially nightmarish. We need to focus more on providing one best solution, and improving that solution where it's lacking, as opposed to having multiple versions of everything. OSS vs. ALSA, RPM vs. DEB, and a bunch of other choices which are fundamental to the system really shouldn't be around these days.

The other end of the spectrum matters less to providing a coherent system to deploy on, but it does confuse some users. When I want to help someone, do I just assume they use Krusader as a file manager? Should I try to be generic about file managers? Should I have them install Krusader so I can help them? This theme plays out in many variations on most Linux help forums:
"Oh yes, go open that in gedit."
"Gedit? What's Gedit?"
"Are you running KDE or GNOME?"
"GNOME"
"Are you sure?"
"Wait, are GNOME and XFCE the same thing?"

What's really bad, though, is when users understand there are multiple applications, but can't find one to satisfy them. It's easy when the choices are simple or advanced; you choose the one more suited to your tastes for that kind of application. But it gets really annoying when one of those apps tries to be like the other. Do we need two apps that behave exactly the same but are different? If you started different, and you each have your own communities, then stay different. We don't need variety when there is no real difference. KDE 4 trying to be more like GNOME is just absurd. Trying to steal GNOME's user base with a design which appeals more to GNOME users but has a couple of flashy features isn't a way to grow your user base; it's just a way to swap one for another.

Nintendo in the past couple of years was faced with losing much of its user base to Sony. Back in the late 90s, for example, all the cool RPGs for which Nintendo was known saw their sequels move to Sony hardware. Instead of trying to win back old gamers, though, Nintendo took an entirely different approach: it realized the largest market of gamers wasn't on the other systems, but rather not on any system at all. The largest market available for targeting is generally those users not yet in the market, unless the market in question is already ubiquitous.

That said, redesigning an existing program to target current non-users can alienate loyal users, depending on what sort of changes are necessary, so unless pulling in all the non-users is guaranteed, one should be careful with this strategy. A project with nonpaying users is more likely to get away with such a drastic change, as it isn't financially dependent on its users either way. Balance is required, so that many new users are acquired while a minimal number of existing users are alienated.

To get Linux on home computers the following needs to take place:
  • We need to stop fighting every company that ports a decent product to Linux
  • We should write good programs even if there is nothing else to compete with on Linux
  • We shouldn't leave programs as adequate
  • We need a real solution to the X fiasco
  • We need a real solution to the sound mixing/latency fiasco, and clunky APIs and more sound servers aren't it
  • We need to offer tools to the gaming market and try to capture it
  • Support has to be geared towards the users, not the developers
  • Stop the myths, and prevent new users installing distros that perpetuate them
  • Stop competition between almost identical programs
  • Let programs that are similar but very different go their own ways
  • Bring in users that don't use anything else
  • Keep as many old users as possible and not alienate them
Linux being so versatile is great, and hopefully it will break into new markets. As I said before, many routers use Linux. To be popular on the desktop, though, it either has to bring users to desktops that currently don't have any, or manage to steal users from other desktops while keeping the ones it has. Becoming Windows isn't the answer, as others like to point out, but competing toe to toe in all existing areas while obtaining new ones is. In some areas, we may just have to throw away existing users to an extent (possibly eliminating X), if we want to grab everyone else out there.

Speaking of versatility, has everyone seen this? Linux grown technology does in many ways have the potential to beat Windows and the rest in a large way.

Everyone remember Duke Nukem Forever? Supposedly the best game ever, since it promised unprecedented levels of interactivity within the game world - such as being able to go to a soda machine, put in some money, press buttons, and buy a drink. With Qt, we can provide a game with panels throughout it where a user can control many things, and there'd be a rich set of controls developers can easily add. Imagine playing a game where you go check out the enemy's computer system, and the system seems pretty normal: you can bring up videos of the plans they're working on. Or you notice a desktop with a web browser, where you yourself can go ahead and log in and check your e-mail within the game itself, providing a more real experience. Or the real clincher: you know those games where, plot-wise, you break into the enemy factory and reprogram the robots or missiles or whatever? With Qt, the "code" you're reprogramming can be actual JavaScript code used in the game. If made simple enough, it can seem realistic, and really give a lot of flexibility to those that want to reprogram the enemy's design. We have the potential to provide unprecedented levels of gameplay. If only we could get companies to put out games using Qt+OpenGL+Phonon - which they will probably not even consider looking at until Qt has joystick support. Even then we still need to promote Qt more, which will make it easier to get companies to port games to Linux...

I think Ubuntu has some good ideas for winning over home users, but it could be improved upon in many ways. Ultimately, we need a lot of change in the area of marketing to home users. There's so much that needs to be fixed, and in some areas, we're not even close.

Feel free to post your comments, arguments, and disagreements.

18 comments:

Dan said...

Wow... I must say, this is your best post to date. I hope i can manage to respond to everything important, but again... wow.

I'm not gonna touch the server issue, because aside from a certain select group of total morons and niche market goers, that's pretty much covered, as you said.

As for workstations, uniformity and automated management across systems is a huge problem, but one that could be solved with minimal effort. As for office software, I don't understand why people can't use open source options cross-platform. I have a certain ladyfriend who's been scared of my computer and its evil Linux for some time now (it used to be a running joke with us), yet she uses OpenOffice on her Windows machine just fine, because she finds it too expensive to keep buying licenses for MS Office. Sure she occasionally receives a file that doesn't convert 100%, but she's able to manage just fine. And I'd be willing to bet that the coderlings have no problem using OpenOffice on Linux for their homework (or do they prefer KOffice?)

I like your points about Qt, I think it's an excellent framework that would solve all sorts of problems if it were more widely used.

As for gaming, I currently have a windows box that I use solely for games, because of the sorry state of gaming portability.

Finally, we come to what I consider to be the pièce de résistance of the entire article, which is the paradigm shift, where you show, quite clearly, that the head of The Militant Supporters of Free Software, Sir-Holier-Than-Thou Stallman, is actually going to destroy free software in the long run with his puristic and elitist ideals. I had always thought that the notion of refusing to accept anything that is not "free software" was utterly ridiculous, but I had never realized that such a notion was actually destructive to the grander basis of that selfsame free software, that by destroying non-free software on the free software platform, you push away those who might otherwise join that platform. It seems like these Jihadist Open Sourcers are so focused on keeping "their territory" free of infidels, they lose sight of the fact that they destroy that territory in the process. These radicals need to take a step back to look at the larger picture, and realize that allowing non-free software onto Linux would only help the free software world.

morricone said...

I mostly agree with you, but you are not quite up to date on the graphics driver issue.

DRI2, GEM, TTM, Gallium3D are the magic keywords.

For home banking I use GnuCash, but I wouldn't recommend double-entry accounting to the average Joe.

The problem with ALSA is that computers nowadays don't have decent sound cards with hardware mixing anymore. And that isn't going to change, with Windows Vista and higher mandating software mixing.

ALSA doesn't introduce the latency. Running JACK on top of ALSA is sufficient even for audio professionals.

And I agree with you that deploying closed source apps for Linux is one of the hardest things to do.

Having been a Gentoo user since 2005, I've seen quite a few big changes, and I have to say that we are mostly on the right track.

insane coder said...

You just listed 4 different technologies for improved OpenGL. You really think this is an improvement? ATI and Intel are still horribly behind in OpenGL support and much slower than NVidia.

ALSA definitely introduces latency from the application to output. It can be seen in any application which algorithmically generates sound the instant keyboard input changes, i.e. games. Professional audio is a different market.

We can't keep blaming drivers. A dozen different cards having lower latency with OSS4 than with ALSA isn't a coincidence.

Software mixing isn't what causes latency, as OSS4 with software mixing and Vista are both fine.

morricone said...

>You just listed 4 different technologies for improved OpenGL. You really think this is an improvement?

I should probably have explained those technologies a bit more. Actually, they all work together; only GEM and TTM are two competing kernel-based memory managers. One or the other will fade out.

>ATI and Intel are still horribly behind in OpenGL support and much slower than NVidia.

Closed source NVidia and ATI are IMHO pretty close. It will probably take at least another 2-3 years until we see comparable support from the open source drivers.

>ALSA definitely introduces latency from the application to output. It can be seen in any application which algorithmically generates sound the instant keyboard input changes, i.e. games.

I never noticed a difference running Q3, Q4, UT2004, or Doom3 on Windows vs. Linux.

>Professional audio is a different market.

Yes, it's different, and low latency is key to it. I can get as low as 10 ms using JACK and ALSA with my mediocre Audigy 2.

>We can't keep blaming drivers. A dozen different cards having lower latency with OSS4 than with ALSA isn't a coincidence.

Unfortunately I never got a chance to test this.

I acknowledge that ALSA isn't perfect, but it's quite capable of doing the job. I just don't see OSS becoming the standard again.

panzeroceania said...

I also use GnuCash for finance, and I believe you can import Quicken files, although I'm not entirely sure on that point.

I'm a KDE4 user myself, but one thing I will say is that while it's the most modern, it certainly needs to work on some bloat. It runs just fine on my laptop, but I know many people out there for whom it would not run at full speed, and this could be very frustrating for them.

Kumool said...

Well, about the user never opening a terminal: the GUI way is quite a bit more work. Say:
open the file manager
go to /some/dir
edit somefile
replace, replace, replace
(something that can be done easily with sed)
make a copy of it and paste it someplace
(cp)
See where I'm going? When you do something on the terminal, it's shorter than telling a GUI to do it. Telling the user to never open the terminal on a Unix system is just unnatural, because the terminal is one of the good things about Unix; doing some things in the terminal really is simpler. Of course, I know some people fear the terminal because of its shell, that plain-looking prompt, but you can add some color to it and make it quite playful and helpful. And if you don't like configuring, there's the Friendly Interactive SHell, http://fishshell.org,
which is what I use... maybe that's why I like the terminal :)
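For instance, the file-editing steps above collapse into just two commands. A minimal sketch, where the paths and the old/new strings are made up for illustration:

```shell
#!/bin/sh
# Keep a copy of the original someplace safe first...
cp /some/dir/somefile /someplace/somefile.bak
# ...then do all the "replace replace replace" in one shot with sed.
# (GNU sed's -i flag edits the file in place.)
sed -i 's/oldtext/newtext/g' /some/dir/somefile
```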

Also, the problem of "open X app" and the user not knowing what it is, is just a problem of choosing which words to use. It could easily be fixed by just saying "edit" or "decompress" instead of "open X app" or whatnot.


Also, there's an MPEG replacement, which is Ogg (http://vorbis.com), or rather Theora & Vorbis, and it beats MPEG by a long shot. Although it's unfortunate that no MPEG support is shipped as standard, I think it's better off that way, because of this part:

"If you decide to sell your music in MP3 format, you are responsible for paying Fraunhofer a percentage of each sale because you are using their patents. Vorbis is patent and license-free, so you will never need to pay anyone in order to sell, give away, or stream your own music."

So yeah, it's better in every way; people and companies need to adopt it. And it's open source: the main library is BSD licensed, so even for those who don't like the GPL, there's just no excuse not to adopt it.

Regards.

Kumool said...

Forgot to add: if you don't want the user to ever look at a terminal, then the commands should just be saved in a .sh script.

insane coder said...

Kumool:
If it really is easier to use a prompt, then the person trying to help should just package it as a shell script. Don't have the user know anything other than "it's a program which will fix your problem". Have them download it and run it.
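Something like this minimal sketch, say; the file name and contents here are hypothetical stand-ins for whatever the actual fix is:

```shell
#!/bin/sh
# fix-it.sh -- hypothetical "download me and run me" fix script.
set -e                           # bail out rather than half-apply the fix

CONF="$HOME/.example-app.conf"   # made-up config file, for illustration only

# Back up whatever the user had, so the fix is reversible.
if [ -f "$CONF" ]; then
    cp "$CONF" "$CONF.bak"
fi

# Write the known-good settings.
cat > "$CONF" <<'EOF'
# settings written by fix-it.sh
use_sane_defaults=yes
EOF

echo "All fixed. (Your old settings, if any, are in $CONF.bak)"
```

The user double-clicks it or runs it once, and never needs to know that sed, cp, or a prompt was involved.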

As for OGG, as nice as it is, it has a bunch of compatibility issues in certain cases. You don't have a guarantee that the end user will be able to view your Theora video properly. Vorbis is pretty good though.

Also regarding OGG, see this.

Rich said...

There is also the famous chicken-and-egg argument with Linux: no one will port their apps if there are no users, and no users will switch without good apps. I also get, respect, and agree with your point about people like Stallman. I appreciate their passion for free software, but sometimes you just gotta give in.

That said, at this point in time, I think the best chance Linux has for mom-and-pop desktop use will be if Google decides to develop Android into a full blown desktop OS. It's already running on netbooks.

If they did that, they're a big enough player to get people to start porting the big commercial apps, and to get more driver support. I can't tell you how many times I've heard people say they'd move to Linux if all the Adobe apps were ported.

That said, I'm a happy Linux user. It serves all of my needs, and I appreciate the free and open source models.

insane coder said...

Kumool, the link above to WP has expired (I should've used a permanent link). Anyway, the WP content about OGG has moved here.

Marcos said...

I totally agree with you.
A few specific comments:

* I totally recognize the work that Stallman has done for free software, but his jihadist mindset is totally old and tired. People like him scare off new users (especially by their appearance; okay, that was a joke).

* From my point of view, GNOME has stagnated, and it seems that it is not going anywhere. I might be wrong, but it seems that the current GNOME project is a mess in terms of code (GTK, Mono wrappers, tons of language wrappers). And there's also the problem with GTK itself: like you said, GTK seems like a toolkit from 5 years ago.

* KDE 4.* is surely ahead of GNOME in terms of code and 'evolution', and unfortunately it's not the primary desktop used today. Ubuntu pushing GNOME on its users dims KDE. One way to change this would be to make KDE the 'default' desktop, but that is probably impossible, since many of the jihadists would cry out loud.

R.S. Woods said...

I found this article via Google, and if I may compliment you, it is quite nice. It made me think about a lot of things, such as:

What does Linux becoming mainstream really mean? What would the purpose of that be, and would it even be desirable?

Taneli said...

@ Kumool
"theora & Vorbis and it beats mpeg by a longshot"

I'm sorry, but Theora is far behind MPEG-4 AVC (aka H.264) in terms of quality. It doesn't even come close.

Vorbis is pretty good, but Theora is just terrible.

Chris said...

Linux on those half-sized laptop thingies is definitely a step forward. The problem is that now that they are getting larger amounts of memory and beefier CPUs, half of them can run Windows XP, so this trend may actually reverse again. Still, at the moment I think that is one of the strongest areas for Linux.

Kumool said...

@Taneli
Sorry, I've only ever watched one video in Theora (I think it was Theora; I'm not even sure), and the quality was terrible, but IMHO it was tolerable, since it was 10 minutes and only 5 MB, so I guess that makes it okay (by my standards)...
I haven't really used MP4; I don't particularly like hunting for players.

Kumool said...

The message was for insane coder too :)

insane coder said...

Kumool:

I myself do a lot of video encoding. I haven't tested Theora in a good year or so, but when I did, it failed to perform as well as the x264 encoder (H.264 format).

A patent-free open source 'replacement' would need to perform better than the patented performers, or at the very least, equal to them.

Brent said...

Very good article. One thing that I have noticed, as a new Linux user, is that the names of many of the popular applications are quite abstract. Not to say that an oddly named application cannot become mainstream, or that a boring name is a necessity, but it does add to the confusion for a new convert. It's not so obvious what a Firefox does, but I can probably guess that Internet Explorer allows me to explore the internet. It's also pretty easy to guess that Photoshop will allow me to "shop" my photos (at least I can figure that it is some form of photo/image utility), but for all I know, a GIMP is like that creepy guy from the pawn shop scene in Pulp Fiction. Lame examples, but I think they illustrate my point. There are some pretty off-the-wall names floating around in the open source community. Not a major deal for most, but I think slightly more intuitive naming schemes (where possible) could be a subtle positive for the cause.