A quick summary of the last article for those who didn't read it:
- Sound in Linux has an interesting history, and historically lacked sound mixing on sound cards that relied on software rather than hardware for mixing.
- Many sound servers were created to solve the mixing issue.
- Many libraries were created to solve multiple back-end issues.
- ALSA replaced OSS version 3 in the Kernel source, attempting to fix existing issues.
- There was a closed source OSS update which was superb.
- Linux distributions have been removing OSS support from applications in favor of ALSA.
- The average sound developer prefers a simple API.
- Portability is a good thing.
- Users are having issues in certain scenarios.
Now much has changed, namely:
- OSS is now free and open source once again.
- PulseAudio has become widespread.
- Existing libraries have been improved.
- New Linux Distributions have been released, and some existing ones have attempted an overhaul of their entire sound stack to improve users' experience.
- People read the last article, and have more knowledge than before, and in some cases, have become more opinionated than before.
- I personally have looked much closer at the issue to provide even more relevant information.
Let's take a closer look at the pros and cons of OSS and ALSA as they are, not five years ago, not last year, not last month, but as they are today.
First off, ALSA.
ALSA consists of three components. The first is the drivers in the Kernel, with an API exposed for the other two components to communicate with. The second is a sound developer API to allow developers to create programs which communicate with ALSA. The third is a sound mixing component which can be placed between the other two to allow multiple programs using the ALSA API to output sound simultaneously.
To help make sense of the above, here is a diagram:
Note, the diagrams presented in this article are made by myself, a very bad artist, and I don't plan to win any awards for them. Also they may not be 100% absolutely accurate down to the last detail, but accurate enough to give the average user an idea of what is going on behind the scenes.
A sound developer who wishes to output sound in their application can take any of the following routes with ALSA:
- Output using ALSA API directly to ALSA's Kernel API (when sound mixing is disabled)
- Output using ALSA API to sound mixer, which outputs to ALSA's Kernel API (when sound mixing is enabled)
- Output using OSS version 3 API directly to ALSA's Kernel API
- Output using a wrapper API which outputs using any of the above 3 methods
As can be seen, ALSA is quite flexible: it has sound mixing, which OSSv3 lacked, but still provides legacy OSSv3 support for older programs. It also offers the option of disabling sound mixing in cases where the mixing reduces quality in any way, or introduces latency which the end user may not want at a particular time.
Two points should be clear: ALSA's sound mixing is optional and runs outside the Kernel, and the path ALSA's legacy OSS API takes lacks sound mixing.
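To make the first point concrete, here is an illustrative sketch (my own, not taken from ALSA's documentation) of how the mixed and unmixed paths differ from a program's point of view; whether dmix gets used normally comes down to the device name that is opened.

```c
/* Sketch: the device name decides the path.  "default" normally routes
 * through ALSA's user-space dmix mixer on cards without hardware mixing,
 * while "hw:0,0" talks straight to the Kernel driver, bypassing any mixing
 * (and failing with EBUSY if another program already holds the card). */
#include <alsa/asoundlib.h>

snd_pcm_t *open_playback(int want_mixing)
{
    snd_pcm_t *pcm = NULL;
    const char *device = want_mixing ? "default" : "hw:0,0";

    if (snd_pcm_open(&pcm, device, SND_PCM_STREAM_PLAYBACK, 0) < 0)
        return NULL;   /* e.g. the card is busy on the direct path */
    return pcm;
}
```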
An obvious con should be seen here: ALSA, which was initially designed to fix the sound mixing issue at a lower and more direct level than a sound server, doesn't provide that mixing to "older" programs using the legacy OSS path.
Obvious pros are that ALSA is free, open source, has sound mixing, can work with multiple sound cards (all of which OSS lacked during much of version 3's lifespan), is included as part of the Kernel source, and tries to cater to old and new programs alike.
The less obvious cons are that ALSA is Linux only; it doesn't exist on FreeBSD, Solaris, Mac OS X, or Windows. Also, the average developer finds ALSA's native API too hard to work with, but that is debatable.
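For the curious, here is roughly what a minimal playback program looks like against ALSA's native API. This is only a sketch; it leans on the snd_pcm_set_params() convenience call from recent alsa-lib versions, which hides the hardware/software parameter negotiation that makes the full API feel heavy.

```c
/* Minimal ALSA playback sketch: one second of silence to the default device.
 * Compile with -lasound. */
#include <alsa/asoundlib.h>

int main(void)
{
    snd_pcm_t *pcm;
    if (snd_pcm_open(&pcm, "default", SND_PCM_STREAM_PLAYBACK, 0) < 0)
        return 1;

    /* 16-bit little-endian, interleaved, stereo, 44.1 kHz,
     * allow software resampling, ask for ~100 ms of buffering. */
    if (snd_pcm_set_params(pcm, SND_PCM_FORMAT_S16_LE,
                           SND_PCM_ACCESS_RW_INTERLEAVED,
                           2, 44100, 1, 100000) < 0)
        return 1;

    static short buf[2 * 44100];                  /* one second of silence */
    snd_pcm_sframes_t n = snd_pcm_writei(pcm, buf, 44100);
    if (n < 0)
        snd_pcm_recover(pcm, n, 0);               /* handle underruns etc. */

    snd_pcm_drain(pcm);
    snd_pcm_close(pcm);
    return 0;
}
```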
Now let's take a look at OSS today. OSS is currently at version 4, and is a completely different beast than OSSv3 was.
Where OSSv3 went closed source, OSSv4 today is open source again, licensed under the GPL, 3-clause BSD, and CDDL.
While the decade-old OSSv3 is included in the Linux Kernel source, the new, greatly improved OSSv4 is not, and thus may be a bit harder for the average user to try out. The older OSSv3 lacked sound mixing and support for multiple sound cards; OSSv4 does not. Unfortunately, most people who discuss OSS, or try it to see how it stacks up against ALSA, are referring to or testing the version that is a decade old, giving a distorted picture of the facts as they are today.
Here's a diagram of OSSv4:
A sound developer wishing to output sound has the following routes on OSSv4:
- Output using OSS API right into the Kernel with sound mixing
- Output using ALSA API to the OSS API with sound mixing
- Output using a wrapper API to any of the above methods
Unlike in ALSA, when using OSSv4, the end user always has sound mixing. Also because sound mixing is running in the Kernel itself, it doesn't suffer from the latency ALSA generally has.
Although OSSv4 does offer its own ALSA emulation layer, it's pretty bad, and I haven't found a single ALSA program which is able to output via it properly. However, this isn't an issue, since as mentioned above, ALSA's own sound developer API can output to OSS, providing perfect compatibility with ALSA applications today. You can read more about how to set that up in one of my recent articles.
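For reference, the setup referred to above boils down to a couple of lines of ALSA configuration. The sketch below assumes the "oss" output plugin from the alsa-plugins package is installed and that your OSS device nodes are the usual /dev/dsp and /dev/mixer:

```
# /etc/asound.conf (or ~/.asoundrc): route the ALSA API to the OSS back-end.
# Assumes the "oss" pcm/ctl plugins shipped in the alsa-plugins package.
pcm.!default {
    type oss
    device /dev/dsp
}
ctl.!default {
    type oss
    device /dev/mixer
}
```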
ALSA's own library is able to do this, because it's actually structured as follows:
As you can see, it can output to either OSS or ALSA Kernel back-ends (other back-ends too which will be discussed lower down).
Since both OSS and ALSA based programs can use an OSS or ALSA Kernel back-end, the differences between the two are quite subtle (note, we're not discussing OSSv3 here) and not immediately obvious; the following boils down to what I know from research and testing.
- OSS always has sound mixing; ALSA does not.
- OSS sound mixing is of higher quality than ALSA's, due to OSS using more precise math in its mixing.
- OSS has less latency than ALSA when mixing sound, due to everything running within the Linux Kernel.
- OSS offers per-application volume control; ALSA does not.
- ALSA can have the Operating System go into suspend mode while sound is playing and come out of it with sound still playing; OSS, on the other hand, needs the application to restart its sound.
- OSS is the only option for certain sound cards, where the ALSA drivers for that card are either really bad or non-existent.
- ALSA is the only option for certain sound cards, where the OSS drivers for that card are either really bad or non-existent.
- ALSA is included in Linux itself and is easy to get ahold of; OSS (v4) is not.
Now the question is where does the average user fall in the above categories? If the user has a sound card which only works (well) with one or the other, then obviously they should use the one that works properly. Of course a user may want to try both to see if one performs better than the other one.
If the user really needs to have a program output sound right until Linux goes into suspend mode, and then continues where it left off when resuming, then ALSA is (currently) the only option. I personally don't find this to be a problem, and furthermore I doubt it's a large percentage of users that even use suspend in Linux. Suspend in general in Linux isn't great, due to some rogue piece of hardware like a network or video card which screws it up.
If the user doesn't want a hassle, ALSA also seems the obvious choice, as it's shipped directly with the Linux Kernel, so it's much easier for the user to use a modern ALSA than it is a modern OSS. However it should be up to the Linux Distribution to handle these situations, and to the end user, switching from one to the other should be seamless and transparent. More on this later.
Yet we also see that, thanks to better sound mixing and lower latency when mixing is involved, OSS is the better choice as long as none of the above issues are present. The better mixing is generally only noticed at higher volume levels or in rare cases, and the latency I'm referring to is generally only a problem if you play heavy duty games, not if you just want to listen to some music or watch a video.
But wait this is all about the back-end, what about the whole developer API issue?
Many people like to point fingers at the various APIs (I myself did too to some extent in my previous article). But they really don't get it. First off, this is how your average sound wrapper API works:
The program outputs sound using a wrapper, such as OpenAL, SDL, or libao, and then sound goes to the appropriate high level or low level back-end, and the user doesn't have to worry about it.
Since the back-ends can be various Operating Systems sound APIs, they allow a developer to write a program which has sound on Windows, Mac OS X, Linux, and more pretty easily.
Some, like Adobe, like to say this is some kind of problem that makes it impossible to output sound in Linux. Nothing could be further from the truth. Graphs like these are very misleading. OpenAL, SDL, libao, GStreamer, NAS, Allegro, and more all exist on Windows too. I don't see anyone complaining there.
I can make a similar diagram for Windows:
This above diagram is by no means complete, as there's XAudio, other wrapper libs, and even some Windows only sound libraries which I've forgotten the name of.
This by no means bothers anybody, and should not be made an issue.
In terms of usage, the libraries stack up as follows:
OpenAL - Powerful, tricky to use, great for "3D audio". I personally was able to get a lot done by following a couple of examples, and only spent an hour or two adding sound to an application.
SDL - Simplistic, uses a callback API, decent if it fits your program design. I personally was able to add sound to an application in half an hour with SDL, although I don't think it fits every case load.
libao - Very simplistic, incredibly easy to use, although problematic if you need your application's sound output not to block. I added sound to a multitude of applications using libao in a matter of minutes. I just think it's a bit more annoying if you need to give your program its own sound thread, so again it depends on the case load.
I haven't played with the other sound wrappers, so I can't comment on them, but the same ideas are played out with each and every one.
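To give a feel for how little code the simpler wrappers need, here is a minimal libao sketch that plays a one-second tone through whatever back-end libao is configured to use; treat it as an illustration rather than a polished example.

```c
/* Minimal libao playback sketch: a one-second 440 Hz tone.
 * Compile with -lao -lm. */
#include <ao/ao.h>
#include <math.h>
#include <stdlib.h>

int main(void)
{
    ao_initialize();

    ao_sample_format fmt = {0};
    fmt.bits = 16;
    fmt.channels = 2;
    fmt.rate = 44100;
    fmt.byte_format = AO_FMT_LITTLE;

    ao_device *dev = ao_open_live(ao_default_driver_id(), &fmt, NULL);
    if (!dev)
        return 1;

    int frames = fmt.rate;                    /* one second of audio */
    short *buf = malloc(frames * 2 * sizeof(short));
    for (int i = 0; i < frames; i++) {
        short s = (short)(32767.0 * sin(2.0 * M_PI * 440.0 * i / fmt.rate));
        buf[2 * i] = buf[2 * i + 1] = s;      /* same sample on both channels */
    }
    ao_play(dev, (char *)buf, frames * 2 * sizeof(short)); /* blocks while writing */

    free(buf);
    ao_close(dev);
    ao_shutdown();
    return 0;
}
```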
Then of course there are the actual OSS and ALSA APIs on Linux. Now why would anyone use them when there are lovely wrappers that are more portable and customized to match any particular case load? In the average case this is in fact true, and there is no reason to use OSS or ALSA's API to output sound. In some cases, though, a wrapper API can add latency which you may not want, and you may not need any of the advantages a wrapper API provides.
Here's a breakdown of how OSS and ALSA's APIs stack up.
OSSv3 - Easy to use, most developers I spoke to like it, exists on every UNIX but Mac OS X. I added sound to applications using OSSv3 in 10 minutes.
OSSv4 - Mostly backwards compatible with v3, even easier to use, exists on every UNIX except Mac OS X (and except Linux when Linux uses the ALSA back-end), and has sound re-sampling and AC3 decoding out of the box. I added sound to several applications using OSSv4 in 10 minutes each.
ALSA - Hard to use, most developers I spoke to dislike it, poorly documented, not available anywhere but Linux. Some developers however prefer it, as they feel it gives them more flexibility than the OSS API. I personally spent 3 hours trying to make heads or tails of the documentation and add sound to an application. Then I found sound only worked on the machine I was developing on, and had to spend another hour going over the docs and tweaking my code to get it working on both machines I was testing on at the time. Finally, I released my application with the ALSA back-end, only to find several people complaining about no sound, and started receiving patches from several developers. Many of those patches fixed sound on their machine, but broke sound on one of mine. Here we are a year later, and after many hours wasted by several developers, my application's ALSA output now seems to work decently on all machines tested, but I sure don't trust it. We as developers don't need these kinds of issues. Of course, you're free to disagree, and even cite examples of how you figured out the documentation, added sound quickly, and had it work flawlessly everywhere for everyone who tested your application. I must just be stupid.
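To illustrate why developers keep calling the OSS API easy, here is essentially the whole thing. This sketch plays a one-second tone using nothing but open(), ioctl(), and write(), and should behave the same against OSSv3, OSSv4, or ALSA's OSS emulation:

```c
/* Minimal OSS playback sketch: open /dev/dsp, set the format, write samples.
 * Compile with -lm. */
#include <sys/soundcard.h>
#include <sys/ioctl.h>
#include <fcntl.h>
#include <unistd.h>
#include <math.h>

int main(void)
{
    int fd = open("/dev/dsp", O_WRONLY);
    if (fd < 0)
        return 1;

    int fmt = AFMT_S16_LE, channels = 2, rate = 44100;
    ioctl(fd, SNDCTL_DSP_SETFMT, &fmt);        /* sample format */
    ioctl(fd, SNDCTL_DSP_CHANNELS, &channels); /* stereo        */
    ioctl(fd, SNDCTL_DSP_SPEED, &rate);        /* sample rate   */

    static short buf[2 * 44100];               /* one second, stereo */
    for (int i = 0; i < 44100; i++) {
        short s = (short)(32767.0 * sin(2.0 * M_PI * 440.0 * i / 44100.0));
        buf[2 * i] = buf[2 * i + 1] = s;
    }
    write(fd, buf, sizeof buf);                /* a plain write(), like any file */

    close(fd);
    return 0;
}
```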
Now I previously thought the OSS vs. ALSA API issue was significant to end users, insofar as what they're locked into, but really it only matters to developers. The main issue, though, is that if I want to take advantage of all the extra features that OSSv4's API has to offer (and I do), I have to use the OSS back-end. Users don't have to care about this one, unless they use programs which take advantage of these features, of which there are few.
However regarding wrapper APIs, I did find a few interesting results when testing them in a variety of programs.
App -> libao -> OSS API -> OSS Back-end - Good sound, low latency.
App -> libao -> OSS API -> ALSA Back-end - Good sound, minor latency.
App -> libao -> ALSA API -> OSS Back-end - Good sound, low latency.
App -> libao -> ALSA API -> ALSA Back-end - Bad sound, horrible latency.
App -> SDL -> OSS API -> OSS Back-end - Good sound, really low latency.
App -> SDL -> OSS API -> ALSA Back-end - Good sound, minor latency.
App -> SDL -> ALSA API -> OSS Back-end - Good sound, low latency.
App -> SDL -> ALSA API -> ALSA Back-end - Good sound, minor latency.
App -> OpenAL -> OSS API -> OSS Back-end - Great sound, really low latency.
App -> OpenAL -> OSS API -> ALSA Back-end - Adequate sound, bad latency.
App -> OpenAL -> ALSA API -> OSS Back-end - Bad sound, bad latency.
App -> OpenAL -> ALSA API -> ALSA Back-end - Adequate sound, bad latency.
App -> OSS API -> OSS Back-end - Great sound, really low latency.
App -> OSS API -> ALSA Back-end - Good sound, minor latency.
App -> ALSA API -> OSS Back-end - Great sound, low latency.
App -> ALSA API -> ALSA Back-end - Good sound, bad latency.
If you're having a hard time trying to wrap your head around the above chart, here's a summary:
- OSS back-end always has good sound, except when using OpenAL->ALSA to output to it.
- The ALSA back-end generally sounds better and has lower latency when reached via the OSS API (generally because that path avoids any sound mixing, as per an earlier diagram).
- OSS related technology is generally the way to go for best sound.
But wait, where do sound servers fit in?
Sound servers were initially created to deal with a problem caused by OSSv3 which no longer exists, namely the lack of sound mixing. The sound server stack today looks something like this:
As should be obvious, these sound servers today do nothing except add latency, and should be done away with. KDE 4 has moved away from the aRts sound server, and instead uses a wrapper API known as Phonon, which can deal with a variety of back-ends (some of which can themselves go through a particular sound server if need be).
However as mentioned above, ALSA's mixing is not of the same high quality as OSS's is, and ALSA also lacks some nice features such as per application volume control.
Now one could turn off ALSA's low quality mixer, or have an application do its own volume control internally by modifying the sound wave it's outputting, but these choices aren't friendly towards users or developers.
Seeing this, Fedora and Ubuntu have both stepped in with a so-called state-of-the-art sound server known as PulseAudio.
If you remember this:
As you can see, ALSA's API can also output to PulseAudio, meaning programs written using ALSA's API can output to PulseAudio and use PulseAudio's higher quality sound mixer seamlessly without requiring the modification of old programs. PulseAudio is also able to send sound to another PulseAudio server on the network to output sound remotely. PulseAudio's stack is something like this:
As you can see it looks very complex, and a 100% accurate breakdown of PulseAudio is even more complex.
Thanks to PulseAudio being so advanced, most of the wrapper APIs can output to it, and Fedora and Ubuntu ship with all of that set up for the end user. In some cases it can also receive sound written for another sound server such as ESD, without requiring ESD to run on top of it. It also means that many programs now go through many layers before they reach the sound card.
Some have seen PulseAudio as the new voodoo which is our new savior: sound written to any particular API can be output via it, and it has great mixing to boot.
Except many users who play games, for example, are crying that this adds a TREMENDOUS amount of latency, and it is very noticeable even in not so high-end games. Users don't like hearing enemies explode a full 3 seconds after they saw the enemy explode on screen. Don't let anyone kid you: there's no way a sound server, especially one with this level of bloat and complexity, will ever work with anything approaching the low latency acceptable for games.
Compare the insanity that is PulseAudio with this:
Which do you think looks like a better sound stack, considering that their sound mixing, per application volume control, compatibility with applications, and other features are on par?
And yes, let's not forget the applications. I'm frequently told about how some application is written to use a particular API, therefore either OSS or ALSA needs to be the back-end it uses. However, as explained above, either API can be used with either back-end. If set up right, you won't lack sound in newer versions of Flash when using the OSS back-end.
So where are we today exactly?
The biggest issue I find is that the distributions simply aren't set up to make the choice easy on the users. Debian and derivatives provide a Linux sound base package to select whether you want OSS or ALSA to be your back-end, except it really doesn't do anything. Here's what we need from such a package:
- On selecting OSS, it should install the latest OSS package, as well as ALSA's ALSA API->OSS back-end interface, and set it up.
- At minimum, configure an installed OpenAL to use the OSS back-end, and preferably SDL, libao, and other wrapper libraries as well.
- Recognize the setting when installing a new application or wrapper library and configure that to use OSS as well.
- Do all the above in reverse when selecting ALSA instead.
Such a setup would allow users to easily switch between them if their sound card only worked with the one which wasn't the distribution's default. It would also easily allow users to objectively test which one works better for them if they care to, and desire to use the best possible setup they can. Users should be given this capability. I personally believe OSS is superior, but we should leave the choice up to the user if they don't like whichever is the system default.
Now I repeatedly hear the claim: "But, but, OSS was taken out of the Linux Kernel source, it's never going to be merged back in!"
Let's analyze that objectively. Does it matter what is included in the default Linux Kernel? Can we not use VirtualBox instead of KVM when KVM is part of the Linux Kernel and VirtualBox isn't? Can we not use KDE or GNOME when neither of them are part of the Linux Kernel?
What matters in the end is what the distributions support, not what's built in. Who cares what's built in? The only difference is that the Kernel developers themselves won't maintain anything not officially part of the Kernel, but that's precisely the job the various distributions fill, ensuring their Kernel modules and related packages work shortly after each new Kernel comes out.
Anyways, a few closing points.
I believe OSS is the superior solution over ALSA, although your mileage may vary. It'd be nice if OSS and ALSA just shared all their drivers, so there wouldn't be cases where one supports a particular sound card but the other doesn't.
OSS should get suspend support and anything else it lacks in comparison to ALSA, even if insignificant. Here's a hint: why doesn't Ubuntu hire the OSS author and have him make these last few cases friendlier for the end user? He is currently looking for a job. Also, throw some people at it to improve the existing volume control widgets to be friendlier with the new OSSv4, and maybe get stuff like HAL to recognize OSSv4 out of the box.
Problems should be fixed directly, not in a roundabout manner as is done with PulseAudio; that garbage needs to go. If users need remote sound (and few do), one should just be able to easily map /dev/dsp over NFS, and output everything to OSS that way, achieving network transparency at the file level as UNIX was designed for (everything is a file), instead of all these non-UNIX hacks in place today in regards to sound.
The distributions really need to get their act together. In recent times Draco Linux has come out, which is OSS only, and Arch Linux seems to treat OSSv4 as a full fledged citizen, giving end users the choice, although I'm told they're both bad in the ALSA compatibility department, not setting it up properly for the end user, and in the case of Arch Linux, requiring the user to modify the config files of each application/library that uses sound.
OSS is portable thanks to its OS abstraction API, making it more relevant to the UNIX world as a whole, unlike ALSA. FreeBSD however uses its own take on OSS to avoid the abstraction API, but it's still mostly compatible, and one can install the official OSSv4 on FreeBSD if they so desire.
Sound in Linux really doesn't have to be that sorry after all; the distributions just have to get their act together, and stop with all the finger pointing, propaganda, and FUD that is going around, which is only relevant to ancient versions of OSS, if not downright irrelevant or untrue. Let's stop the madness being perpetrated by the likes of Adobe, the PulseAudio propaganda machine, and whoever else is out there. Let's be objective and use the best solutions instead of settling for mediocrity or hack upon hack.
242 comments:
@insane coder:
Seems to me he's really just talking about estimating the sample rate in terms of a clock (for instance system timers) using a specific type of smoothing filter described here.
Clocks are always a bit off from their nominal rate, which is why I also do this in Gambatte.
@insanecoder: RDTSC/gettimeofday() is no good? why?
the issue is not precision, it's synchronization (especially over time).
neither the video clock nor the audio clock is running from the same crystal that runs the CPU clock or the system clock (gettimeofday() is bad in addition because it's not necessarily monotonic, though it requires NTP to make this nightmare come true :)
if you have to use a system timer for stuff like this, make sure it's CLOCK_MONOTONIC.
whenever you have multiple clocks in a design, you either have to make sure they all share the same base clock, or you have to use a DLL or equivalent technique to predict one from another. over time, both the cycle counter and/or system time will diverge from the video frame clock. depending on your system, the drift could be tiny, or substantial enough to show an actual sync error within the hour.
of course, this error might be perfectly acceptable in a particular instance. but from a design point of view, its really not the right way to do stuff.
@insane coder: >pipes don't have enough buffering to be useful for audio (limited to about 5kB on most kernels).
oops! i meant 4kB ... this is defined by a kernel constant - there is no runtime way to modify it (unless someone added this within the last year or two).
I'm not sure what you mean by that.
Assume 2 bytes per sample, stereo, 48k sample rate. that's 187.5kB per second. If you dump it into a pipe for some other application to read, you have to know how much buffering you need - it will be some non-zero interval of time before the other end of the pipe gets to read it. This all means that the pipe buffer represents about 21msec of stereo, 16bit audio. If you can't guarantee that the reader will start reading within that 21msec interval, you run out of space in the buffer. increase the bit depth, the sample rate, or the number of channels - the timing requirements get even more demanding. plus it's a bit silly copying the data twice across the kernel/user space boundary.
not really a suitable way to pass audio around. shared memory is much better (but more complex).
@insanecoder: i wasn't saying that you can't use read/write. you do need to be more careful about the semantics though, because compared to file i/o, the return values can be odd, especially with non-blocking mode. it seems as if you agree that the setup code is different, so this really just comes down to an "inner loop" of read/write. to which all i can say is "big deal" - ok, so i accept that given a unix file descriptor that points to something, you can sit in a loop and call read and/or write on it.
All modern OSs to my knowledge don't differentiate between shutdown(SHUT_RDWR) and close().
Let's hope they don't. POSIX makes it clear that, generally, the difference between close() and shutdown() is: close() closes the socket id for the process, but the connection is still open if another process shares this socket id. The connection stays open both for read and write, and sometimes this is very important. shutdown() breaks the connection for all processes sharing the socket id. (see http://www.developerweb.net/forum/archive/index.php/t-2940.html for more details and examples)
Hi dawhead.
I see what you're saying about pipes, but that's only an issue if the programs are bound by time constraints. Not every situation is like that, as in my example which encodes MP3s by piping to LAME.
>ok, so i accept that given a unix file descriptor that points to something, you can sit in a loop and call read and/or write on it.
And I would like that in ALSA too, except I never got ALSA working right without some whole buffering system on top of it, which makes it annoying, not to mention there's no file descriptor or write(). But it's not that big of a deal. C++ inheritance can deal with it.
>Let's hope they don't. POSIX makes it clear that, generally, the difference between close() and shutdown() is: close() closes the socket id for the process, but the connection is still open if another process shares this socket id. The connection stays open both for read and write, and sometimes this is very important.
Ah yes, you're referring to fork()'ing having the children inherit open descriptors from their parent process. Yes, in those situations you'd want to close in the child (especially before calling exec()), but not have it affect the parent.
However the other way around, when you only have a single process, instead of calling shutdown(SHUT_RDWR), calling close() should have the same effect, and to my knowledge it does in modern UNIXes.
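To illustrate the distinction being discussed, a small sketch (illustrative only): close() merely drops one process's reference to a shared socket, while shutdown() tears down the connection itself for every process holding it.

```c
/* close() vs. shutdown() on a connected socket shared after fork(). */
#include <sys/socket.h>
#include <unistd.h>

void child_before_exec(int sock)
{
    /* Drops only this process's reference; the parent's copy of the
     * descriptor keeps the connection alive. */
    close(sock);
}

void tear_down_for_everyone(int sock)
{
    /* Acts on the connection itself: even processes that still hold the
     * descriptor can no longer send or receive on it. */
    shutdown(sock, SHUT_RDWR);
    close(sock);  /* still needed to release this process's descriptor */
}
```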
@insanecoder: I see what you're saying about pipes, but that's only an issue if the programs are bound by time constraints. Not every situation is like that, as in my example which encodes MP3s by piping to LAME.
a very fair point. i do tend to be obsessed with realtime audio, sometimes to my detriment. thanks for pointing that out.
So...the executive summary seems to be:
* OSS4 works OK out of the box for many simple end-user cases, if you can get the damn box open. That's harder than it has to be.
* ALSA works better, if you and whoever developed your application can figure out how to configure and use it. That's harder than it has to be.
* Neither one supports a superset of the other's audio hardware, so if you're a member of the symmetric difference set, you don't have a choice--do the hard work, or enjoy the silence.
* Neither one supports use cases that a user-space sound daemon (and only something of equivalent or greater implementation cost to a user-space sound daemon) can solve. That's why e.g. gstreamer can't replace pulseaudio (unless someone makes a remote gstreamer network protocol), even if gstreamer does a better job of mangling audio than pulse ever will.
Enough summary.
OSS4 is not going to get into the mainline kernel (or any kernel I run) any time soon, at the very least because of its disruptive effect on unrelated subsystems, especially suspend/resume and floating point. Seriously, it's 2009. Get the power management stuff working already. Even USB-storage devices have working suspend and resume now.
It looks like the in-kernel floating point is covered since it can be turned off. I'm not willing to trust my data and everything else my kernel ever touches to "it's got potentially serious, well understood problems, but it's worked well for years," and neither should you.
The arguments back and forth about scheduling latency and where the mixing code belongs are bogus. It makes less than one PCM sample interval's difference whether the samples are mixed in the kernel or in user space. If you put the mixing in the kernel, you add latency to the kernel; if you put the mixing in user space, you'd better tell the kernel to schedule the user space mixer process correctly, and you'd better fix the kernel's realtime process scheduling. Fail to do one of these things and either sound latency will suck, or latency of everything-but-sound will.
That said, process scheduling latency in the Linux kernel has severely regressed over the last year, if the realtime talks at LinuxSymposium this year are to be believed.
pulseaudio really does give you less latency than OSS4 or _one_ of the ALSA API's --specifically, the one that looks like OSS4's low-latency and open/read mode (if you've been reading this far, you should know that there are actually several ALSA API's). Pulse gives you the ability to renege on audio you previously sent toward the sound hardware. This means you can buffer a few seconds ahead (giving you robustness against underrun glitches and reducing CPU/interrupt load) and then change your mind about what's still in the buffer but not yet played. This gives you the ability to modify an audio stream starting as early as the next output sample (if your hardware supports that--admittedly, much hardware doesn't, so you'll get some minimal latency that won't go away no matter what software you use). This is better than the OSS4 realtime stuff or the ALSA period-based APIs can do. You can't make the next PCM sample happen any faster without changing the sample rate, so when you've reduced the latency to less than one PCM sample interval, you're done.
Now, it is correct to say that adding pulseaudio underneath an application that is already coded using the OSS4-style "realtime" algorithm (whether it uses OSS4 or ALSA is irrelevant) is not going to have less latency. It can't. The only way to fix the latency is to rip out the OSS-style realtime code from the application, and use the ALSA or Pulse API's instead.
Of course, the ALSA-vs-OSS4 discussion is completely moot for me. All my use cases involve Bluetooth audio, and as far as I know OSS4 provides no solution for that at all.
I needed to drop a big 'Thank you' in this post. It helped me a lot, and it seems like a nice contribution overall to the community - I am one person who favours simplicity, and you certainly show that this is the way to go. Thx.
Hey has anyone got "skype" workin' yet in Linux ?
The audio mess in Linux is "sorrier" than it ever was JACK !
-just ask Lenard
:)
The Skype issues are not "Linux audio issues"; they are Skype issues and you should go pester Skype to fix them.
That said, the new Skype 2.1 for Linux actually supports PulseAudio and works great on Fedora 11 - finally!
The Skype folks finally got off their asses and released a new Skype for Linux.
Thanks, great article, just what I wanted to see, images of 'sound stacks'. What with pulse, gstreamer, now OpenAL all getting into the act, it is very confusing
and only a picture will do.
It's a wonder the original signal arrives at its destination intact, so many places to drop and/or mangle it lol
sixerjman: That's the point, it rarely DOES arrive fully intact when using all these layers.
I would like to point out that, although, according to dawhead/Paul Davis, OSS's realtime capabilities are inferior to ALSA (or, better, the JACK+ALSA combo), and that JACK's design makes it nearly impossible to make things wrong, zynaddsubfx works much better in OSS mode than in JACK mode, and it has quite strict realtime demands. It is zynaddsubfx's fault, of course, but still, although I perfectly know that JACK is a very good thing, and that it generally works very well (well, after being properly configured, which isn't always very easy, especially with sub-par hardware, but then again, one might argue that you're not even supposed to do low-latency stuff with integrated soundcards, although this is debatable - think about live usage), but still, an OSS driver happens to be, in this case, much more reliable than a JACK driver.
The only problem is that when I'm using zyn, I can't output anything else, which is quite a nuisance.
Anyway, although I am ONLY a user (with programming interests, although very, very, limited), I'd like to point out that I've noticed kind of a wrong attitude: "design over substance". Which is better in the long run, but it's detrimental for the users having to cope with imperfect implementations of very good ideas. Unix has always been about "simplicity" and the whole "worse is better" philosophy. Yeah, yeah, you can argue that it actually might have been not the best choice in the long run, but that's because of lack of coordination and direction.
Is it so hard to have things that "just work", at the moment, for everyday usage, while a team design a better alternative behind the scenes? And only switching to the better designed alternative when it's sufficiently mature for everybody to actually benefit from it? Who cares if the needs of the few aren't properly provided for, the common userbase needs working audio, the developers of general purpose audio software (such as media players or whatever) don't need the "hardcore" features of ALSA or JACK (that's why by the way JACK isn't, and SHOULDN'T, be the standard for audio. Stick to doing one thing, and do it well), who cares if the file metaphor is limited for audio, who cares if me (a musician) won't be able to use MIDI that well (I'd like to point out that general purpose usage of MIDI isn't provided by ALSA - or is it? Maybe things have changed - meaning that to just listen to a MIDI file you need Timidity, which makes me shudder. Or Fluidsynth, which I have never used but I think it's overcomplicated for the common user), after all, who needs realtime audio will install JACK and ALSA and probably cope with audio dropouts and the such in general purpose audio (on my laptop, for example, the login sound of Ubuntu is always a pathetic garble, and listening to music results in dropouts on heavy load - without using PulseAudio, mind you! I tried it and it was crap).
--- continued
But then again, I can imagine why LAD, dawhead, etc. prefer sticking by ALSA :P
1) If OSS becomes an alternative, the efforts on sound (both on the part of the developers and the distributions) in Linux will become diluted, in order to provide support for OSS or ALSA depending on which side you're on :D
2) If OSS eventually "wins the battle", which may well happen if only it becomes more Linux-friendly, dropping those features that make the Linux advocates cringe, and eventually, oh, I don't know, actually turning out to be easier to maintain, easier for the developers to produce general purpose audio with, etc... well, that would mean for JACK and ALSA being relegated to a niche (JACK is already a niche, but it works best with ALSA, which at the moment is "mainstream"), possibly resulting in development for ALSA becoming even slower, and proving detrimental for realtime audio in the long run. But then again, this may be avoided if there were true coordination and direction in the Linux world. By the way, if this were achieved, it would also be easier, IMHO, to get sponsoring from commercial entities, because it would mean having a robust developing scheme which would appeal more to companies, and in this scenario both general-purpose OSS and realtime JACK + ALSA could coexist.
But this is OT.
I would like to point out that, although, according to dawhead/Paul Davis, OSS's realtime capabilities are inferior to ALSA (or, better, the JACK+ALSA combo), and that JACK's design makes it nearly impossible to make things wrong,
I have never made that statement. What I have said is that a pull-style ("callback") API like the one used by ASIO, CoreAudio, PortAudio and JACK (among others) makes it more likely that application developers will get the design of their programs "right" in the sense that they work when used in low latency "realtime" situations. It doesn't stop someone from writing an application that still gets this wrong, it just nudges them in the right direction.
[ re: zynaddsubfx ] ... the OSS driver happens to be, in this case, much more reliable than a JACK driver.
no, when zyn is used with OSS it uses substantially more buffering than when used with JACK, AND OSS doesn't tell zyn that it didn't meet deadlines, AND neither ALSA nor OSS actually do anything about not meeting deadlines (ALSA can, but is generally not asked to by applications.) With JACK, tracking such failures is part of the design. zyn has recently had its JACK "interface" reimplemented in the yoshimi project, and it's finally capable of almost meeting RT deadlines, which means that changing patches, loading instruments etc. no longer causes audio glitches. Was that because JACK was broken? No, it was because zyn's author, although capable of producing a beautiful sounding instrument, didn't really understand RT audio programming. thankfully, he made his software open source and others have been able to step in and clean these issues up (more or less).
Is it so hard to have things that "just work", at the moment, for everyday usage
Given the absolute failure of anyone to (a) define what "things just working" actually means (I don't mean handwaving descriptions) and (b) implement a full audio subsystem capable of handling ALL the needs of users, from those who want desktop bleeps and media players doing their thing, to portable device users (oh no, there's hardly any of them!) who care about battery life, to musicians and audio engineers who need very low latency while monitoring ... yes, it's pretty damn hard. And uninformed speculation by people who self-admittedly don't understand the details of this sort of thing doesn't help. Not one bit.
2) If OSS eventually "wins the battle", [ .... ] well, that would mean for JACK and ALSA being relegated to a niche (JACK is already a niche, but it works best with ALSA, which at the moment is "mainstream"),
When you know this little, it's generally best to either sound more speculative or shut up. JACK has had OSS support for years, and it works more or less as well as it does with ALSA.
By the way, if this were achieved, it would also be easier, IMHO, to get sponsoring from commercial entities, because it would mean having a robust developing scheme which would appeal more to companies, and in this scenario both general-purpose OSS and realtime JACK + ALSA could coexist.
I don't mean to be rude, but comments like this show that you really don't understand any of the basic, critical concepts about audio on Linux, what OSS and ALSA actually are, how they interact with applications, how users interact with them and so on. Neither ALSA nor OSS (and certainly not JACK) are, by themselves, solutions to "the problem" that people keep talking about. If someone waved a magic wand tomorrow and the linux kernel reverted to the use of the OSS driver set, and all applications magically got rewritten to use the OSS ("unix") API, the audio situation for users would not have improved in any way whatsoever. Almost none of the concerns of users that have been voiced in this blog, its comments or elsewhere would have been addressed. We'd have a single audio API ... hmm, wait a minute, didn't PortAudio, OpenAL and others exist before ALSA came along? Hmm. Well, anyway, we'd sort of have a unified audio API except that it wouldn't be, AND we'd still have all the basic design concerns that led to ALSA in the first place. Score: bullshit 1, users 0.
So, if you want to put energy into this, please focus on the right parts of the stack.
dawhead:
Thing is, you don't have to cater just for the needs of the developers or for the power-users.
I have been a Linux user since 2001, I'm not a programmer (although I'd really like to delve more into the subject, as I'd really like to contribute). My first distro was Debian (I think it was Woody... 3.0), and I remember that for me, a Linux newbie, setting X to work with my video card (a Matrox G400 Millennium, supposedly one of the better supported cards) was a plain headache. Only Mandrake (as it was called at the time) or Redhat would work, although usually with impaired functionality (regarding color depth, resolution, refresh rate. I've seen some of these problems even on not-so-ancient Ubuntu releases!) Fast forward about 4-5 years. Now most of the things work fine, or much better than before.
But I still know that audio sucks. Which is a pity. I've tried installing OSS4, and it does seem to work better (although I need to test it a bit more, of course). Other users have reported this as well.
You might say, it's a configuration problem. But regular users DON'T CARE! Do you at least understand this? I don't want to risk sounding like one of those Macheads who repeat "I need to be productive and concentrate on my work" as a mantra, but they do have a point. Developers don't have to cater for other developers. They only do so to ensure that the end-user gets what he wants.
All I know is that in my nearly 9 years of experience with Linux, audio has been a pain. I really want to support Linux (and in my music stuff, I have also tried using Ardour, which I know that has loads of potential, but yeah, then again, the lack of documentation is a bit unnerving) and zyn has been my main synth.
As I repeat, I don't really know enough to comment on OSS or ALSA, what I was saying, or actually trying to say, is that if there is someone at least knowledgeable among the audio developers saying that the advantages of ALSA over OSS aren't strong enough to counter the strengths of OSS, maybe they do deserve some attention, instead of being dismissive.
Oh, and please DO read the smileys. All the second part of my post was kind of tongue-in-cheek :D (although I imagine you'd see a change to OSS as detrimental for your needs).
Thing is, you don't have to cater just for the needs of the developers or for the power-users.
I never said anything about power users or developers. Let me tell you something I learned at the last Linux Plumbers Conference. In another year, the most common linux audio platform by far will be portable computing devices aka smartphones. The requirements of the users of these devices don't really look anything like those of a desktop user. They don't look like those of a musician or audio engineer. Somehow, however, there needs to be an audio subsystem on Linux that can support all 3 of these happy go lucky end users. Where do you imagine most of the money is being spent on improving audio support right now? I'll give you a hint: it's NOT on the desktop. We're fortunate, however, that ALSA and PulseAudio appear to be capable of providing what is needed to at least 2 of these constituencies (low power portables and desktop users), and ALSA is capable of providing the kernel side of what the 3rd group needs.
You might say, it's a configuration problem. But regular users DON'T CARE! Do you at least understand this?
Of course I understand this. It's very easy to solve configuration problems when you fail to solve a half dozen of the related issues. Given that OSS uses a design that is unacceptable to the kernel team, fails to solve a number of application level API issues that people wanted (and to some extent, still want) solved, and fundamentally brings nothing to the table that actually deals with the problems we have to address, maybe you can grasp why the people who ACTUALLY WORK on the audio infrastructure don't give it very much thought. OK, so installing it solved some (all?) of your problems. Great. It doesn't address the problems of a lot of the users that you claim to care about.
All I know is that in my nearly 9 years of experience with Linux, audio has been a pain.
Nobody is denying this.
As I repeat, I don't really know enough to comment on OSS or ALSA, what I was saying, or actually trying to say, is that if there is someone at least knowledgeable among the audio developers saying that the advantages of ALSA over OSS aren't strong enough to counter the strengths of OSS, maybe they do deserve some attention, instead of being dismissive.
Several audio developers have commented here and elsewhere. We've discussed at HUGE length the benefits of ALSA over OSS, at the same time as admitting some of the hairiness of ALSA. We stated over and over that the strengths of OSS (a trivial API, in kernel s/w mixing) don't outweigh its disadvantages (a kernel API, in kernel s/w mixing). If I, or anyone else is being dismissive of OSS, its not because we don't know its strengths. Its because its strengths do not overcome its limitations and because we're tired of arguing with people for 8 years who think that because a particular kernel driver tweaks the registers on their consumer audio interface a little better than ALSA that we should toss out the entire, emerging audio stack on Linux.
OK, so installing it solved some (all?) of your problems. Great. It doesn't address the problems of a lot of the users that you claim to care about.
But what I'm saying is, at the moment, isn't maybe OSS the best thing available for the needs of at least part of the userbase? It's actually a question. Some developers here (even those not in favour of OSS) have hinted that for trivial audio, OSS may be, if not a better choice, at least a easier one at the moment. The very usage of JACK requires JACK-enabled clients*, which means that JACK usage is pretty much exclusive. (Although it is true that it's easy to stop JACK).
* : or maybe it doesn't, maybe there actually is a simple and hassle-free wrapper for GStreamer or whatever, which most of the available software isn't, I haven't looked into it.
But what I'm saying is, at the moment, isn't maybe OSS the best thing available for the needs of at least part of the userbase?
So your proposal on how to fix things for part of the userbase is to continue on with the problem that this original blog post was about: the crazy proliferation of audio APIs.
It's actually a question. Some developers here (even those not in favour of OSS) have hinted that for trivial audio, OSS may be, if not a better choice, at least a easier one at the moment.
OSS is not "better" for trivial audio. It has a simpler API. This last week, I've been doing some audio development on OS X, which is frequently acknowledged as a platform with some immensely excellent audio applications and all round good media support. If a linux developer has a problem with the ALSA API, then god help them if they ever try to develop on OS X, where, to do the job "right", they would have to deal with so much stuff that they might just blow their mind. Devices being unplugged? Playback or capture properties of a device suddenly changing from underneath you? Sample rate changing because another application started up? The idea that somehow ALSA is "too complex" for those poor liddle widdle people who just want to write an audio application ... it's pathetic. You want a simple audio API? Use a library. Don't try to change the entire kernel stack just because you're not willing to deal with the full complexity of the task.
I think the biggest issue is not complexity alone. It's confusion, and lack of documentation.
Anyway, if developing in OSX is complex, although its libraries and its sound architecture are superior, well... isn't Linux about being an alternative? (Of course it is, otherwise there wouldn't be a reason to use it and to develop for it, aside from "philosophy" or price-related reasons). I may not be a programmer, but I do have a scientific education (I'm a medicine student) and I understand differences in methodology and the benefits in having a simpler workflow. The easier it is, the more software might eventually be developed, which may also be less prone to bugs, but of course you already know this.
I may not be a programmer, but I do have a scientific education (I'm a medicine student) and I understand differences in methodology and the benefits in having a simpler workflow.
If you want simple, use the Pulseaudio simple API. It'll work in simple cases, and quite a few non-simple ones too. Just don't be surprised when it does something you didn't expect because you didn't read the rest of the manual (e.g. it defaults to about 2 seconds latency when recording--because that's the most efficient recording mode, and you didn't specify what latency you wanted).
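For what it's worth, here is a sketch of what the PulseAudio simple API looks like when you do specify the latency you want; the 50 ms figure is illustrative, not a recommendation.

```c
/* PulseAudio "simple" API sketch with an explicit buffer/latency request.
 * Compile with -lpulse-simple -lpulse. */
#include <pulse/simple.h>
#include <pulse/error.h>
#include <string.h>

int main(void)
{
    pa_sample_spec ss = { .format = PA_SAMPLE_S16LE, .rate = 44100, .channels = 2 };

    pa_buffer_attr attr;
    memset(&attr, 0xff, sizeof attr);                 /* (uint32_t)-1 = server default */
    attr.tlength = pa_usec_to_bytes(50 * 1000, &ss);  /* ask for roughly 50 ms */

    int err;
    pa_simple *s = pa_simple_new(NULL, "demo", PA_STREAM_PLAYBACK, NULL,
                                 "playback", &ss, NULL, &attr, &err);
    if (!s)
        return 1;

    static short silence[2 * 44100];                  /* one second of silence */
    pa_simple_write(s, silence, sizeof silence, &err);
    pa_simple_drain(s, &err);
    pa_simple_free(s);
    return 0;
}
```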
Don't whine to the larger Linux audio community about some version of Pulse you tried being crap because your distro of choice did a half-assed job of hacking a snapshot of Pulse into the system and didn't bother debugging it or the applications that depended on it before release. That's really your problem and Ubuntu's problem, not Linux's, ALSA's, or for that matter even Pulseaudio's problem. Other distros waited for the bugs to be fixed and integrated before flipping the switch on the defaults.
Nuh-uh!
It doesn't work that way!
The average user doesn't give a second chance. The average user doesn't care if the fault is in the distro, in the software itself, etc.
Thing is, there may be thousands of distros, but Ubuntu is, in a way, the most representative of them all (at least nowadays, don't tell me there are Fedora, Slackware, Debian, or whatever, because the average user _will install Ubuntu_). _I_ know that PulseAudio was probably badly implemented, and will probably give it a second try as soon as I read widespread comments about it being stable enough today, but as in music, if the single is crap, why should you buy the album? You might say "but the single is no good, you should listen to the album". Well, why waste money or time to go to the shop, listen to it and THEN decide to buy it? It may be the record label's fault, it may be the producer's fault, but who cares. Most users won't be so understanding. Most users are prejudiced, want everything straightaway, and working out of the box.
If your answer is "well, then we don't need that kind of users" you're probably a fool, since Linux needs all kinds of users, these ones as well (by the way, the absolute majority, most people complain because they can't even find the official MSN client).
(By the way. the problems I had with PulseAudio involved crashes, having to hard-reset, or at least having to terminate the X server and login again. Which, I may say, is not so good).
OK, so you're trying to compel me to fix Ubuntu problems for Ubuntu users? The Ubuntu community must fix Ubuntu's problems. Why you'd think anyone else should (or can) fix the problems in Ubuntu defies my understanding.
I'd accept a contract that involved contributing to Ubuntu, but otherwise I wouldn't touch it. It seems that the first thing I have to do whenever I start using Ubuntu is fix stuff that wasn't broken in the first place in other distros. Not only minor problems (dash anyone?), but serious issues that should have stopped the release process dead in its tracks.
I actually had to go out of my way this year to do unpaid work to prevent serious, data-eating bugs in Ubuntu from being propagated to the distro I use (literally, I had to have a patch reverted that an Ubuntu community member extracted from Ubuntu code and submitted to my favorite distro as a feature request). This is the quality of contribution that I have come to expect from Ubuntu over the years. (To be fair, some of the patches are good, but you have to check each and every one.)
I really have no sympathy for people who continue to suffer from Ubuntu bugs after being told over and over again to switch to one of many alternatives that are less broken for their specific workloads and use cases (yes, even the specific use case of "working out-of-the-box"). They're as bad as people who continually complain about life on Windows, but continue to use it themselves and refuse to switch to anything better. None of us can help them if they won't help themselves.
At the end of the day that sort of issue is why I use the distros that I use: they fixed all these bugs (or never introduced them in the first place), so I don't have to. This means I'm able to work on problems that have not already been solved elsewhere in the Linux community, which is a whole lot more productive (not to mention lucrative, since I'm paid to do this sort of thing) than helping Ubuntu catch up all day.
I should point out that many have tried to fix Ubuntu (and not just the audio stack, but Ubuntu's other problems as well). The lack of success so far is due to many overlapping technical and political issues, but lack of reasonable effort from outside of Ubuntu is not one of them.
We all have our own end-users, and we do care about their experiences, and we do give our own end-users a good experience (or at least a better one than they have elsewhere--otherwise, they wouldn't be our end-users for long). We have our own projects with our own deadlines. We have fixed every bug we know of on our own critical paths, and we have published our results. The best support I can offer to Ubuntu at this point (other than all of the above) is to ask them to try harder to keep up, please.
But bear with me for a moment. Seen from the perspective of a Linux user for 9 years, I think that unfortunately, for the sake of Desktop Linux, the Linux community (users and devs alike) should just "swallow the bitter pill" as we say in Italy and support ONE distribution. Or maybe just two or three of them.
You do realize that only Ubuntu, today, has the visibility and the support (not only from Canonical itself, but from the hundreds of forums, books, e-books, blogs, etc.) that make it the only obvious choice for a first-user, right?
I'd switch to, oh, I don't know, openSuSE or Debian or Slackware or Fedora, but in my years of trying Linux distros (even wasting days of my time) the only distro that ever worked neatly has been Ubuntu. And SuSE, but SuSE isn't/wasn't as well supported. Debian is rock solid, but historically not very easy for the average user (although they tell me that things have actually changed), and the stable tree is sometimes years old (and testing isn't too bleeding edge anyway), although this may be a positive side to it (Ubuntu has shipped beta versions of even Firefox - although quite stable, but still, beta! What kind of an impression is it going to make?! I actually think it's insane). Fedora never used to be slick in my experience, but then again, things may have changed. And I think that the Cathedral vs Bazaar model doesn't work for Desktop Linux anymore, since there is sort of a hybrid situation, where there is one "Cathedral" (Ubuntu) and several "Bazaars" (I know it may be wrong or simplistic, but try to understand what I'm trying to say) which offer the _same_ thing, only in different packaging. The difference now is more like "big chain vs. local shop", where Ubuntu offers you payment by instalments, additional warranty for free and sale prices (although there might be a catch) while the small shop struggles to survive. More distributions have been useful in the past, when there were many different alternatives which basically competed with each other on the same level (and each offering some strengths and weaknesses), not now that there is a bigger, easier to use, more marketed, better funded distribution.
But, going back IT, there really has to be a working solution. Me being basically a layman, I am not interested in API strengths or elegant design (although I can understand the need for them), what I'm saying is that today, getting audio to work flawlessly is much harder than on Windows or Mac OSX (at least, regarding general purpose usage, JACK is a different beast altogether, and I feel somewhat more in control when using it compared to ASIO or CoreAudio), and the thing is, _years_ have passed by with unresolved issues. MIDI playback has been an issue since I first installed Linux, and it's been 9 years of not having it (I repeat, I vehemently refuse having anything to do with Timidity). Audio playback doesn't feel smooth. I have to be then extra-careful when I use some OSS-only apps (such as zyn) because if I use them while another app is using the audio device, the system literally fucks up. It used to crash heavily before, now it's an even more annoying "grinding to a near-halt", with me actually being able to fix everything by opening a virtual terminal, waiting for it to appear, login, wait for the password dialog to appear, enter the password, waiting about 40 seconds for the command line, ps aux, etc. etc. and it's absolutely frustrating (yes, I have the newest ALSA installed, as I said, it's kind of better, it doesn't fuck up totally, now I can actually recover the OS without having to hard-reset :D ).
And for me, seeing that, yeah, "Linux audio is a mess, we know", but nothing has changed over the years, is frustrating, especially because I would even like to contribute, but I am not a programmer, and nobody seems to care enough about these shortcomings. Meanwhile, over the last years I have seen SO MANY projects, most of them even promising, start up, churn out a couple of releases, and then it's tumbleweeds (or whatever they're called) rolling all over the place.
Hi dawhead.
>If someone waved a magic wand tomorrow and the linux kernel reverted to the use of the OSS driver set, and all applications magically got rewritten to use the OSS ("unix") API, the audio situation for users would not have improved in any way whatsoever
I disagree with you.
I waved that magic wand over my install, switched everything to OSS, and I'm getting better audio support out of my system. For everyone who comes to me with a machine that has audio problems in Linux, I install OSS and all their problems are solved. Same with people who use software I support.
Clearly this magic wand is improving things - at least in the cases I deal with, anyway.
>Given that OSS uses a design that is unacceptable to the kernel team, fails to solve a number of application level API issues that people wanted (and to some extent, still want) solved
Granted, it's quite possible that it works for me but doesn't work for everyone else.
>and fundamentally brings nothing to the table that actually deals with the problems we have to address
It deals with all the problems I have to address. Maybe not in your workloads, but it does in mine.
>OK, so installing it solved some (all?) of your problems. Great. It doesn't address the problems of a lot of the users that you claim to care about.
I care about my users. When my users tell me that software I provide support for doesn't sound right with ALSA, or the sound is lagged, or they hit other issues, I have them install OSS. Virtually every time, problem solved. So it does indeed solve the problems of users I care about. I don't necessarily know if it's any worse for users that I don't care about.
>If I, or anyone else is being dismissive of OSS, its not because we don't know its strengths. Its because its strengths do not overcome its limitations and because we're tired of arguing with people for 8 years who think that because a particular kernel driver tweaks the registers on their consumer audio interface a little better than ALSA that we should toss out the entire, emerging audio stack on Linux.
I'm looking at this objectively. I've run ALSA on a dozen different sound cards, with problems on each of them. Switching to OSS v4 on the same sound card fixes the problem. Same goes for users I speak to. Does this mean every ALSA driver is buggy except for obscure ones which neither I nor anyone I speak to uses?
Then there's working with the API. I tried my best with it, yet users still complain about problems. So I'm told I'm probably using the API wrong, so instead I should use a wrapper. So I go and use a wrapper, which has the same problems, at which point I'm told the wrapper is programmed wrong, so I should go use the ALSA API directly or a different wrapper. So I start over again and wash, rinse, and repeat.
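For the record, here is roughly what the simplest "use the ALSA API directly" route looks like, just so readers can see what is involved. This is a minimal sketch, not how any particular application does it: the "default" device, 48 kHz stereo, signed 16-bit samples, and the shortcut snd_pcm_set_params() call are all assumptions I've made for the example (most real programs go through the full hw_params/sw_params negotiation instead).

    /* Minimal ALSA playback sketch: one second of silence to the "default"
       device. Error handling trimmed to the bare minimum.
       Build with: gcc play.c -lasound */
    #include <alsa/asoundlib.h>
    #include <string.h>

    int main(void)
    {
        static short buf[48000 * 2];     /* one second of interleaved stereo */
        snd_pcm_t *pcm;
        snd_pcm_sframes_t written;

        memset(buf, 0, sizeof(buf));

        if (snd_pcm_open(&pcm, "default", SND_PCM_STREAM_PLAYBACK, 0) < 0)
            return 1;

        if (snd_pcm_set_params(pcm, SND_PCM_FORMAT_S16_LE,
                               SND_PCM_ACCESS_RW_INTERLEAVED,
                               2, 48000, 1, 500000) < 0)  /* 0.5 s latency */
            return 1;

        written = snd_pcm_writei(pcm, buf, 48000);   /* count is in frames */
        if (written < 0)
            snd_pcm_recover(pcm, written, 0);        /* underrun/suspend    */

        snd_pcm_drain(pcm);
        snd_pcm_close(pcm);
        return 0;
    }

Even this "easy" path leaves device naming, buffer sizing, and underrun recovery up to the application, which is exactly where the wrappers and the programs built on them tend to disagree.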
Instead of saying it's just one broken driver, or that the API is being used incorrectly, or other excuses, let's just admit there are serious problems in certain use cases. Either they get fixed, or users with these use cases are forced to use something else. Building a larger audio stack on top of underpinnings which aren't working for everyone isn't helping.
>So your proposal on how to fix the things for part of the userbase is to continue on with the problem that this original blog post was about: the crazy proliferation of audio APIs
That's not what the original blog post was about.
It was about some back-ends or APIs working better for some people than others, and about distributions needing to provide easy ways to switch back and forth between them, so an end user can easily use whichever works best on his machine for his use cases.
You are damn right; just fix the lack of sound / sound lock-ups / other sound issues in Ubuntu, Mint, and the other Linux distributions, and I will be happy as hell.
This is a critical point as far as a distribution is concerned.
@Reece Dunn, OSSv4 does not restrict access to /dev/dsp. It has full software mixing in the kernel, better than ALSA's; that much is, for the most part, universally accepted.
Perhaps you were talking about "exclusive access", like how jackd deals with OSS. Exclusive access is not normal for standard-latency audio applications, and regardless, it can still be disabled in the OSS configuration.
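For reference, this is the classic open/ioctl/write pattern that OSS applications use on /dev/dsp; with OSSv4's in-kernel mixer (vmix) enabled, several processes can run something like this at the same time. It's only a minimal sketch: the rate, the format, and the missing error checks on the ioctls are choices I've made for the example.

    /* Minimal OSS playback sketch: open/ioctl/write on /dev/dsp. */
    #include <sys/soundcard.h>
    #include <sys/ioctl.h>
    #include <fcntl.h>
    #include <unistd.h>
    #include <string.h>

    int main(void)
    {
        static short buf[48000 * 2];     /* one second of silent stereo audio */
        int fmt = AFMT_S16_LE, channels = 2, rate = 48000;

        int fd = open("/dev/dsp", O_WRONLY);
        if (fd < 0)
            return 1;

        ioctl(fd, SNDCTL_DSP_SETFMT, &fmt);        /* sample format            */
        ioctl(fd, SNDCTL_DSP_CHANNELS, &channels); /* channel count            */
        ioctl(fd, SNDCTL_DSP_SPEED, &rate);        /* sample rate; the driver  */
                                                   /* may adjust these values  */
        memset(buf, 0, sizeof(buf));
        write(fd, buf, sizeof(buf));               /* blocks like any file I/O */

        close(fd);
        return 0;
    }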
Your garbage FUD is about OSSv3, which, as the author stated, is simply not relevant and has nothing to do with "today".
Furthermore, we do not say "Linux has bad ATAPI support" just because that was once true circa '04.
OSSv3 is deprecated; by default, the abbreviation OSS refers to OSSv4 (or whatever the latest incarnation is).
I want to create a model for a musical instrument. I need something that is fast, clean, mean and responsive with simple APIs. I think OSS4 will fit the bill.
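The part I expect to matter most for responsiveness is the fragment (buffer) size. Here is a rough sketch of how I'd ask for small fragments using the classic SNDCTL_DSP_SETFRAGMENT hint; the helper name and the specific numbers are just mine for the example, and the driver is free to adjust them.

    /* Sketch: asking OSS for small fragments so a software instrument stays
       responsive. The hint has to be issued right after open(), before the
       format/rate ioctls. (4 << 16) | 9 requests 4 fragments of 2^9 = 512
       bytes each, roughly 2.7 ms per fragment at 48 kHz stereo 16-bit. */
    #include <sys/soundcard.h>
    #include <sys/ioctl.h>
    #include <fcntl.h>

    int open_low_latency_dsp(void)
    {
        int fd = open("/dev/dsp", O_WRONLY);
        if (fd < 0)
            return -1;

        int frag = (4 << 16) | 9;                  /* 4 fragments x 512 bytes */
        ioctl(fd, SNDCTL_DSP_SETFRAGMENT, &frag);  /* a hint, not a guarantee */

        /* ...then set format, channels, and rate as usual (SNDCTL_DSP_SETFMT,
           SNDCTL_DSP_CHANNELS, SNDCTL_DSP_SPEED) and write audio in
           fragment-sized chunks. */
        return fd;
    }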
Has the state of sound in Linux improved at all in the three years since this blog post was written? Maybe it's time for an update. :)
Would love to see an update of this post. What has happened since 2009?
Hi.
Thanks for this post. At last I've got clean sound, by following your suggestion (leave ALSA and install OSS instead).
I can't believe we are in 2013 and I have to go back to an *old* solution, supposedly the *wrong* path, to fix sound issues (poor sound quality) in my brand new Debian squeeze.
Something is wrong with Linux Desktop.
I'm a new Linux sysadmin (coming from a strong Solaris sysadmin background) and I'm as impressed by Linux in a professional context (it may not be the most powerful OS in some situations, but the ROI is really good, I think) as I'm disappointed by it on the desktop. It's such a pain to do what should be basic. I don't want to troll here, but I do think something is really wrong with all of that. Playing your mp3 songs should be straightforward and not require root, investigations, guesses, or whatever!