Tuesday, September 13, 2011

Why Linux Sucks on Desktops and How to Save Your Ass

In terms of pure resource usage, performance, stability and security, Linux wins. Pick any distro (Debian, Scientific, openSUSE, Mint, Arch, PCLinuxOS...) and compare it with Windows 7, and you'll see what I mean. Discard the X desktop and the stuff above it; the pure Linux shell is perhaps the most powerful tool for computing. I've never been a Windows guy. But quite often I come back to it at odd times, when I'm almost fed up with the so-called direction (or lack of it) in the Linux world. So what are those minor glitches that sour the desktop experience?

Here's why Desktop Linux Sucks

Desktop Graphics Drivers

If you have a plain Intel IGP on your mobo and you are not a gamer, you'll almost always have a smoother experience. Similarly, you'll have a painless time with older Realtek, Atheros, Huawei and many other devices. But if your hardware is new, shiny and non-standard (as far as Linux driver support goes), you're stuck. Take, for example, Nvidia Optimus, the technology for switching between the IGP and discrete graphics. It's been more than two years, yet the graphics stack is half-baked. The ATI/AMD side of the story, especially concerning the recent Fusion series APUs, is grimmer. Though AMD was quite vocal a year back about open-source drivers for its Fusion APUs, the Linux driver support is lame at best at the time of writing this post.

I have an Asus 1215B EEE PC based on the Fusion platform, sporting a C-50 APU (AMD Ontario CPU + Radeon 6250 GPU). Windows 7 runs quite well on it and offers a thrilling graphics experience powered by UVD and DirectX 11. Linux? Fusion graphics is muddy with multiple wrappers, drivers and methods. When kernel 2.6.38 flaunted Fusion APU support through the Gallium drivers, it didn't disclose that this was limited to merely decent graphics. You can't expect anything beyond that; forget about UVD and full 3D acceleration for the next couple of years. For a better graphics experience you're left with distro-specific fglrx drivers, XvBA/VA-API wrappers and matching xorg pieces. But the distro-specific drivers are generally dated, so I pulled the latest Catalyst driver sources from AMD and compiled them for Debian Squeeze. It was a good go, but the graphics performance was still inferior to Windows 7's.

Correct the Basics

The Blue Screen of Death is history. Modern Windows (XP onwards) ensures you land at least in a basic VGA mode if the install disk lacks proper display drivers; then you're ready to install the proprietary drivers. But Linux graphics problems sometimes slam you with a black screen (call it the Black Screen of Death), and if you are unlucky you can't even get to a rescue shell. Sure, there are dozens of cheat codes [(nomodeset, radeon.modeset=0, nouveau.modeset=0, i915.modeset=0, if KMS is messed up) or (vga="numeric resolution value" for a VGA screen, or xforcevesa) or (some similar acpi cheat codes on the kernel line)] to put you on a workable shell. But who can be bothered with these not-so-dirty yet definitely cryptic codes? Distributions should come up with fool-proof measures to land users on a VGA desktop without much fiddling around.
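For reference, here is roughly how those cheat codes look when appended to the kernel line at boot; the exact names and paths on your machine will differ, and the VGA mode number is just an example:

```shell
# At the GRUB menu press 'e', find the kernel/linux line, and append ONE of
# these; the change lasts for that boot only:
#
#   nomodeset              # disable kernel mode setting for all drivers
#   radeon.modeset=0       # AMD/ATI cards only
#   nouveau.modeset=0      # Nvidia cards only
#   i915.modeset=0         # Intel IGPs only
#   xforcevesa             # force the VESA X driver (Ubuntu-style distros)
#   vga=791                # plain 1024x768 VESA framebuffer console
```

Press F10 or Ctrl-X (depending on the GRUB version) to boot with the edited line.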

The desktop paradigm is gone. It's the era of mobile computing, where sleep/suspend/hibernate/resume is essential. The Linux world has been fighting with these features for years. They work fine with standard distributions running on standard hardware. Sadly, they are far from stable on very new or esoteric hardware.

Bewildering Choice vis-a-vis Rapid Development

Choice is good. But bewildering choice is very bad. The masses look for a few working applications, not a million shoddy clones. The situation is slowly improving in this regard, thanks to leading and serious flavors such as RHEL (and its clones), Debian, Arch and, most recently, Mandriva practicing frugality in their choice of applications and desktop environments. Fewer configurations and fewer packages mean less clutter. Bewildering choice and plurality of design philosophy dilute mindshare, and they kill countless developer hours in re-inventing the wheel. Coupled with rapid development, things get even worse. Take the most popular distribution of our time, Ubuntu. Though it pulls packages from Debian testing/unstable, it puts effort into developing a few packages of its own and polishing them, with fully automated packaging and testing. However, given a six-month release cycle, it can't be putting in more than a month of real development. Who'll expect fidelity from such a fleeting partner!

Linux != Open Source. But the latter is blamed for the plurality in Linux. In Windows, if a certain version of a package works, it works. In Linux that's not always true. For example, Pidgin 2.7.3 on Windows, owing to the singularity of platform and API standards, will be the same across XP, Vista and Windows 7. But Pidgin on Fedora might behave differently than it does on Debian. The difference lies in how the software is packaged across distributions. The same is true of core components such as the kernel: kernel 2.6.38 in the Debian backports repository is not 100% the same as the one in Remi's repository meant for RHEL and its clones. The same goes for Ubuntu, Mandriva, Arch and Slackware. Each distribution carries its own peculiar set of kernel patches, and its own build flags and dependencies for any given piece of software.

Features vs. Polish

Firefox undoubtedly has more options than Chrome, and OpenOffice is more versatile than many a proprietary office suite. Both are feature-rich, but both lack polish. Firefox is trying to catch up with Chrome on the desktop, but it still lacks Chrome's guiding philosophy: frugality. Firefox still caches aggressively like a hungry beast and sometimes forgets to flush. OpenOffice has been jumping from Sun to Oracle to the Document Foundation (as LibreOffice), and it's as slow as a sloth. A performance overhaul is long overdue for OpenOffice (now LibreOffice).

So, How to Save Your Ass?

Hardware and Distribution

1. Choose standard hardware. Save the output of the "lspci" command from any live CD and post the text on popular forums to learn which distributions fully support your devices. Of the supported distributions, choose a stable one from Ubuntu LTS, Debian stable, CentOS or Scientific Linux. If you're a hardcore gamer, forget Linux for a while.
2. Pin the critical packages. If your current setup runs your devices well, pin the core packages such as xorg, the kernel, the sound-base packages and other device drivers so that future upgrades won't break your setup. I've faced sound problems, graphics hell and many booting problems caused by upgrading core packages.
3. Don't tinker much. Choose your favorite distribution, customize it to your liking and forget it. No need to always pull in newer bits and pieces; newer is not always better, and new features may have nothing to do with you. Go by perceivable experience regarding performance, features and stability, not by numbers and benchmarks.
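Points 1 and 2 can be sketched in a couple of commands. The package names below are illustrative, and apt-mark is the Debian/Ubuntu way of pinning; other distros have their own mechanisms:

```shell
# 1. Capture the device list from a live CD session to post on forums.
#    -nn adds numeric vendor:device IDs, which forum readers find helpful.
lspci -nn | tee hardware.txt

# 2. Once everything works, hold the fragile packages against upgrades
#    (needs root, hence left commented here):
#    sudo apt-mark hold linux-image-$(uname -r) xserver-xorg-core alsa-base
#    apt-mark showhold        # verify the holds
```

Unpinning later is the mirror image: apt-mark unhold the same package names.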

Personality and Distribution

1. If you really want to learn Linux and expect a pain-free experience for the long run, choose Arch or Slackware. The things you learn there will last forever, and a well-built Arch or Slackware setup will rarely go wrong.
2. If you want to make a living out of Linux, go with Scientific Linux, because it's perhaps the most sincere clone of RHEL, the present king of the enterprise world. Though it doesn't replicate RHEL bug for bug, it's more predictable and open than its more popular cousin, CentOS.
3. If a great no-nonsense home desktop is all you want, choose one of PCLinuxOS, MEPIS or Mint. All three guarantee a superb desktop experience out of the box. PCLinuxOS gathers the best from across the Linux distros, Mint does Ubuntu better than Ubuntu, and MEPIS polishes Debian to the extreme for a hassle-free desktop experience.
4. If you don't fall into any of the above and are apathetic to Windows, choose FreeBSD, tame it with extra caution and make it your own. It's Unix to the core and very systematically designed. If you don't want to shed that extra sweat, choose OS X: buy Apple hardware, or assemble your own Mac following the insanelymac website and put OS X on it. The OS X kernel is built around Mach with deep BSD roots, so you get many POSIX niceties, including the Bash shell.

That pretty much sums up my two cents!

Thursday, June 9, 2011

Why AMD Fails


With the merger of ATI and AMD, the new AMD Fusion platform offers far better value-for-money performance than Intel. Yet AMD is still nowhere near Intel when it comes to mass adoption. Why? The reason is twofold:

1. Creates hype early but comes to the party very late: Take the Fusion APU hoopla. AMD announced this groundbreaking technology six years back and kept shelving it until Intel appropriated similar technology into Sandy Bridge. Sure, even now Fusion is a better proposition than any of Intel's Atom architectures. But sadly, Atom became ubiquitous before Fusion even knocked on the door.

2. Great hardware but poor driver/software support: Creating new technology and throwing around benchmarks of it is not everything. AMD raved about DirectX 11 and UVD support on the Fusion platform. That's great. But it failed measurably in bringing out open-source drivers within a comparable time frame. Even today, open-source Fusion driver support is poor at best. Be it Gallium, Catalyst or the built-in drivers of kernel 2.6.38, none works as well as touted. A whole line of device innovation trashed for lack of proper software support. Intel, by contrast, promptly provided drivers for Sandy Bridge. Intel's open-source VA-API implementation is better off in many regards than AMD's XvBA under Linux. These days VA-API works quite well, and Intel recently introduced support not only for video decoding but also for video encoding via the VA-API library on the new Sandy Bridge hardware. Intel's next-generation Ivy Bridge support should follow the same path. The recent Intel SNA work has boosted Sandy Bridge, as well as earlier Intel IGPs, to a great extent. Here AMD lags.

Weird Dependency of Packages in Linux World

Often, even today, I come across package-dependency problems in Linux. Most of them bite while you are installing packages offline or building something from source. However, the worst of the lot, in my opinion, are those weird packages that you can't remove: if you dare to remove them, you're going to uninstall some key packages along with them, which may render your PC useless!

There are thousands of such cases. Here I will cite just four: fortune cookies, cowsay, libthai and libgweather.

Last night I was trimming Linux Mint Julia on a friend's netbook, running on a bare-and-basic Intel N450 platform. I removed as many packages as possible. But while removing those four weird packages I got frightening warnings.

While removing libthai, I got a warning (click on the pic below to see it at true aspect/size) that it was going to remove alacarte, artha, bleachbit, avidemux and some 94 other packages. Damn, what does libthai have to do with my computing life! I never browse Thai websites for work or fun. Why the heck is it a dependency of some core/key packages? And why isn't libindic or libafrica (if they exist at all) such a dependency?


The next weird dependency is the combination of fortune cookies (fortune-min, fortune-mod, fortune-husse) and cowsay. Try to remove them and you'll get a notification to remove ubuntu-minimal, mintsystem and some other core/key packages (click on the pic below to see it at true aspect/size). WTF? Why have the GNOME or Mint (or whoever) people bundled these fancy programs as mandatory? A fortune or cowsay message is a nag for many, yet they can't remove it. Sure, there are workarounds to silence fortune and/or cowsay. But...?


Finally, why is libgweather an integral part of GNOME? Removing the libgweather packages warns of removing gnome-panel and the indicator applet as well (click on the pic below to see it at true aspect/size). What was wrong with making it an optional package? I just like a clock, a volume applet and a network monitor in my GNOME systray; sane and usable.
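Before ripping packages out, you can at least preview the fallout. apt's simulate flag lists what would go without touching anything; a sketch for Debian/Mint systems, using libthai0 as the example package name:

```shell
# -s (simulate) prints the would-be removals without performing them.
# The guards keep this harmless on systems without apt.
apt-get -s remove libthai0 2>/dev/null | grep '^Remv' || true

# Or walk the reverse dependencies directly:
apt-cache rdepends --installed libthai0 2>/dev/null || true
```

If the "Remv" list includes anything you recognize as vital, stop right there.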


As I said, there are hundreds of such weird packages built as key dependencies of other genuinely important packages. It seems the Linux world is leaning more towards fancy than function.

Monday, May 30, 2011

Linux Kernel 3.0 is Not Far Away


Linux kernel 2.6.39 has just been released, much earlier than expected. Reasons?

The 2.6 line is coming of age after 39 updates. The Linux 2.6 kernel series is now on its way to its 40th release in seven years of development. The 2.4 series had about 24 releases before Linux 2.6.0 came out, and as of today it is up to 2.4.37. Now Linus (along with the community) is getting ready for the next-generation kernel: 3.0. The 2.6 series will still get patches and updates, just as the 2.4 line still does.

Is it just a change in the versioning scheme, or is there much under the hood? Well, the jump from 2.4 to the 2.6 line brought some striking features, and the same may happen now. Most importantly, kernel 3.0 will shed some of the old cruft gathered over its life of, say, roughly 20 years.

This jump in versioning won't reflect in the package lists of enterprise Linux distributions such as Red Hat (and its clones), which are adamant about security and stability. That means Red Hat, CentOS, Oracle and Scientific will stick to the 2.6 series for roughly a decade, backporting only select features.

Thursday, May 19, 2011

How to Get Rid of "Unlock Keyring" Message in Linux after You Changed Login Password

I've been running Linux Mint Julia for quite a long time now. Only yesterday I updated the entire system (kernel, Firefox, Chromium, OpenOffice, Pidgin and all that). I also changed my login password for security reasons. No ugly surprises, no lag in performance. But every time I started the Chromium browser, it popped up a "Login keyring" message that read "Enter password to unlock your login keyring".


It seems Seahorse uses the login password as the master password to unlock its passphrases. Sadly, when the user changes the login password, the change is not propagated to Seahorse, and that "Login keyring" popup shows up.

I visited both the Mint and Ubuntu fora for a fix. All the solutions there pointed to Seahorse (two items under System >> Preferences: Passwords and Encryption Keys, and Encryption and Keyrings).

However, the easiest and unfailing fix is to remove ~/.gnome2/keyrings/login.keyring; a fresh keyring is created from your new login password at the next login.
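If you want a safety net, stash a copy before deleting; a minimal sketch (the backup directory name is just my choice):

```shell
# Back up the old keyring, then remove it; GNOME recreates it on next login.
mkdir -p ~/keyring-backup
cp ~/.gnome2/keyrings/login.keyring ~/keyring-backup/ 2>/dev/null || true
rm -f ~/.gnome2/keyrings/login.keyring
```

Log out and back in, and the nagging popup should be gone.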

Saturday, May 14, 2011

Preparing a Bootable USB to Install Windows 7 on Asus 1215B


The Asus 1215B perhaps puts the components together around the Fusion APU the best way: great design, brushed-aluminum finish and quality components. However, with its 12" form factor it could not house an optical drive, the typical problem of notebooks below 13". So, if the product ships without any OS preinstalled, you are left with just two options: 1. install an OS using a borrowed/bought USB optical drive, or 2. prepare a bootable USB with the OS of your choice.

I don't have a USB optical drive, and no one around me has one either, so a bootable USB was the way to go. Had it been a Linux install, I could have done it with a plain dd command or the unetbootin tool. But from what I see, this gadget still doesn't have out-of-the-box support in Linux land; I am sure things will improve with kernel 2.6.38. Meanwhile, Windows 7 is the best OS for this notebook. But creating a bootable Windows 7 USB on a Linux machine demands that you get dirty with the command line.

I did try packing the Windows 7 ISO image onto a USB drive using dd (the most common disk-copy method; what I did was "dd if=/home/msahu/Window7Ult.iso of=/dev/sdb1"). The installation started but got stuck mid-way. Then I followed the old tried-and-tested formula, and it worked like a charm. Here is the rundown:

1. blanked the boot code on the USB drive (note: the whole device, not a partition)
dd if=/dev/zero of=/dev/sdb bs=446 count=1

2. ran fdisk on the whole device
fdisk /dev/sdb

3. removed all the partitions of that usb drive, created just one primary partition and turned on the boot flag

4. formatted the new partition as NTFS
mkfs.ntfs /dev/sdb1

5. finally extracted the contents of the Windows 7 iso image to the usb drive and booted the Asus 1215B from it
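You can rehearse the partitioning steps safely on a file-backed image before pointing anything at the real stick. A sketch using sfdisk for scriptability (the image name and sizes are arbitrary; swap in the real whole device, e.g. /dev/sdb, never /dev/sdb1, only when you're sure):

```shell
# Rehearsal on a 64 MB image file standing in for the USB stick.
DISK=usbstick.img
dd if=/dev/zero of="$DISK" bs=1M count=64 2>/dev/null

# One bootable primary partition of type 7 (NTFS), spanning the "disk".
echo 'start=2048, type=7, bootable' | sfdisk "$DISK" >/dev/null

sfdisk -l "$DISK"    # verify: the partition line should carry the boot flag (*)
# On the real stick you would now run mkfs.ntfs on the new partition and
# copy the extracted ISO contents onto it.
```

The same layout is what steps 2-4 above create interactively with fdisk.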

Saturday, April 23, 2011

Idiot Indian IT Sales Guys on Loose

Goal: Getting an ultra-portable power-efficient notebook
Territory: Delhi's much hyped PC Hardware market, Nehru Place
Result: Meeting the dumbest of IT sales representatives

The Indian IT force is growing, but the PC hardware market is definitely not keeping pace. It's still a mom-and-pop culture. I just came back from Nehru Place, touted as the biggest market for IT products in India. I visited the outlets of the big brands (Dell, HP, Lenovo, HCL, Acer) and a few shoddy brands such as Chirag Greenputers, eSys and Intex. Everywhere I found the representatives trailing prospective customers and throwing out their USPs, and sometimes those USPs have nothing unique about them. Paradox!

The visit to Chirag Computers (or Greenputers, as the company calls itself) was a laugh riot. A sales girl came up and handed me a brochure. Before she could launch her pitch I started grilling her: "What's so green about your greenputers?". She went on and on: "Our computers save energy. They come with Windows 7 pre-installed and automatic sleep modes preconfigured, hence they use less energy. Each of our products has Energy Star certification. Our computers have so much storage space for your documents that you won't need to maintain hard copies; you'll save on paper....", and she was confident throughout.

My next reaction: "WTF! You are such a beautiful girl, but why are you throwing out such dumb points? Automatic sleep mode is neither unique to your products nor a unique feature of Windows 7. AFAIK, it's been there since Windows 2000 and the hardware of that era, or before. Besides, computers generally never run out of space because of documents; it's the audio, video and graphics stuff that consume the most bytes. Moreover, every other manufacturer/assembler sells computers with Windows 7 and hard drives as spacious as yours, sometimes more. Are your products in any way superior to, or different from, Dell, HP or Lenovo?"

She stood still and I went out.

Next I spotted a bunch of clueless representatives at the Lenovo outlet. A suave gentleman recited the features and prices of the dozen notebooks on display. "Do you have the X120e? It's the one launched four months back, sporting an AMD Fusion E-350 APU." The gentleman walked over to a group of his fellow representatives for an answer. After some cross-questioning within that group, he came back and told me, "Lenovo doesn't have any notebook with that model number, and there's no such thing as a Fusion E-350 APU. What is it, a graphics card, a chipset, a processor...?"

This time I did not utter that "WTF". Instead, I browsed the official websites of Lenovo and AMD on my tablet and showed him that not only does that model exist, it's also a top seller. Then I told him at length about the Fusion series APUs, and made him understand that the world is not restricted to his showroom.

I had similar funny experiences with the guys at Dell, HP and HCL. Most of them knew only a few stock phrases (Dual Core, i3, i5, i7, Atom, Nvidia, ATI...), nothing beyond. All of them still believe that AMD processors produce a lot of heat, whereas the AMD processors that came after the Athlon Neo II run much cooler than any of the Intel chips. They looked quite puzzled when I told them that not just AMD and Intel make processors; there are VIA, Cyrix, Tegra and a dozen others...

No one I met had the faintest idea about the various processor families, chipsets, platforms and upcoming products. None of them knew the difference between selling computers and selling potatoes.

This is the situation in Delhi, let's forget about the rest of the country.

BTW, I am going to buy Asus 1215B.

Tuesday, March 22, 2011

Is CentOS Dying?

There must be serious issues within the project though the core personnel don't acknowledge it!

RHEL 6 was released on 2010-11-10, RHEL 5.6 on 2011-01-13, and RHEL 6.1 Beta on 2011-03-22. Both Scientific Linux and OEL have released versions 5.6 and 6. The CentOS counterparts are nowhere to be found. With every date slipping, it seems CentOS 6 will see the light of day sometime in May 2011; Red Hat will probably have pushed 6.1 out the door by then. The long delay (almost five months) in bringing out v6 has triggered some black-comedy posts in the CentOS fora, such as "The 'C' in CEntOS means 'Closed'!" and "Things are getting from EL6 to HELL6".

I don't know why there's such an annoying delay in rebuilding packages from a stable upstream. CentOS 6.0 development is not development proper but the rebuilding of some hundreds of packages; CentOS has a comparatively easy job in making a release. Projects such as Debian and FreeBSD do a heck of a lot more: it's many times harder to release a new FreeBSD or Debian than to do a rebuild. I wonder how long it would take to rebuild the system if the CentOS base had the number of packages Debian has. Both Oracle and Scientific Linux also do a lot more than just debranding and recompiling, yet they don't slip their dates like CentOS, and if they ever do, there's due communication about it.

What does CentOS follow now, SL, OEL? Why this late-to-the-party strategy? What is the aim of CentOS? Simple technical satisfaction? If so, it should get out of the way and drop the Enterprise tag. The lack of communication about the state of development, and this unprecedented delay in major releases, are steadily turning it into a hobbyist's distribution. The silence also seems like a deliberate decision to keep users in the dark. They should come out and tell users that the project is closing and that they should look for alternatives, preferably Scientific Linux. CentOS had promised (though of late some CentOS guys deny it) regular updates within 72 hours, security errata likewise, and BugFix and Enhancement errata within two weeks after more rigorous testing. The current delay is serious enough!

There's apprehension that CentOS is probably not going to survive for long. The developer group is really too small, and the method they use to prepare and deploy CentOS is too slow. Scientific Linux and OEL are superior in almost every way (paid developers, planned schedules, better communication, etc.). IMO, Scientific Linux is no less stable; it's just that CentOS gained its reputation from its earlier timely and good releases, and there's an inertia of mindshare. However, the recent irregularity will surely push a lot of CentOS users to Scientific Linux, and that's for the good.

Saturday, November 13, 2010

RHEL 6 has Nothing Noteworthy for Home Desktops

Red Hat Enterprise Linux 6 Final showed up on 10th November 2010, almost 44 months after the previous major release (RHEL 5 was released on 14th March 2007). But by the time it arrived, it was already a bit obsolete for desktop use. Of course, the desktop has never been a sweet spot for Red Hat. But would it really have tarnished its rock-solid stability by riding a few versions up on some packages? What held Red Hat back from adopting the KDE 4.5 series, or for that matter jumping to GNOME 2.32? Sure, it has backported some goodies from Fedora 13 and 14, but those work underneath; the worry is that it'll carry these DEs for, say, 7 to 10 years. Moreover, KDE underwent many improvements between 4.3 and 4.5, and the same can be said of GNOME. Debian Squeeze's desktop readiness (considering the DE, system utilities and application software) is more modern than Red Hat's. Red Hat has yet again, indirectly, proved that it's not for desktops.

Some of the enterprise stuff is also a bit obsolete; for example, the Perl and Python versions in this major release are at least a year old. The rock-solid Red Hat stability also leans towards servers. Though it aims at customers who care less about version increments than about bug fixes, it still cherry-picks those fixes, so moving to a later point release doesn't always solve a problem. For example, the boot-delay bug (very important if you value desktops) that crept into RHEL 5.3 is still there in the latest 5.6 beta and will probably remain in 5.8 (if it ever comes). You can expect similar glitches in RHEL 6.

Thursday, November 4, 2010

Why I Prefer Debian to RHEL: Top 5 Reasons


RHEL still remains the staple for academicians and enterprises. If you want to pursue a course in Linux, you are advised to do RHEL, because that's what enterprises care about. Gradually those people advise others to learn and use RHEL even for home desktops. Perhaps that's why RHEL was synonymous with Linux some years back, until Ubuntu made inroads with a bang.

Given desktop usage, I'd choose Debian over RHEL anytime (I'd play dumb if you ask me about servers; I'm still learning). Why?

#1 Debian does not have those enterprise crap

Install any version of RHEL and its contemporary Debian counterpart on a dual-boot desktop, maintain a comparable set of applications, and compare the two installations. You will wonder why RHEL includes all that enterprise crap in a desktop installation.

#2 Debian cares for desktop users

Server people make an installation once and forget it for years, but the story is different on desktops. RHEL never really cared for desktops: it never put in significant work to improve the boot time and performance of a desktop system, and the bugs that matter to a home desktop get pushed down the priority list for years and years. Here is one such case.

#3 Debian, though it cares about stability, pushes major releases more quickly than Red Hat

Most often, Debian and RHEL would end up in a tie in a battle for stability. But Debian is prompter when it comes to major releases. Look back: RHEL 5 was released in March 2007, around the same time as Debian 4 (Etch). Since then, Debian has released Debian 5 (Lenny) and frozen Debian 6 (Squeeze), which may go gold later this year. Any answer from RHEL? It may take almost four years for RHEL to release the next major version. Though RHEL frequently churns out point releases, it shows its age compared to Debian.

#4 Package management is lot easier on Debian

Compare yum, pirut or yumex with aptitude, apt-get or synaptic, and you will find the *apt* tools faster, simpler and superior to the *yum*my stuff. Besides, kernel recompiling and building packages from source are a lot easier on Debian than on RHEL, though with 30,000 packages in the Debian repos you will probably never need to. RHEL feels like a closed-source OS in an open-source ecosystem; taming it into something desktop-friendly will land you in error territory.

#5 Debian is definitely less of a resource-hog and much snappier than RHEL

I've tested the 2nd beta of RHEL 6 and compared it with Debian testing (Squeeze). Those looking for proof can compare RHEL with a Mintified Debian Squeeze. Needless to say, Debian runs circles around RHEL in boot speed and system responsiveness, with a smaller memory footprint.
