I have been a daily linux desktop user since 1997 and I will freely admit that part of the enjoyment for me was getting everything working and setting up my perfect desktop environment.
These days, as many others have pointed out, choose the right hardware and everything just works.
At my company, we are currently in the middle of upgrading aging Windows 10 desktops in the Customer Service department to Ubuntu LTS. So far the feedback from the CS agents is universally positive. Ubuntu runs faster on the existing hardware and that's about all they notice. Chrome is still Chrome and that's what they use for all the CS apps, including VoIP calling.
I have found it's super dependent on what hardware you get, especially on laptops. I always pick hardware that works perfectly with Linux and I am left with a very premium experience, but when using Linux on bad hardware like MacBooks and Broadcom wifi, everything just doesn't work right.
Yep. Exactly this. If you get a laptop that a GNOME developer uses, chances are you’ll run into maybe two issues a year. But try to get GNOME to run on two 4K displays and an RTX.
But here’s the thing. Linux has won. Desktop is a dying market. Linux is literally the most used operating system on phone and in the data center, which is where it counts.
The DE has gotten much better transitively from all the work that’s gone into those other use cases. But we still have a gamer market that is incredibly proprietary holding up the middle finger to those of us who think proprietary drivers and codecs are evil relics of a less open world.
If you really want to see Linux on the DE, boycott NVIDIA, or write them a letter.
Actually, tablets are a dying market.[1] Laptops are a declining market. Fewer desktops are being sold, but they're lasting longer because there's no reason to replace them.
This seems right. Apart from store PoS systems I haven't seen a tablet in years and my desktop is 5 years old without any signs of needing replacement. I think it might last another 5 years.
So, as we move forward, all these desktops will end up having stable and mature support. Drivers are often lacking for bleeding edge components, but stuff that was released 3 or 5 years ago tends to be very solid unless it's something obscure nobody has ever seen.
Microsoft doesn't want you to have stable and mature support. They want you using the latest everything and paying for it each month. Microsoft pushed hard to force users to upgrade to Windows 10.[1] Even today, many enterprise users refuse to convert. They don't want Microsoft's new rent, rather than own, software. They don't want Microsoft looking into their machines. They don't want to store corporate data in Microsoft's cloud. But they will convert, like good sheep.
> Linux is literally the most used operating system on phone
What makes this such a victory? The fact that Android uses a Linux kernel is totally meaningless to the overwhelming majority of Android users. The bootloaders on their phones are almost always locked down and even if that weren't the case, if their kernels were swapped out with something BSD derived how many people would actually notice?
What is the objective with promoting Linux? Adoption for the sake of adoption? Getting the Linux kernel onto the largest number of CPUs, just to make that number go higher and higher? Is pursuit of some number the real goal, or is the goal actually something to do with user empowerment and liberation/freedom? A billion tivoized smartphones with linux kernels certainly optimizes for that number of CPUs, but from a user liberation/freedom standpoint Linux/Android is virtually moot.
That Android gives the user more freedom than iOS is completely incidental to what kernel it uses. Android could be a platform open to third party app stores and sideloading that ran on a BSD kernel or even something completely different. To the average Android user, their Android phone is actually less open than Microsoft Windows, which also allows third party 'app stores' (e.g. Steam) and 'sideloading' (so normalized on Windows that it doesn't even use a special term like that.) Whether a system will be open to third party developers without any gatekeeper is a matter of company politics, not something that's emergent from what kernel is chosen.
So yeah, pop open the champagne. Linux won and I'm going to need a lot of champagne to enjoy this hollow victory.
A large majority of Java developers wouldn't say "Java won" regarding Android, because it is in fact Google's J++, not compatible with a large number of random packages from Maven Central.
>But here’s the thing. Linux has won. Desktop is a dying market. Linux is literally the most used operating system on phone and in the data center, which is where it counts.
It "counts" for whom? It sure doesn't count for me as an end user who wants to run some bloody programs on my laptop.
Also whether it's Linux or anything else on the phone it's totally irrelevant to the end users. The actual layers the users see are the Google stuff and the Google Java etc APIs. By porting these they could -- and will -- move Android to Fuchsia tomorrow and nobody will even notice.
When we wanted Linux to "win" in the 90s it was the whole desktop Linux (or, had such a thing existed at the time, a mobile phone Linux), complete with desktop environment, userland, etc.
Not as a backend that might or might not be Linux for all the users care.
> BSD or what have you and none of those users would notice
FaaS and CaaS doesn't negate that there is a kernel API you're interfacing with. It absolutely makes a difference. Claiming that kernels are completely interchangeable is extremely naive, but then again, your first statement about hypervisors is so absolutely, confusingly incorrect that it's difficult to take what you say seriously.
Or, you know, it was a slip or a bad syntax, and you took the least charitable interpretation possible ("so absolutely, confusingly incorrect") and then took that to further extremes "that it's difficult to take what you say seriously".
>FaaS and CaaS doesn't negate that there is a kernel API you're interfacing with
Only in as much that it can't itself be emulated...
> If you really want to see Linux on the DE, boycott NVIDIA, or write them a letter.
I was watching a video on youtube (can't find it now) about gaming on Linux, and it suggested to update the PPA and install the latest drivers. (this was after I refreshed my desktop to be Ubuntu Budgie)
So I did that, installed the latest Nvidia drivers. Tried out some Steam games, and went to bed. Next day I couldn't boot into the desktop, got stuck in a loop, couldn't remove the drivers and get into the desktop at all. It completely bunged my computer. Apparently a lot of people had similar issues.
That's just Nvidia. I had the same in Windows two years ago. Windows update automatically installed the latest Nvidia drivers, and after a reboot the screen was black after logging in. I assumed something got messed up (because Windows) so reinstalled Windows. It worked fine for a couple of hours, then I rebooted and had the same thing. Apparently that version of the driver was buggy with my hardware (GTX 1070), so I had to go into safe mode, downgrade and stop Windows automatically upgrading it.
This is literally my only (big) complaint about Linux. Video drivers just SUCK.
I have NEVER had a plug and play solution for display from my laptop. I do expect to be able to plug in an HDMI cable and be able to give a presentation. And I expect to be able to do this with an nvidia card. But they have always failed me and I don't understand why this problem still exists (I would actually be interested if someone knows).
About a month ago, I bought an RX 580 [1] and it worked plug-and-play (zero setup) on Fedora and NixOS. On the Windows side of things, I needed to go to the AMD website and download drivers in order to get out of reduced resolution.
It's an Nvidia problem. Nvidia's proprietary drivers suck and they're nasty to anybody who tries to develop open source drivers for their cards. AMD and Intel GPUs have been flawless plug-and-play for years now. AMD plays nice with linux developers and you should give them, not Nvidia, your money.
Exactly. If a wireless card didn't work in Linux you'd hardly say Linux is the problem. You'd say don't use that card.
You wouldn't be super happy if you just bought that PC, but you'd understand though. Another example: if you buy a car and it doesn't have Android or Apple integration, you'd be like "damn, I should have researched it".
I was surprised when I saw the newest Vega firmwares show up in my /lib/firmware/amdgpu/ with a kernel update only 6 days after the release of the last AMD GPU -- and on Manjaro, where updates are supposedly delayed a little. AMD touted "day one Linux support", which I don't know if they fulfilled, but they're certainly more committed to Linux than Nvidia.
Unfortunately, Nvidia's so dominant (and rightly-so from a price-per-performance unit standpoint).
Also, though: would someone on Ubuntu or a point-release distro (which is the majority of Linux end users) ever have these new firmwares until they reinstalled to the next OS version? Since hardware driver support seems so tied to the kernel version, I would think not, unless they manually updated the kernel (which is not only scary on a distro meant to run with a specific kernel, but also something they have to go out of their way to do). That also means they'd miss out on DE improvements and other hardware/protocol support by as much as a year or more, depending on their upgrade cadence. TL;DR: rolling release provides the best Windows-like experience, and an Arch-based distro that emphasizes usability and mitigation of updates that break the system is what best fills that niche, so Manjaro master race.
>Unfortunately, Nvidia's so dominant (and rightly-so from a price-per-performance unit standpoint).
I'm not sure about the price-per-performance. I just bought a new system with a Vega 64, and if you take into consideration I could use FreeSync instead of paying an extra $200 for GSync in a monitor, it works out better.
On Windows sure (I assume), but that goes out the window when you use Linux. On Linux with an Nvidia card you're paying for more downtime/breakage, much worse performance with the FOSS drivers, incompatibility with wayland, etc.
You know... If the thing doesn't work, price doesn't matter that much. I prefer something more reliable that's not as fast (I've been using Intel GPUs for many years now)
And rightly so, like you said. That's why I can't take the other user's advice and just shell out money to AMD.
And while I do love Manjaro, I still have this problem. In fact I get no detection of my HDMI port. When I had Ubuntu I could (sometimes) get it to display if I restarted the computer with the HDMI plugged in.
I'm also not sure why the Linux devs don't take this as seriously (or at least not that I've seen). Linux is in such a good state now that it is easier to convert people. But this problem prevents A LOT of people from switching, and rightly so.
As for "NVIDIA sucks and doesn't play nice": there needs to be a better argument than that. Like, why? There are so many people developing on Linux with their cards. They are dominant. Most GPU programmers use CUDA and a significant amount of ML researchers are using Linux boxes. It doesn't make sense (to me) to just say f you to all those developers. Nvidia doesn't have a motive to push people to Windows.
I'm using an RX 560 on Linux, I added a PPA to get the newest stable driver and Mesa, and overall it works fine, with a couple of minor annoyances that may be fixed as I'm writing this.
One thing that doesn't seem to work is hardware acceleration for Youtube videos in Firefox. Playback in a window is fine, fullscreen isn't. Another is that dual-link DVI did not work, so I had to change to using DisplayPort instead, though that issue may have been fixed in the current driver, I haven't checked. For me that's fine, I wanted to switch to DisplayPort anyway, to use the audio output on my monitor.
Those are incredibly minor issues compared to the woes of using the proprietary Nvidia driver. Not to mention the sordid history of graphics drivers on Windows.
While it says that, it doesn’t actually link to anything about the modern amdgpu driver - developed officially by AMD and now part of the vanilla kernel - and the majority of the complaints are regarding nvidia.
At this stage, I'd say I'll believe it when I see it. I don't doubt that Google has the capacity, but replacing a kernel that works is fraught with peril, and it could very well be that Fuchsia remains yet another Google research project.
I think the point is just that the Linux kernel is irrelevant to the Android operating environment. It could run on anything Google wants it to and may do so in the future.
They can't. I mean they can on their phones. But that's a tiny percentage of the total Android HW. All the big SoC vendors would have to get on board and rewrite the millions of lines of their kernel code on top of Fuchsia. Android is also not just phones, but also on tablets, TV boxes, etc.
This would just cause even more fragmentation in the Android ecosystem.
> All the big SoC vendors would have to get on board and rewrite the millions of lines of their kernel code on top of Fuchsia.
Apple's PPC to x86 transition was successful, as was their Carbon to Cocoa transition. If you want device driver specific examples, Microsoft's Win9x to XP transition was successful, if lengthy, as was its transition to WDDM for video drivers.
SoC and other hardware vendors go where the money is; they may grumble and drag their feet but they will make the transition if it's required of them.
> Laptops and 2-1 devices are the future desktop, and Linux has hardly won there.
That is more due to how OEMs lock out non-windows OSes from running on their hardware. If users were given a choice, the "windows tax" alone would be enough to convince people to give linux a try, particularly in the low-end segment.
- Linux is an implementation detail in Android, isn't exposed to userspace (not part of the Java/NDK official APIs), Treble made it even less upstream-like and, who knows, it might even be replaced by Fuchsia's Zircon
- Apple products being locked in doesn't have anything to do with laptops and 2-1 devices being the future desktops
As someone who owns a 2-in-1 (Lenovo Yoga 900 series), it works almost perfectly out-of-the-box.
Screen rotation, pen support, virtual keyboard, they're all there and get the job done. The only thing that doesn't happen out of the box is that my keyboard remains activated when I switch to tablet mode.
I'm having way more issues with proper scaling on my 4K screen than I do with the 2-in-1 support.
> Linux is literally the most used operating system on phone and in the data center, which is where it counts.
Since phones are completely locked down, and Linux is the symbol of software freedom, I'm not sure of that.
Besides, companies still need desktops (or laptops) to work. Nobody is going to do accounting on Android or iOS. And I'm certainly not going to dev on anything other than Ubuntu.
> and in the data center, which is where it counts.
Do you have any stats on this? As far as I know Windows and Linux are very competitive on the server and depending on the analysis Windows comes ahead.
I assume this is true for most other cloud providers. But that doesn't take into account all the companies running racks of Windows servers in their basements.
Keep in mind that this is Azure, the cloud provider with almost certainly the most Windows installs. They try to push all their existing corporate clients to the cloud, and from what I've seen in enterprise environments, they're very successful at that. I do not expect AWS or Google to have anywhere near Azure's numbers of Windows VM's deployed... I wouldn't be surprised if their hosted Windows VM's would be a rounding error compared to MS/Azure's install base.
I have a MacBook that is malfunctioning under MacOS (it's running at 60-80% CPU usage when idle, known problem, no fix available). I tried installing Linux on it, couldn't get it working at all.
I have a Dell XPS15, specifically chosen because the XPS line is supposed to work well with Linux. Numerous problems, all related to drivers. But the main problem is that every time Windows updates it wipes out GRUB.
I figured that my problem is that every laptop I've bought has been designed for a non-Linux OS. So I've ordered a Purism laptop, which should arrive any day now. Hopefully actually buying a laptop designed and built to run Linux will provide a better experience.
I'm using a Dell XPS 13 (one of the new ones) and it works wonderfully. I don't run Windows on it so I don't have any issues with my bootloader, and I saw advice on the internet to get one with no Nvidia GPU, so I got one with Intel graphics.
It really does have an OS X-level premium feel. The only bit I am missing is fractional DPI scaling, which is apparently on the way, but turning on big text in accessibility mode works well enough for now.
At work most of our developers run Linux. The hardware mixture is Dell (XPS 13, XPS 15), some Thinkpads and two who use System76. Several of us use 4K displays, and I can't remember the last time anyone had trouble with a projector. Maybe the older XPS 15 might have, once upon a time. It has an Nvidia GPU and so I want to believe it has been flaky at least once, but I cannot recall a discrete incident. All the rest have the standard Intel graphics and everything's peachy.
Distros are all either Ubuntu LTS or Linux Mint with default DEs. I am pretty sure one fellow switches to a tiling WM some of the time, but I forget which.
>But the main problem is that every time Windows updates it wipes out GRUB.
From my experience, this issue comes from trying to boot Linux from an MBR while Windows uses EFI. I've never seen Windows mess with another EFI bootloader on the ESP. This issue is made worse by programs like unetbootin being terrible at using EFI.
ThinkPads usually just work with Linux. I bought a second hand T470s last year, and nearly everything works without any issues under Linux. Even my TB3 dock (some Chinese unbranded device) just worked when I plugged it in.
The only thing that doesn't work 100% is the fingerprint reader; apparently you need to set up your fingerprint on Windows first.
The XPS 15 doesn't have a linux "developer edition", but it still plays quite well with Linux (even if it's less "plug and play" than the 13 model).
It comes with Nvidia Optimus and that is still not supported on Linux (thanks to Nvidia) so you may want to either turn off the dGPU or use bumblebee. The HDMI port is wired to the Intel GPU so it should just work.
In the future, just to be sure, I advise you to check the Arch Wiki: the most popular laptops have a page there listing what works, what doesn't, and sometimes work-arounds.
Oh I'm not blaming Linux - the problem is definitely manufacturers not testing their machines for Linux. I'm especially mad at Dell because the XPS line is supposedly "Linux friendly" and yet not. Last time I buy a Dell.
I'm travelling at the moment, so a desktop games machine is not an option. And I have to test stuff on Windows. But when the Purism turns up I'll relegate the XPS to games machine duty ;)
This is another issue. Have a problem? Don't worry, someone will helpfully tell you to try another distro. Have a problem with that one too? Don't worry, someone will helpfully tell you to try another distro.
Why would it? Hardware support boils down to new kernels, mesa stacks etc and how many non-free drivers the distro wants to include. openSUSE (Leap) is not particularly cutting edge and is more restricted concerning non-free components than many other distros (e.g. why does it need Packman?).
So, no. It does not have "the best" hardware support and no distro has "by far" more hardware support than all other distros.
> Why would it? Hardware support boils down to new kernels, mesa stacks etc and how many non-free drivers the distro wants to include.
Disagree. At least traditionally package choice and configuration by distro maintainers made a huge difference, as proven by the fact that problems could be solved by just fixing a config file or adding a package from the standard repo.
I've been using Ubuntu LTS versions since 12.04 on Thinkpad T and X series laptops and I'm a very happy camper - out of the box Ubuntu doesn't suck for me, it "just works". I moved from OS X on the latest Apple laptops to make my daily job (interaction design + web development) more productive (e.g. workstation running the same OS as servers, tooling etc) but now it's my preferred OS + hardware combo from an end-user perspective. I have to switch back to an Apple machine for testing and pairing with co-workers at least once a week and between the new Apple laptop keyboard, the random reboots (waking from sleep), shitty web font rendering and intermittent errors relating to Apple ID, I don't miss it. I really loved OS X quite a few years ago but between the latest hardware (don't get me started about cords/dongles needed for a 2018 MacBook Air) and the growing list of OS X quirks I'm always happy to return to Ubuntu 18.04 on my Thinkpad T450s.
I think many of the points raised in the article affect people making desktop software for Linux rather than end-users of desktop Linux. It seems like a global list of issues for the entire desktop Linux ecosystem - which is totally valid but I think a more accurate title of the article might be "Why developing desktop software on Linux sucks" or "Why creating a desktop Linux distribution sucks" because I think my desktop Linux setup rocks!
Just an aside: at a previous job, I worked with devs that used Macs, where I had a Linux VM on a Windows laptop (and we deployed to Linux). Numerous times, I found bugs in coworkers' code because they ignored case sensitivity in filenames. Yes, OS X is BSD and Unix based, but by default the file system is case-insensitive, like Windows, and apparently if you make it case sensitive you can break a lot of popular Mac software.
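A quick way to see the class of bug (my own sketch, not from the thread): create a file with one casing and look it up with another. On a default macOS or Windows filesystem the second lookup succeeds; on a typical Linux filesystem it fails, which is exactly where those coworker bugs came from.

    /* case_probe.c - crude check for filesystem case sensitivity */
    #include <stdio.h>
    #include <sys/stat.h>
    #include <unistd.h>

    int main(void) {
        /* Create a file with a mixed-case name... */
        FILE *f = fopen("CaseProbe.tmp", "w");
        if (!f) { perror("fopen"); return 1; }
        fclose(f);

        /* ...then look it up with a different casing. */
        struct stat st;
        if (stat("caseprobe.tmp", &st) == 0)
            puts("case-insensitive filesystem (macOS/Windows default)");
        else
            puts("case-sensitive filesystem (typical Linux)");

        unlink("CaseProbe.tmp");
        return 0;
    }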
What file explorer do you use? Nautilus pisses me off. I'm 100x more productive with Explorer on Windows.
Some features I'd like:
- Being able to open the context menu for the current folder, even if there are enough files to fill the view, without going up a level
- Being able to jump to files/folders in the current directory by name without opening search results
- Being able to add functionality to the context menu
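For what it's worth, the third item has at least a partial answer in Nautilus: anything executable dropped into ~/.local/share/nautilus/scripts (on recent GNOME versions) shows up under a "Scripts" submenu in the context menu and receives the current selection via environment variables. A minimal sketch, with the action and log path entirely made up:

    /* Place the compiled binary (or any executable) in
     * ~/.local/share/nautilus/scripts to get a context menu entry. */
    #include <stdio.h>
    #include <stdlib.h>

    int main(void) {
        /* Newline-separated local paths of the files selected in Nautilus. */
        const char *sel = getenv("NAUTILUS_SCRIPT_SELECTED_FILE_PATHS");
        if (!sel || !*sel)
            return 0;

        /* Made-up example action: append the selection to a scratch file. */
        FILE *log = fopen("/tmp/nautilus-selection.txt", "a");
        if (!log)
            return 1;
        fputs(sel, log);
        fclose(log);
        return 0;
    }

It's nowhere near Explorer's shell extensions, but it covers the simple "run this on the selected files" cases.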
He's right, of course. The Linux community has been in denial about this for years.
At the kernel level and close to it, the areas that consistently give trouble are video/GPU support and audio support. GPUs are hard, but there's no excuse for the mess in audio persisting for a decade. Video/GPU support is tough, but the current situation, where you have a choice of five different NVidia drivers for the same board, all with different bugs, is not good.
As the author points out, regression failures are a big problem. The sheer bloat of Linux has made it unmaintainable. And who wants that job? Big chunks of important code are abandonware.
> The Linux community has been in denial about this for years.
That's not right. We all know support for some hardware is spotty and we all have learned to avoid that. My laptops tend to use Intel GPUs, for instance, because I want to work on them, not fix them.
I'm eyeing that new Lenovo thingie with an epaper keyboard, but I know it'll run Windows and probably never be upgraded because nobody will write the drivers to keep that thing alive past Windows 12.
> you have a choice of five different NVidia drivers for the same board, all with different bugs
> The sheer bloat of Linux has made it unmaintainable.
Nope. It's still moving forward and it's still quite reliable. All my workloads run on it (except my pets that run on FreeBSD and OpenIndiana because I get a kick out of managing different OSs).
> Big chunks of important code are abandonware.
There is a process to move obsolete codebases out of the kernel. That's why you can't use one of those half-IDE CD-ROMs that came with "multimedia kits" of the early 90's.
> Nope. It's still moving forward and it's still quite reliable. All my workloads run on it (except my pets that run on FreeBSD and OpenIndiana because I get a kick out of managing different OSs).
That's the denial there.
I'm a Linux guy. I post this from a linux distrib.
But realistically, we do have a huge amount of technical debt, and less and less incentive to work on it.
Case in point: every time we touch something to improve it, we break things for a year or two. PulseAudio? Took 4 years to be stable. systemd? 3 years at least. NetworkManager crashed for 6 good years, and still can't work decently with sleep mode.
We manage to provide features because the Linux kernel devs are incredibly competent. They also limited the bloat to a manageable stack on their side. But around that, it's the wild west.
If complexity's got you down, you might have better luck with OpenBSD. Coming from Linux, you'll be amazed by how simple it can be and how much of it just works.
Though non-intel graphics are still shit on it afaik.
> Case in point: every time we touch something to improve it, we break things for a year or two. PulseAudio? Took 4 years to be stable.
Honestly, it seems to me like what you're complaining about is the nature of FLOSS development i.e. we do it in the open and collect feedback from users rather than spending billions we don't have on focus groups.
Also, I remember early PA days and it certainly did not take 4 years to be usable, but it did take Ubuntu about that time to get it right. That's however a problem of holding up Ubuntu as the Linux distro, which is honestly a whole separate rant I could get into.
As for systemd and 3 years, I am honestly not sure what you're talking about. I've been on it since 2012 and it has been mostly smooth sailing since the beginning.
People aren't in denial. People are by and large aware of the issues an average person experiences when using Linux, and of the resources (or lack thereof) available to reverse-engineer an entire universe of unfriendly hardware vendors' work so that Joe Blow can take his $399 Walmart special and install Linux on it.
If people want linux on the desktop to offer a more polished experience the only way about it is for everyone to put their money where their mouth is.
If "Linux" were a company we would be justified in demanding a fully finished product before buying but free software is a resource that already benefits billions even if they only interact with it via android, or web services.
"As the author points out, regression failures are a big problem. The sheer bloat of Linux has made it unmaintainable. And who wants that job? Big chunks of important code are abandonware."
This seems to be unsupported supposition that we are supposed to take as received wisdom.
Sorry, but chances are the $399 Walmart special uses an Intel chip with integrated GPU and, because of that, will work out of the box. Perfectly.
Lack of hardware support happens mostly on the other end of the spectrum, with very high-end graphics cards no kernel developer has ever seen from up close and multi-card setups that are essentially unique. I feel sorry for the people who need that but unless hardware manufacturers start to properly document their stuff, make it available to Linux kernel developers and START PAYING MONEY to have drivers developed (like vendors do with Windows) it'll not improve.
You mean people with economic pressure to be profitable?
As a Linux user, I know I represent a fraction of the market. I'm grateful when people invest in us, because I know it's a great move in principle, but it's not always one money-wise.
Building hardware is HARD. Selling is HARD.
Being dismissive of the people who aren't able to provide Linux support is not going to win them to our cause.
Unfriendly is assholes who ship hardware that neither follows standards nor bothers to ship with documentation. I'm not dismissing them, nor will we win them over to any side, because they aren't even part of the conversation; they only converse with OEMs buying millions of units.
> Selling more hardware is usually good for a hardware manufacturer. So is having fewer people returning their equipment because it doesn't work.
It all depends on the ROI. In hardware, economies of scale are at play, and they don't go well with niches.
> Most kernel developers would be ecstatic just with proper documentation of the hardware being sold.
I agree. That's mostly a matter of culture. Many companies won't publish docs, because they are either afraid of competition, pirating, or looking stupid.
The nature of open source development is that there are duplicates. The distribution usually chooses the better option from the alternatives and makes it the default. On the GPU driver issue you are raising, what is the problem? Fedora comes with nouveau out of the box and you can install an alternative very easily.
You may have a gripe, but proliferation of alternatives is an unusual gripe; there are multiple browsers, music players, video players, file editors and even multiple DE options.
I guarantee if there were only one music player option there would be a lot of unhappy people and a new project started immediately.
I've been using Linux on the desktop since Mandrake was a thing; if there's one thing I'm tired of it's having the "choice" of five broken implementations of an app rather than just one that works (the reverse is also annoying, when one broken monolith crowds out five working predecessors).
As someone who has also used Linux since Mandrake, I am tired of people telling me there should be just one desktop, file manager etc. There needs to be competition in FLOSS also; nowadays I use both major desktop environments, depending on the hardware, and mix and match components. For example I prefer Okular even on GNOME.
If people so desire a 'unified experience', you have Windows/macOS. I came to Linux as a refugee from these and am tired of the attempts to pull it in the same direction. Not everything has to be the same, in fact that's a terrible world to live in.
I am also tired of people saying things on Linux are "broken"; they aren't any more broken than on macOS/Windows. Granted, you may have to get compatible hardware, which is only fair considering that's what you're doing when purchasing a Win/macOS machine, and yet on Linux there's somehow this grand expectation that any random crap HW should just work. You don't expect anything not designed with macOS in mind to work there, so why Linux?
I use macOS at work and experience not-so-rare kernel panics. There was also just the bug in Premiere blowing up speakers on the MBP, being allowed to log in without a password, APFS logging the encryption password in plain text, etc. Yet somehow no-one is as strict about that as they are about insisting Linux somehow 'doesn't work', even as I am sitting here having been productive on it for over a decade.
I've been using Linux since about the same time, and while I find the five-broken-implementations thing rather exhausting at times, I am more than thankful for it. It's one of the reasons why Linux (and FOSS software) is so damn useful.
Just look at Windows land, where they do have one implementation to rule them all, more or less (Windows 10). Everyone who dislikes its telemetry or almost touch-only interface is either stuck with Windows 7, which won't be an option for much longer, or is stuck venting against Microsoft and grumpily installing hacks like ClassicShell to make things a little more bearable.
When Gnome 3 came out, everyone who liked the new direction kept using it; everyone else moved to Cinnamon or Mate (or XFCE, KDE...).
It's not just about competition, it's about being able to pursue different visions and different objectives.
Users only see this in terms of choice, but there's a great deal of value about it for developers as well.
I'm using Gnome shell since the early days and, so far, I'm very happy with it. But, then, I also use macOS on my Macs and I'm very happy with them too. All of them run everything I throw at them just fine.
I find the lack of options to customize GNOME irrelevant - I'm way past the day I cared about the wallpaper or the icons or the colors of the window chrome. I pay attention to what's inside the window, not its border.
The other day I fired up a Solaris 10 VM so I could give Wikipedia a proper Solaris/CDE screenshot and I was surprised it's actually still usable - the terminals are responsive and the rest, oh well... You don't want to use a GUI to copy files, do you?
Pretty much this, I'd say. I know people who like its design and find it very comfortable. I'm definitely not one of them but fortunately no one heeded the calls to just build one desktop environment to rule them all :).
I wish that there would be one unified API for creating desktop programs on Linux. Right now it's somewhat coalesced on GTK/GNOME and Qt/KDE, though there are a number of others out there.
I use Linux in a VM for very hobbyist level embedded development (think Arduino and the like). Driver problems are non-existent, all of the technical problems are non-issues in this environment. The problems that I see are all to do with the lack of a common set of services for building a GUI application.
Why do my text editor and Arduino IDE use different file pickers? It's because my text editor uses the KDE API, but the Arduino IDE uses something else. GIMP uses yet a different file picker from the other two. LibreOffice uses yet another file picker, that's similar to Kate's but slightly different. I'm sure that installing Atom and VS Code would introduce me to two more file pickers.
The reason for this is that each of these programs uses a different GUI toolkit and, as a result, has a different concept of what a file picker needs to look like. Some of them don't even agree on which order the Open and Cancel buttons should be.
Network transparency is another thing that suffers from this. On Windows, you can basically use a UNC path (\\server\share\path\to\file.txt) almost anywhere because the entire system from the file picker all the way down to the file APIs knows about UNC paths. In Linux, KDE apps do this one way, GNOME apps do it a different way, and command line tools need you to somehow mount the target server before you can even think about it. I last seriously used Windows about 14 years ago and I still miss this greatly.
None of these are insurmountable problems, but it needs someone to make a decision about the one true way to do things.
>someone to make a decision about the one true way to do things.
Things don't really work this way in free and open source development. There is no one person to make decisions, consensus is reached when the quality of something raises "above the bar" and actually improves things for all involved parties. If someone wants there to be an über-library that serves everyone's use case then it's up to them to go and do the work to build that.
And it has been getting better in this regard. For example KDE and GNOME used to have their own IPC, multimedia & audio mixing backends, but now both have converged on DBus, GStreamer and PulseAudio, in part because these were intentionally built to be flexible low-level solutions. I'm sure there are more examples of this too but those are the first that come to mind.
You're absolutely right. I wonder if something like DBus and PulseAudio could happen with my UNC pain point.
With the assumption that the goal is for "vi //server/share/file.txt" to work the same as "notepad.exe \\server\share\file.txt" does on Windows, here are my thoughts.
First off, notepad.exe doesn't really care about the fact that it's a UNC path. It just opens the file with CreateFile (either CreateFileW or CreateFileA).
There would need to be replacements for the libc file functions. These could be a shim in front of libc, or baked right into libc. Note, there's a LOT more needed than "just" new file functions - any functions that do anything with paths need to be looked at. Shells would likely need some changes to work properly, though it's not like the Windows shell can truly do much with UNC paths - copying files to/from works, but you can't cd into them.
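A very rough sketch of what that shim could look like (my own illustration: it assumes a made-up /run/unc mount root that something else keeps populated, and it ignores credentials, stat()/opendir()/rename() and friends, and the //-prefix ambiguity mentioned elsewhere in this thread):

    /* uncshim.c - build: gcc -shared -fPIC uncshim.c -o uncshim.so -ldl
     * use:   LD_PRELOAD=./uncshim.so vi //server/share/file.txt       */
    #define _GNU_SOURCE
    #include <dlfcn.h>
    #include <fcntl.h>
    #include <stdarg.h>
    #include <stdio.h>
    #include <string.h>
    #include <sys/types.h>

    int open(const char *path, int flags, ...) {
        static int (*real_open)(const char *, int, ...);
        if (!real_open)
            real_open = (int (*)(const char *, int, ...)) dlsym(RTLD_NEXT, "open");

        /* open() only takes a mode argument when O_CREAT is set. */
        mode_t mode = 0;
        if (flags & O_CREAT) {
            va_list ap;
            va_start(ap, flags);
            mode = va_arg(ap, mode_t);
            va_end(ap);
        }

        /* Rewrite //server/share/... to /run/unc/server/share/...,
         * assuming the share is already mounted there. */
        char buf[4096];
        if (path && strncmp(path, "//", 2) == 0 && path[2] != '/') {
            snprintf(buf, sizeof buf, "/run/unc/%s", path + 2);
            path = buf;
        }
        return real_open(path, flags, mode);
    }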
How does it ask for credentials? If it's via DBus, a desktop environment could provide the authentication prompts, but what about a pure-commandline system? Maybe the transport is just SSH and relies on the existing public key authentication? But what if you're just doing a one-off thing and don't want to set that up? Using SSH is probably a decent idea since it's got authentication, security, and a file transfer protocol already built in.
On top of all of this, when you open //server/share/file.txt for writing, what does that actually mean? Is there a file descriptor? How does that work with the kernel? Does libc now manage all file descriptors with only a subset corresponding to kernel file descriptors? Could a pure user-space solution fake this well enough to actually work? Would this need to be a FUSE filesystem along with some daemon to automatically unmount the remote servers when the mount is no longer needed? Would it be something like the automounter, just a lot better? Does a kernel need changes for any of this to work?
This is one of those things that touches so many layers and potentially interacts with so many parts of the system, potentially all the way down to the kernel.
My guess, and I don't actually think this will happen, is that Apple will do something like this on Mac OS X and have a reasonable mapping to the BSD world underneath, then someone in the Linux community will come along and do something similar in a way that's better suited for Linux. As a parallel, Apple came out with launchd in 2005 to replace init scripts, systemd made an appearance in 2010 - both do very similar jobs, with launchd tailored to the needs of MacOS and systemd tailored to the needs of Linux. Maybe something similar could happen with UNC-like file sharing.
All that has been doable for quite some time, you could mount SMB shares like that with smbfs since early releases of Samba, and later with the CIFS fs driver. You do need root to mount things that way, so it isn't ideal.
For the more complicated stuff it can be done but not everything is available via a simple GUI. GNOME and KDE have their own virtual filesystem layers in userspace, GVfs and KIO, I don't know what KIO does but GVfs supports a bunch of network backends and has a FUSE driver that can mount its own virtual filesystems and expose them to outside applications. So the features are there but I don't think they are well-presented right now, maybe someone can prove me wrong though.
It would have been nice if the kernel had better support for fine-grained control over filesystems like HURD or Plan 9 do. But instead it was decided that it was better to handle those things with userspace daemons, so that's where we are now.
These aren't the same thing though. The GNOME and KDE VFS layers only apply for applications written for those APIs. It's not a universal thing.
Being able to mount a CIFS filesystem is fine, but it's not the same thing. In Windows, you can basically use a UNC path anywhere because CreateFile knows how to deal with it. The point is that you don't need to mount the remote filesystem (the Windows-equivalent being mapping a network drive).
What I'm really looking for is the user experience, not the underlying protocol. On Windows, I can just go "notepad.exe \\server\share\file.txt" and edit the file, on Linux I need to either use a KDE application or go through the ceremony of mounting the remote filesystem. It's the fact that the feature is silo'd into GNOME and KDE (and the fact that it doesn't even exist on Mac OS, but that's another issue) that bugs me.
There is currently no kernel interface that I know of to do that, and I don't think it would be too hard to hook into an open() on an invalid path and try to do something (mount a network fs, call out to GVfs or KIO, etc), but I can tell you you will meet resistance if you try to because things like "//stuff" and "smb://stuff" are already valid local file paths in Linux. So I leave it up to you to figure out how to do this without breaking things.
Yeah, this is definitely not an easy problem to solve given the design of Linux.
I don't know why I didn't remember this earlier, but I actually explored this a number of years ago and came up with two things that are close, but not quite there:
First was to use a systemd automount unit[0], but I didn't really get anywhere with it. From the looks of it you have to know all the possible things you could want to automount, it can't do wildcards. Being able to do some kind of pattern matching on the requested path and translate that into a mount command would go a long way to making this work.
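To make that limitation concrete: as far as I can tell you need a static pair of units per target, roughly like the following (server, share and paths made up), and there's no way to template it over arbitrary hostnames:

    # /etc/systemd/system/mnt-fileserver.mount
    [Unit]
    Description=Example CIFS share (made-up names)

    [Mount]
    What=//fileserver/share
    Where=/mnt/fileserver
    Type=cifs
    Options=credentials=/etc/cifs-credentials

    # /etc/systemd/system/mnt-fileserver.automount
    [Unit]
    Description=Automount for the example share

    [Automount]
    Where=/mnt/fileserver
    TimeoutIdleSec=60

    [Install]
    WantedBy=multi-user.target

    # enable with: systemctl enable --now mnt-fileserver.automount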
I also explored the good old automounter[1][2], but it has a lot of the limitations that systemd's does. It does have the advantage of supporting host maps, which gets me a bit closer to what I'm looking for. The unfortunate thing that remains is that this is NFS instead of a modern protocol. If this were somehow backended on sshfs, I suspect it would be quite useful. Of course, sshfs is missing the concept of shares but that's not a showstopper by any means. Authentication becomes a problem since the automounter probably can't ask the user for a password, and may not even know which user is requesting the mount.
I have no idea how well either will work in practice. Modern Linux on the desktop is a very different environment than the one the automounter and NFS were built for. The systemd automounter looks like it serves a very specific purpose and can't currently do what I want.
Maybe all we really need is a modernized automounter and/or some extra features in systemd's automounter. These could lead to "vi /net/server/share/file.txt" working as expected which, quite honestly, is basically the same as what I suggested earlier.
> I also explored the good old automounter[1][2], but it has a lot of the limitations that systemd's does. It does have the advantage of supporting host maps, which gets me a bit closer to what I'm looking for. The unfortunate thing that remains is that this is NFS instead of a modern protocol.
What limitations affect you?
(At home, I have linux running on an HP MicroServer as my NAS, it exports filessytems via NFS. Other machines run autofs with the hosts map, so for example my wife's desktop - and mine for that matter - auto-mounts NFS shares on-demand and she can open any file directly in any application by accessing /net/$hostname/$path).
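For reference, the hosts-map part of that setup is basically a one-liner in autofs's master map (the timeout value is just an example):

    # /etc/auto.master
    /net  -hosts  --timeout=60

    # after restarting autofs, any export a host advertises shows up
    # on demand under /net/<hostname>/<export-path>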
NFSv4 is pretty modern ...
I believe this should also work for CIFS, if the server-side supports unix extensions (to do user mapping on a single connection), but I haven't had time to try it in the past day in my limited time at home.
> Authentication becomes a problem since the automounter probably can't ask the user for a password, and may not even know which user is requesting the mount.
If you have Kerberos setup, NFSv4 does the right thing ...
If you don't have Kerberos setup, then you're probably ok with just normal NFS user mapping.
Interesting, I'll have to give automount another look.
The last time I tried it was years ago, so I can't remember what limitations I found. If I get a chance to do this in the near future I'll report back.
gvfs does some of what you ask. I guess you could trick open() with LD_PRELOAD.
For the dbus/polkit authentication prompts, I've seen it work on the command line but have no idea how it works. If anyone wants to donate, I'll spend a day and half a bottle of good whiskey and come out with a blog post.
> In Windows, you can basically use a UNC path anywhere because CreateFile knows how to deal with it. The point is that you don't need to mount the remote filesystem (the Windows-equivalent being mapping a network drive).
> Network transparency is another thing that suffers from this. On Windows, you can basically use a UNC path (\\server\share\path\to\file.txt) almost anywhere because the entire system from the file picker all the way down to the file APIs knows about UNC paths. In Linux, KDE apps do this one way, GNOME apps do it a different way, and command line tools need you to somehow mount the target server before you can even think about it. I last seriously used Windows about 14 years ago and I still miss this greatly.
I just use NFS and autofs. Sure, it's a few seconds more effort to set it up, but it's a once-off cost.
In recent years, Windows has become a mess in the UI space. Mac OS X has fared a little better, but it's also becoming a mess.
It makes me sad. Years ago programs written for either operating system tended to follow the UI standards pretty well, with the main exception being games. Microsoft started to try new things with Office, so if you wanted to see where the standard was going you just had to look at where Office was.
> I'm sure that installing Atom and VS Code would introduce me to two more file pickers.
I’m sure they will both use the same one.
I don’t like Electron because slow and consumes too much RAM, but I have to admit they do have that unified API. It’s quite high-level, and relatively stable because there’s only a single implementation.
How about a list of all the things that really work great?
I've been using Linux on the desktop for nearly 20 years and I'll have to say it's fantastic, despite the occasional headache, which seems to happen far less frequently than on other major desktop operating systems.
I agree, I read the present list as "people with those concerns should not be using Linux". For example, if you require Windows-only software (Photoshop) then stay on Windows. I mean it's just not practical for every software vendor to support multiple OSs.
IMO Linux on the desktop is in a remarkably fantastic state. There are A LOT of really great distros and software that just works. My daily driver is a 7-year-old Chromebook with Xubuntu 18.04, and I do Java development on this thing!!!
I've said for years that Linux works for grandmothers, and for me (and a bunch of others).
It doesn't work for people who:
- need a locally installed copy of MS Office or other Windows only software
- IT admins that stopped learning a long time ago
- etc
As others point out, for some of us using Windows or Mac is a hassle. They're slow (30% longer compile cycles, don't even get me started on git), missing important customizability, no built-in universal package management, etc.
Windows has abysmal performance in its equivalent of the VFS layer i.e. path handling, directory listings and so on.
Funnily enough the main reason is probably that compiling the Linux kernel is one of the most filesystem taxing workloads and also one of those kernel developers care most about.
Not the person you're replying to, but I fall in the same boat as them as far as Linux experience goes.
And for me, I find I have way more trouble doing things on Windows/Mac than Linux. I think it's really more a way of thinking about how a "desktop OS" is supposed to work. People coming from Windows expect things to work the same, and that's just not the case.
Likewise, when I unluckily find myself on some closed-source box, _very little works how I expect_. And man is troubleshooting harder, because there are so many "surprises."
My point is, I think blaming the operating system is not the answer - users need to adjust their expectations and open their mind a little.
This is a somewhat poor analogy, but it's sort of like a Chinese citizen (closed-source user) becoming a citizen of a democracy (open-source user). The government is going to work differently, and you can't claim democracy is broken just because it's so different from authoritarianism.
"This is a somewhat poor analogy, but it's sort of like a Chinese citizen (closed-source user) becoming a citizen of a democracy"
I like that analogy (even though I am still looking for that pure democracy/open source government).
In Linux you have the freedom to do almost anything with the system, but you have to know what you are doing, as the system usually does not stop you when you are about to do something stupid.
Windows makes me mad when it tries to manage me. Like "Yes, I really want to use this computer without a firewall or antivirus, because it is not connected to the internet and never will be, because it serves another purpose."
To do this you need to mess with obscure registry settings; the default behavior of Windows is to enforce it, and nowadays also updates, because most users don't know or care what they are doing and are used to being told what to do.
So I believe it is good that I can do anything with my system, but everybody started as a newb once, so a more beginner-friendly version could be helpful.
But Linux's main problem is hardware support, and fixing a broken audio/graphics/wifi driver is something which can drive away very experienced people. (It drove me to ChromeOS for my laptop.)
I definitely do agree that "onboarding" could be improved. How I dunno. To me at least, it seems like I hear a lot of success stories from the tails of the spectrum - power users and developers on one side / the complete opposite on the other. And then for everybody in the middle, there's no other way to put it than it's almost a shit show:
On the software side there are a million and a half different ways to do everything, and often an insane amount of "noise"/outdated info that needs filtering through to find what's relevant to your specific needs. Even at the lowest levels of the stack there is no "the one way", and I think all that uncertainty (especially from the beginner perspective) can make it feel like climbing a mountain.
Hardware, as you mention, is tricky if you don't know what to look for (and why would most people). At least from a longtime Linux user's perspective, it's incredible how much better things have gotten (since the 2.2 days in my case). But there's a ways yet to go, and it's by far the roughest where it's the most visible (ie the trendy bleeding edge). Part of that is just the nature of "lag" in open source development between code getting written, released, and finally showing up in your distro. That cycle can sometimes take 6 or 8 months, especially for hardware :(
Not that this helps users with existing hardware, but
* definitely always google before you buy (model name + "linux" and read the first page or two of results)
* stick with a non-high-DPI resolution screen
* WiFi, I've had the best luck with Qualcomm/Atheros, Intel, and Realtek (in that order)
* Graphics, get AMD. NVidia cards can work well enough with their proprietary driver, but the out-of-the-box experience is crap. Intel works great too, as long as you don't need it for anything heavy.
* Audio, for me the last time I had trouble was with one of the earlier Sound Blaster Audigy cards. Have stuck with onboard codecs since and honestly never had a problem.
I actually thought about that analogy for a while before, but rather used anarchy/libertarian vs. authoritarian/dictatorship ...
(basically the same point, only more radical)
Anyway:
"On the software side there a million and a half different ways to do everything, and often an insane amount of "noise"/outdated info that needs filtering through to find what's relevant to your specific needs. Even at the lowest levels of the stack there is no "the one way", and I think all that uncertainty (especially from the beginner perspective) can make it feel like climbing a mountain."
Yes. Even for simple things like a screenshot, there are a million ways. Not a problem in itself, but I came from Windows, where this is just the "Print" key, and I did not think there could be a reason to do it differently, but on some distros it is. I ran into it a few times: "Print" did not work, so I googled:
You want to do a screenshot? No problem, just install this via terminal, or this, or type in those commands and there you go..
WTF? I just want a screenshot? How is this not standard?
Now this seems to be mostly solved; on XFCE it even asks me what to do with the just-taken screenshot after I hit Print (save, view, ...). (Oh, and in general, I really love XFCE.)
But unfortunately:
"Not that this helps users with existing hardware, but
* definitely always google before you buy (model name + "linux" and read the first page or two of results)"
this is not for newbs either. Newbs do not know the difference between a GPU and a CPU.
And they certainly do not order single components to put together their PC.
Newbs need a company who does that for them: compose a PC/laptop from components that are supported and work well together, which is what Purism does.
But suddenly we are not on the mass market anymore ... and we see the price difference.
So the problem remains complicated, with no easy solution.
Sorry, this is impossible to do once you tried HiDPI. The difference is overwhelming; I consider non-HiDPI screens an obsolete technology like CRTs.
I do use Linux desktops; thankfully, the HiDPI support is much better these days than it was even 2 years ago. Both Gnome and KDE work relatively fine.
Aye that sucks. Regressions are the worst, knowing it used to work. I had to look up that platform, knew it was getting old... and primarily for netbooks... I don't think I'd hold my breath for it getting fixed by AMD at least :(
I'm tech-savvy, but I use Linux because I'm tired of troubleshooting constant Windows problems. Maybe the OS-X world is better? But I've had the same DE for over fifteen years and nobody's tried to make it touch-friendly or stick ads in it.
The only thing that ever crashes is the web browser. I keep my computers no less than five years and they run just as fast as new, usually physically failing rather than becoming computationally incapable due to anti-virus and bloat slowing a system down.
Sure, it helps to be tech savvy when installing any operating system. I'm not sure that I'd agree that most of us enjoy troubleshooting, it just goes with the territory... always has even on Windows and Mac. The tech savvy and troubleshooting tend to come into play with non-standard / niche / unsupported hardware configurations and/or running bleeding edge / complex configurations. Most desktop and laptop configurations aren't that.
The main thing a non-tech savvy user needs to worry about when considering Linux is to generally understand that hardware support lags a bit. So they should do a few web searches on the make/model of hardware they want to use + 'Linux support' before they dive in. If they don't see page after page of glowing success stories, they have their answer and should steer clear of that configuration for the time being. If they do see lots of success stories, read a few of them to see if their eyes glaze over at what is written or if it seems pretty straight-forward and they can follow what's being said. No tech savvy required.
I'd contend that many of us tech-savvy folk still don't like Linux, and not just because it is different, but because it has significant problems that make it more of a headache for our workflows than other OSs. I wish that weren't the case. I'd love to be using an open OS, but Linux's ways of doing things don't mesh with how I work.
And then there are those of us that really can't work on a Windows machine because it just doesn't mesh with how we work. With Windows Subsystem for Linux I can finally at least get some work done on Windows but honestly it still feels like swimming with my arms tied behind my back.
I have a friend who was complaining about Windows 10 so I set him up with Ubuntu. He's about as dumb as it gets with computers and he has never had an issue with Linux. The few things he's installed have been in the software store and he clicks the apt update once a week. He's much happier and less frustrated with Linux vs Windows.
Cool beans mate. I think you are describing someone actively updating their system because it isn't too onerous contrasted to it being a good idea (risk:reward).
Many, many years ago a decision regarding Windows' software management was made, and ever since, it has meant that updates take sodding ages, sometimes require multiple reboots, and are generally unpleasant. One day that will be fixed - it is not normal.
I assume they're not comparing their own issues with Linux vs. the issues of less tech-savvy people with other OSes, but vs. their own experiences with other OSes.
Corrupted GRUB sounds like a hard drive failure. Regarding NIC and graphics drivers: it is possible to find hardware with good Linux support, but it is probably impossible for volunteers to fix every possible issue with closed-source or proprietary software/hardware. Consider choosing accordingly.
If you want someone to select the hardware for Linux compatibility, consider buying hardware that comes with Linux.
None of these have ever happened to me, and I started with slackware. (Maybe once around y2k a nic wasn’t supported but after waiting six months it was.)
Windows dual boot specifically likes to corrupt it, or has in the past. I've periodically checked in on nix desktops over the last 20 years: Red Hat, CentOS, Fedora, SUSE, SLED, Mandrake, Slackware, Gentoo, Gentoo on an SGI O2, Debian, Ubuntu, Mint, OpenBSD, FreeBSD, DSL, Puppy Linux, Solaris, OpenSolaris, IRIX, Arch, OS X, the venerable Hackintosh, and last off the top of my head, BeOS, which was somewhat POSIX compatible if not a nix.
So, the crap that sucked in 1998 is the same stuff that sucks today: inconsistent clipboards; graphics driver/X11 support; multi-monitor support and debugging/positioning issues; poorly documented or improperly configured out-of-the-box network management, firewall and similar tools; boot loader failures and (more significantly) recovery; inconsistencies between Qt, GNOME and KDE apps; graphics subsystem freezes; Pulse or whatever sound system of the month suddenly failing one day; DVD playback; mdadm failures; drive partitioning and resizing difficulties; filesystem corruption (sure, it's disk-based, but Windows is less prone to it on the older filesystem types; ZFS, XFS, etc. are nicer).
If you want to compile an application, run a server, etc., nix beats out Windows any day. If you want to be able to install this great new Linux thing you heard about on your existing computer, surf the web, manage your photo collection, hook up your scanner to copy in those old pictures of your kids, set up your Nvidia card and play some games on Steam, or find and install the latest or a specific version of an app without hitting the command line and typing out things like madison, then Linux is not the desktop for you.
Edit: and don't get me started on high-end Xeon and Intel chip support/SpeedStep handling, or version upgrades running successfully.
Note that the same author also lists all the problems with Windows 10 (http://itvision.altervista.org/why-windows-10-sucks.html), so at least he's being even-handed. Think carefully about the pros and cons, and then choose one (or maybe both, if you are willing to dual-boot or use a VM).
I feel like a lot of the usability quirks that Linux has are trying to shoehorn a multi-user system into a single-user context. For example, there is so much work done (and even complaining in favor of doing that work in this article) to make it possible for multiple people to sit at the same computer. Nobody does that! Most people have more than one computer! Why do people spend their free time on that use case?
This could be a long rant, so I'll keep it short... but someday I'm just going to rip the concept of users out of Linux and see what it looks like. Oh no, you say, malware will get you! Unlikely. Malware running as my user can fuck over my life just as easily as malware running as root. So why even pretend that that's a good isolation model? It doesn't prevent any attacks.
(As for how Linux in 2019 is doing... I recently switched back to Ubuntu for a desktop. Whenever I lock the screen and have DPMS enabled, it forgets that I have two monitors and that I want 200% DPI scaling when it wakes back up. What? Back in my day you had to hard-code the resolution and monitor configuration in the X11R6 config and there was no way to change it without restarting the X server. May I please have those days back? At least once it started working, it kept working.)
I dunno, I would like to be able to have a restricted account to give to my daughter. Same for mobile devices - she's too young to have her own, but I would like to give her my iPad to run YT Kids, for example, and only that, so some kind of multiuser-ness is needed, but a different one. I'd say SELinux tries to achieve that, but it is far from being usable.
In the 90s, when our desktops didn't have user accounts, we used third party software to lock people into restricted environments. It wasn't perfect, but neither are user accounts.
That is orthogonal to Linux's user system, though. That model says things like "your daughter can have 25 file descriptors". It is not useful for this use case.
Right? The only reason user accounts exist at all in Desktop OSs is that all of them today were originally server OSs. Placing restrictions on user accounts is only useful for protecting the system from users, which is a valid concern on a network with shared resources but worse than useless on a personal desktop.
Mobile OSs got this right: on a personal device, the permissions model should be applied to the applications.
> Placing restrictions on user accounts is only useful for protecting the system from users, which is a valid concern on a network with shared resources but worse than useless on a personal desktop.
True for home users; not necessarily true for corporate users-- where computers are IT-managed (i.e. "don't let end users fuck them up") and may be shared (which is highly situational-- the degree to which computers are shared varies highly from company to company, or even deployment to deployment).
Heck, it's not even unheard of to end up with multiple "simultaneous" users on a single-seat desktop machine-- every major OS these days supports some form of fast user switching, which will leave one user's programs running while another user's physically sitting at the machine.
A few things, since I work in IT: we don't give a damn about your workstation. It's a fungible resource. Reimaging is easy and relatively quick. We even let you have local admin because who cares. Users are not prone to playing around with settings they don't understand, in my experience. If someone was constantly needing their workstation reimaged we'd probably just fire them for being incompetent. Ideally, the OS would be completely separate from the applications and configuration and be immutable, and that would go a long way towards eliminating those kinds of problems.
We managed to share home desktop computers in the 90s without significant problems, even though the OSs we used didn't support multiple user accounts at all. And there's no reason you need user accounts to accomplish what you're describing. You can still have profiles (preferences, application configs, etc), and you can encrypt them with a passphrase if you have any reason not to trust others using the same device.
> Heck, it's not even unheard of to end up with multiple "simultaneous" users on a single-seat desktop machine
A vanishingly small use case inside an already vanishingly small use case.
Not everyone lives in a first world country. I live in Central Asia, and $200 a month is a decent salary here. And it is normal to share a computer between different family members.
I still don't think users are the right model. For example, look at how Amazon allows multiple users to access one of their shared computers (it's a service called EC2). They don't use Linux users. You get root and the other people using that machine are protected from you.
I believe there is now an option (or maybe it's the default) in Windows to run IE under a hypervisor, to totally isolate it from the local machine. This is moving in the direction of providing something useful.
Though to be fair protecting the OS doesn't make much sense to me. I guess it's nice to guarantee that your computer will boot no matter what you do to it, but again, that is not the problem people are actually facing.
The corporate threat model involves things like protecting people from getting an email that says "click this OAuth button to give this malware access to your email". None of the critical software is running on a user's workstation, so whatever is going on there doesn't matter.
> Mobile OSs got this right: on a personal device, the permissions model should be applied to the applications.
I cannot agree. If I want to run another browser instance, I cannot do that on a mobile system. Maybe it would be possible with some support from the browser's developers, but with a multiuser system I need no support from developers: I can create a new user and run another browser instance that believes it is the only browser running.
It is not just browsers. I can easily experiment with program configs, for example. Something doesn't work, and I want to check whether it's due to the application's config or its installed plugins. All I need is to create one more user and start a program instance as that user.
Android relies on SELinux, and SELinux allows much more than the old user system, but SELinux is a much bigger headache when you are trying to use it in a way Google didn't intend. So in practice SELinux on Android lets me do nothing; I can't even run an app that requires access to the contact list without actually allowing it to access the contact list. It would be nice to create another user on Android with an empty contact list and run that program as that user. Moreover, I'd like to create a user with a faked contact list, faked browser history, all the private information faked, and run most apps as that user.
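A minimal sketch of that "second user as a sandbox" workflow, assuming an X11 session and sudo rights (the account name testuser is made up):

    # create a throwaway user with its own empty home and config
    sudo useradd -m testuser
    # let that local user talk to the current X server
    xhost +si:localuser:testuser
    # run a second, fully independent browser instance as that user
    sudo -u testuser -H env DISPLAY=$DISPLAY firefox --no-remote

Wayland sessions and audio routing need extra steps, but the basic idea is the same: the second instance sees its own home directory, its own config, and an empty profile.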
Nothing you're describing is a problem with the model, just the implementation. All of those things are easily solvable by being able to choose how the program sees the world. You can do it today with the various namespaces in Linux.
Yes, I agree, it is a problem of implementation. But nevertheless it is a problem. I haven't tried namespaces myself, but I'm pretty sure that doing this in Linux would be harder for me to get working, even once I became familiar with namespaces.
The multiuser model is a settled, simple and transparent model that just works. You have the abstraction of a user and the abstraction of file access rights, and that is all you need.
Namespaces have no such simple model. How can I run a program as another user but give it special rights to access this git repo in my home directory? Do I need to write a special C program for that? Or maybe existing tools can already be configured with some obscure XML file? I do not know, and maybe I'm mistaken, but knowing the general laws of Linux software development, I'd guess the best software I can find for it is a complex, overengineered corporate tool with bells and whistles, and that the easiest way to use namespaces in my case is to write a C program. (If I'm mistaken, please correct me, at least by stating my mistake aloud, or better, point me to the docs.)
And here we come to the real issue. To write a good C program for my tasks, I need to start thinking as a software designer, to invent a new, simple model of process separation that lets me solve 90% of my tasks with ease and the rest with some headaches, but where everything is possible. The only way to do that in a week is to refuse to think and replicate the multiuser model on top of namespaces. But I need no replica of the multiuser model, because I already have one. What's the point of discarding the multiuser model just to move to another implementation of the multiuser model?
The only good thing I see is not needing to log into a root shell to create or delete users and groups. That would be nice, but I'm not ready to spend a week writing a C program and then an unknown amount of time maintaining it, just to stop using su/sudo for such tasks.
So, I can agree that the multiuser model is bad for a Linux desktop. But we have no real alternative. And the mobile OS approach is the worst. It reminds me of DOS, where you can work with one process at a time, where you cannot run two copies of a program, where everything is done in a single global namespace, and any process can do anything it wants. The only choice you have is to run the program or not.
> Namespaces have no such simple model. How can I run a program as another user but give it special rights to access this git repo in my home directory?
In Linux terminology, launch the program in a new mount namespace with a rw-bind mount to your home directory. You can do this with firejail, bubblewrap, or minijail easily and without a config file.
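For example, a rough bubblewrap sketch of "everything read-only except this one repo" (paths and the program name are just examples):

    # read-only view of the root fs, fresh /home, one repo bind-mounted read-write
    bwrap --ro-bind / / \
          --dev /dev --proc /proc \
          --tmpfs /home \
          --bind "$HOME/my-repo" "$HOME/my-repo" \
          --unshare-pid \
          some-program

    # roughly the same idea with firejail: only the whitelisted path is visible in $HOME
    firejail --whitelist="$HOME/my-repo" some-program

Not a definitive recipe - the exact flags depend on what the program needs - but it shows that no C program or XML file is required.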
>The only reason user accounts exist at all in Desktop OSs is that all of them today were originally server OSs.
> "We managed to share home desktop computers in the 90s without significant problems, even though the OSs we used didn't support multiple user accounts at all."
That's not true. The Windows 9x series had user accounts (with no security between them). This was beneficial to users because computers were expensive (and still are to most people), so personal computers very often weren't personal. Having separate accounts, even without security, allowed individual users to configure the system to their personal preference and helped with file organization.
Those aren't OS-level accounts, they have no (local) permissions system applied to them, they're just profiles. Regardless, I can assure you that basically nobody used them in the 90s.
I can assure you, they got a lot of use in the 90s, and they were used for exactly the same reason people still make separate accounts for their family members on their computers today. It's not about security. It's about keeping separate preferences.
Your hypothesis that XP somehow forced the concept of separate accounts on regular home users because NT was used on servers is just bizarre. People who wanted separate accounts were doing it on 98, and people who didn't simply ignored it and all shared one account. The UX for different family members sharing a single computer by signing into it existed before the NT kernel was in use around the home. The implementation changed when Windows went to NT, but the UX did not. And given that the UX of separate accounts was already appreciated by users, the more robust implementation made possible by NT was a no brainer.
>Mobile OSs got this right: on a personal device,
When it comes to a PC, "personal" is a misnomer. Failure to understand that is the root of your confusion. You are presumably at a place in life where your computer is your computer, not shared with others, like your cell phone. But when it comes to PCs, that perspective is a privileged one. It's evidently not important to you that numerous people be able to use your computer, but it is important to others. The UX of the device that lives in your pocket needs to be different from the UX of a device that sits in the middle of your living room for the whole family to use, like a television.
> I can assure you, they got a lot of use in the 90s
Alright, maybe that's a regional thing or something. I knew of no one who did that.
Regardless, even you admit that it was not about security, so why have user accounts? Simply being able to change the profile is sufficient.
> The UX of the device that lives in your pocket needs to be different from the UX of a device that sits in the middle of your living room for the whole family to use, like a television.
I still contend that this is an incredibly tiny use-case today, precisely because mobile devices have largely supplanted the role the 'family computer' used to serve. More importantly, that use case can be served without user accounts.
You got downvoted, but I mostly agree. I think mandatory strict SELinux rules, containerizing all user programs, or similar approaches are now more important for the threat vectors we face today on the desktop. I just discovered Apple's feature of requiring you to press a couple of keys to authenticate a new keyboard. Doing that on GNU/Linux would be very hard.
That dialog (the "your keyboard cannot be identified" dialog) is about layout detection, not "authentication". IIRC, it's also skippable (but I haven't seen it in a long time; I've mostly switched over to Apple keyboards and they don't trigger that dialog).
>Year 2015 welcomed us with 134 vulnerabilities in one package alone: WebKitGTK+ WSA-2015-0002. I'm not implying that Linux is worse than Windows/MacOS proprietary/closed software - I'm just saying that the mantra that open source is more secure by definition because everyone can read the code is apparently totally wrong.
Huh? Those 134 vulnerabilities were found because people can see the code. If it were closed source, they would probably still be there today.
I get that some people don't like linux, but some of these examples are just ridiculous.
Linux is administered by ssh therefore administrators don't know how to check so therefore they don't bother to update systems because "they're afraid that something will break." C'mon.
Linux as the open-sourced work of brilliant software developers wouldn't power most servers if it sucked.
But could designing good desktops need more than just good code?
Good kernels successfully run code.
Good desktops successfully help users. I guess different goals require different designs?
Edit: to clarify, I didn't mean desktops don't require well-designed software. I just had in mind that a desktop also has to take human psychology and human limitations into account.
Wait, you are confounding some things. A software design is good when it allows all of its parts to be elegant and meet the requirements. In no way does that say that a desktop OS is required to be designed badly.
Linux is powering servers and high performance computing because it is good at these things: a mostly static hardware configuration set up once during system installation, high performance, modularity, and the ability to inspect a running system deeply if you are an expert. It ticks all the boxes for these specific environments.
On the desktop, not so much. For example, the concept of device files is hindering use cases that should "just work". When I plug in USB headphones, a new audio device is created. Fine. But I need to enter the device file name or ALSA device string into half a dozen programs to use it. All I would want is to have the audio rerouted automatically. PulseAudio was touted as the solution to that problem, but at what cost? We're now literally stacking audio systems on top of audio systems and sacrificing to arcane gods to have it work.
When I plug in a USB drive, I now have to look up its device file name in order to mount it manually. The software stack required to automount it from a desktop environment is atrociously complex, because it requires root privileges to mount a device not listed in /etc/fstab with a user flag. And because any number of drives can be connected in any possible order, no entries in fstab can be made.
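For what it's worth, the manual route and the route desktop environments actually use both fit in a couple of lines (device names are examples):

    # see which device node the kernel assigned to the stick
    lsblk -f
    # manual route: needs root and an existing mount point
    sudo mount /dev/sdb1 /mnt
    # udisks route (what file managers call): no root, mounts under /media or /run/media
    udisksctl mount -b /dev/sdb1

That second path is exactly the stack being described - udisks, polkit and friends - even if the command itself looks simple.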
This clash of UNIX-like concepts and modern user expectations is what is holding Linux back. The underpinnings are not bad. They were just designed for a different task.
So, yes, you can build a user friendly OS. Yes, it can have a clean design. But it won't be called Linux anymore.
> This clash of UNIX-like concepts and modern user expectations is what is holding Linux back.
It did not stop OSX from achieving exactly what you're talking about, and if you look closely, its core is very explicit about its Unix underpinnings.
In my opinion it's not a "Unix" problem, but a bazaar/scale problem. In the bazaar world, ideally multiple competing solutions would pop up, ideas would be merged, and one or a few would end up on top. The problem is that implementing even a single good desktop system - and I mean top to bottom, not just a DE - would require a staggering amount of resources under a single unified goal and vision. The Linux desktop market is simply not big enough to support even one, never mind multiple competing systems. In the server space, Linux is absolutely massive, and doesn't have this problem.
Just curious, but I had the same issues; turns out I hadn't installed gvfs and the associated handlers for mounting/unmounting drives. Installing it resolved most of my USB connection issues.
I know that there are solutions (I use KDE, so it works differently there). The point I am trying to make is that UNIX device files were never designed to deal with the dynamic hardware configuration we see today, especially on laptops with peripherals getting plugged in and yanked periodically. And some of the solutions, like gvfs, are overly complex user space workarounds. A system that accounts for these dynamic usage patterns would have to look different. But it does by no means have to be ugly code. In fact, it would probably be much simpler and more elegant than the current Linux desktop user space.
I disagree with the phrasing. I'd say that good desktops enable users. One of my big peeves about Linux Desktop culture is that they see users as beneath them. They want to "help" users by wrapping them in straitjackets to keep them from hurting themselves and shining a laser pointer on the wall to entertain them. Have a problem with the Linux Desktop? Well, "normal users" don't do whatever it is you're trying to do, and you're not a C graybeard or you'd fix it yourself, so you don't exist according to their model of the universe.
Sorry, I actually failed to find a better word than "help", I meant: to offer the best experience to its user. And we humans have wildly different skills and expectations. I still think that user-facing software design needs more psychology on top of good code.
"Good kernels successfully run code. Good desktops successfully help users. I guess different goals require different designs?"
A good desktop needs to run code successfully, otherwise every small bug can make a big annoying glitch.
I guess it is more a question of optimizing.
The hardcore linux user uses the terminal and a text editor mainly and kind of despise GUIs. Linux seems to be optimized for them as they are the most active ones using it.
And this use case works perfectly.
GUIs are mostly something that was added because of the "newbs" but doesn't get used much by the core - so they suck, as the core group are the ones who know how to fix things.
This was the situation when I started to explore the linux world and some things changed, but not much.
But Linux's main problems are hardware issues, partly because of Linux's enforced open-source nature, which wants drivers to be open source and included in the kernel. And the traditional industry does not like that approach. And given the small market share of the Linux desktop... they don't really have to.
The only thing about Linux I never really like is the package system.
Like if you want to install new software, you usually don't get .exe or .dmg. If you are lucky, the developer or some fans took the time to package it. Then you can do 'apt-get', 'yum' or 'pacman'. However, packages go stale and sometimes don't match the original author's intent. You can also build from source, but it takes time and you have to know a bit of CLI. It never felt like true freedom to me. But more like whatever the community feels makes sense for whatever distribution's weird dictatorship. Just a feeling, and I still love and support Linux.
I have the complete opposite experience. Everything I use is nicely packaged in Debian/Ubuntu. At most I need to add a new package source and then I get automatic updates. In Windows and OS X you used to need to manually install a bunch of packages from a bunch of locations, with very dubious security. No wonder they've both implemented app stores.
So strange, as I'm the polar opposite. I vastly prefer having everything update at once and being able to check versions, etc. I use Homebrew and Chocolatey when I'm on macOS or Windows respectively to get that experience back.
I can understand that trying to resolve dependencies can be a royal pain, especially trying to find the distro's specific naming conventions, like libpq-dev vs postgresql-libs.
Between Flatpak and AppImages we have good solutions for both people who prefer package managers as well as people who prefer a single file that can be run on double click.
As far as solutions goes, this problem is solved. They just need to be better known by software distributors.
Flatpak is over-engineered garbage. You still can't even install things on different disks without setting up an entirely new 'installation' or whatever they call it. And you still need a repo. Meanwhile I can trivially make most Windows software work from a USB drive I can carry around between computers.
AppImage can do that too, since it is a lot less over-engineered, but sadly very few developers use AppImage, and even distributions like Nitrux that claim to support AppImages don't display icons for them. It could be trivially solved with a standard for embedding icons in ELF, but the unix world hates the very concept of a program that isn't spread all over the file hierarchy and isn't full of hard-coded paths, so it'll never happen.
I'm surprised to find that you have good luck with that; last time I looked, it was a nightmare for anything that didn't explicitly support portable mode because everything expects to save stuff into the registry.
There are several tools to help with that, like the PortableApps.com launcher, JauntePE, or even Cameyo. Programs that just use config files can usually be dealt with without those kinds of wrappers by changing an environment variable. But if you don't mind not bringing the config with you (and it often isn't an issue in my experience), you can usually just drop the folder on the drive and you're done.
But, it works, especially for the use case of the "I need the latest and greatest versions of two packages".
> AppImage can do that too, since it is a lot less over-engineered, but sadly very few developers use AppImage, and even distributions like Nitrux that claim to support AppImages don't display icons for them.
So, use one of the other 20 distros where AppImages just work, perfectly, out-the-box.
Ignoring the issue with the oft-used fallback of distro roulette as a Linux evangelist defense mechanism, it isn't that they don't work, it's that their icons aren't displayed. There is not a single distro, including the ones that actually supposedly embrace AppImage, which displays AppImage icons correctly.
The OS should give you only some essential end user applications (a bare-bones text editor, a terminal, a browser) and then the user should get their specific use applications (DAW, 3D modelling software, game engine, etc) from the application developer, not the OS maker.
I fail to see how the package management in a Linux distribution could be considered a walled garden. Commonly it's cited as the exact opposite of that (see the table here: [1]). What type of freedom are you missing in something like Debian, where you can edit and recompile the core components of your system at a whim, the system even supporting you through source packages, and where anything outside the package management is just a "git clone" and a compilation away? I really don't know what more anyone could ask for. Certainly we don't want to go back to the 90s way of downloading unsigned .exe files from some random guy's web site and then going hunting for matching DLLs?
The freedom to not have to jump through ridiculous hoops like compiling from source just to use an application the distro didn't deem worthy of inclusion in their repo.
> to use an application the distro didn't deem worthy of inclusion in their repo.
How did you come to this conclusion? Did you request the package be included in the distro you use? If so, and it wasn't provided, provide the bug report/feature request link ...
My point is I shouldn't have to beg some third party to include it. The developer made it, I want to run it, there's no need for anyone else to be involved in this process.
I keep hearing that, and it is of course possible, but the point is that hardly anyone does it. That's what makes it meaningless that it can be done.
And it isn't insecure. If I trust the developer and get the software from them, it's just as good as trusting a repo maintained by random internets who have been known to not only not keep software in the repo up to date, but actually introduce vulnerabilities that weren't there before!
> The only thing about Linux I never really like is the package system.
I think you just haven't used it enough to understand the advantages. If you don't need the latest-and-greatest released-last-week versions, the package system is much more efficient than individually downloading hundreds of packages.
While Mac OS X has homebrew, it is still deficient in my opinion compared to most distros (because casks don't get upgraded by default).
> Like if you want to install new software, you usually don't get .exe or .dmg.
Why would I want either? Both have lots of issues. For example, by default:
* No auto-update
* Duplication of libraries and other files I already have
* Spotty updates (e.g. who can be sure whether all libraries used have been patched by the latest version you have)?
> If you are lucky, the developer or some fans took the time to package it.
This is the case for > 99% of the software I use, even obscure stuff. For the other < 1%, I package it for the distro I use, and submit it so it is available by default in future releases.
These days, many popular packages also provide .appimage files (similar to .app files on Mac OS X) or publish Flatpaks (including Slack, Spotify, VS Code, Skype, etc.), and these can be used on any recent distro.
> Then you can do 'apt-get', 'yum' or 'pacman'. However, packages go stale and sometimes don't match the original author's intent.
Sometimes the original author doesn't know best ... in most cases packagers upstream their changes or discuss them with upstream.
> You can also build from source, but it takes time and you have to know a bit of CLI. It never felt like true freedom to me. But more like whatever the community feels makes sense for whatever distribution's weird dictatorship. Just a feeling, and I still love and support Linux.
It seems like you never exercised your ability to vote, and think that everyone else is dictating to you ...
Here is what makes Linux work well for me: only run LTS versions. Never build your own kernel. Never install anything that isn't in the package manager.
You clearly have never used Linux in any form. There are multiple package managers, granted.
However, one strength they all share is: "you usually don't get .exe or .dmg"! Absolutely! Apps are integrated and not simply add-ons as they are in Windows or Apple land. When I want to install say libreoffice or wireshark I simply ask the system to install them. I absolutely do not browse the internet and download something, extract it and run some "installer". When I update my system, all apps and the OS are updated in one go.
My system is curated for me, end to end, to a greater or lesser extent. When I update, all my system is updated - OS, apps and all.
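For what that looks like in practice on a Debian/Ubuntu-style system (package names are just examples):

    # install applications from the distro's curated archive
    sudo apt install libreoffice wireshark
    # later: update the OS and every installed application in one go
    sudo apt update && sudo apt full-upgrade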
If you need the latest version of Wireshark, it gets complicated. And when you have a setup with Wireshark that works, why update it? I am dubious that Wireshark can be an attack vector, so security updates won't be useful. Installing the latest versions is very straightforward when you are on OS X or Windows. You just go to the website and download it. Plus you get the original binaries, not a doctored version made to fit whatever launcher and log organization is currently trending at Ubuntu HQ. Don't get me wrong, I still love my Ubuntus. But this "we need to rewrite all software to fit our distributions" is not a strength. I think this line of thinking is kind of shared by Linus himself.
Going to dozens or even hundreds of different websites to download the latest versions of software and then manually installing them all isn’t fast at all.
c.f. apt-get upgrade or similar, which takes a few seconds.
>c.f. apt-get upgrade or similar, which takes a few seconds.
Except when the package isn't included, or it isn't the version you needed, because then you have to spend 40 minutes trying to install all the dependencies and building the package from source, instead of the 3 minutes it would have taken to install an .exe on Windows.
> because then you have to spend 40 minutes trying to install all the dependencies and building the package from source, instead of the 3 minutes it would have taken to install an .exe on Windows.
No, you take 2 minutes to install it using flatpak, or download a .appimage file.
And if neither of those are available, you spend that 40 minutes packaging the software, and submitting it to the distro you use for inclusion.
(If it was already packaged, but not new enough, that's a ~5 min job to do the update for your distro)
> And when you have a setup with Wireshark that works why update it? I am dubious that Wireshark can be an attack vector so security updates won’t be useful.
Ignoring feature enhancements and bug fixes for a moment, do you really think it improbable that there are security issues in a piece of software whose entire job is to sit on the network and record everything that it sees and then translate and interpret it?
Because the "evil app" is just a specially crafted JPG that your browser opened up that triggers some edge case in the old unpatched version of Wireshark that is inspecting traffic.
Wireshark (usually) runs in promiscuous mode; it'd be "an evil app is already able to send something visible from your network interface" - which to be fair, might help for internal uses. It does usually mean that anything on your local network can attack you.
You have a very limited imagination. Wireshark is regularly attacked -- it parses random data it gets off the network, therefore it can be attacked using deliberately malformed packets. See https://www.wireshark.org/security/ for more info.
Every single one of these vulnerabilities is a crash or an infinite loop. The worst thing you can do - if I am running a 5-year-old Wireshark - is make it crash when I visit one of your web services. And you probably can already do that in newer versions if you dig enough.
Please give me an example. The Linux kernel is about a week old on this laptop. The current version of Windows server is called 2019, the last one was 2016 (rofl)
But you have to live under the wing of your distro maintainers, instead of installing whatever you want from any third party. Isn't Linux supposed to be about freedom?
Yes, you can install software using tarballs, but it's not usable for 90% of users, and not because of the distribution model, but because of the lack of standardization in a good, easy to use application-installing API.
I get the base OS along with a very long list of apps with any distro. For example Arch, Gentoo, Debian, Ubuntu, Fedora, SUSE, etc. - all will curate a lot of apps for me. If I want to go off piste I download the code and crack on. I have several compilers to choose from, LLVM, GCC et al., out of the box.
Tarball installs are pretty much the state of Windows app installations. You have to find the bloody things, download them each time and hope you have found the right one and not a trojaned one. Each one needs its own update routine and will not be updated when the rest of the system is updated.
The Windows and Apple software distribution model is archaic compared to all Linux/BSD etc distros.
>Tarball installs are pretty much the state of Windows app installations.
Except that they're hard to install for a novice user.
>You have to find the bloody things, download them each time and hope you have found the right one and not a trojaned one. Each one needs its own update routine and will not be updated when the rest of the system is updated.
None of those are problems for me, the actual problems are: installing all the dependencies and then executing the make commands. Both of those could be solved with a well designed API.
>The Windows and Apple software distribution model is archaic compared to all Linux/BSD etc distros.
The Windows and Apple software distribution model works for all my use cases; the Linux model doesn't (e.g. if I want many versions of the same package, if I want a package that isn't included in the repos, etc).
Also, from a philosophical and aesthetic point of view, the Windows & Mac distribution model is better: you get your OS from the OS developer, and you get your specific-use applications from the developer of said application. You are not dependent on a single entity that supposedly knows what you need better than you.
> You are not dependent on a single entity that supposedly knows what you need better than you.
This right here is what Linux Desktop doesn't seem to understand. The whole culture is ingrained with the attitude that they do, in fact, know what you need better than you.
> But you have to live under the wing of your distro mantainers, instead of installing whatever you want from any third party. Isn't Linux supposed to be about freedom?
You don't. Yes, but if you don't take advantage of the freedom, that's not freedom's fault.
> Yes, you can install software using tarballs, but it's not usable for 90% of users, and not because of the distribution model, but because of the lack of standardization in a good, easy to use application-installing API.
See Flatpak. Go look at what software is available at https://flathub.org/apps . Both GNOME and KDE have application managers that can install from Flatpak repos (configured to use Flathub by default), and possibly the distro's native package manager as well (via PackageKit).
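The command-line equivalent of what those application managers do, in case it helps (the app ID below is a placeholder):

    # add the Flathub remote once (URL is published on flathub.org)
    flatpak remote-add --if-not-exists flathub https://flathub.org/repo/flathub.flatpakrepo
    # install and run an application by its ID
    flatpak install flathub org.example.App
    flatpak run org.example.App
    # update every Flatpak app at once
    flatpak update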
"you usually don't get .exe or .dmg"! Absolutely! Apps are integrated and not simply add-ons as they are in Windows or Apple land. When I want to install say libreoffice or wireshark I simply ask the system to install them. I absolutely do not browse the internet and download something, extract it and run some "installer". When I update my system, all apps and the OS are updated in one go."
I think a lot of this is learned, or conditioned behavior ... there are a great many UNIX packages that could very easily (and very nicely and conveniently) exist as static linked, single file executables.
This is how a lot of software distribution worked in the 90s - for say, some Solaris tool or whatever. Granted, the distribution mechanism (some .edu FTP site somewhere) was totally insecure, but the packaging mechanism was great.
Most of these aren't Linux-on-the-desktop issues that any day-to-day user will ever encounter. A lot of them are, yes, but stuff like the author's issues with X.org being a bad piece of software is completely orthogonal to anything a desktop user of Linux will care about. This list would be a lot better if it were cut to just the major issues.
Why is the defense for "there are problems with Linux" always some appeal to the workflow of a strawman 'average user'? A class of people which, I might add, still show no interest in using Linux.
This is the silly thing. There is no objective 'works' or 'doesn't work' that applies to 'the average user' because there are so many use cases and configurations.
And it doesn't matter. For a large bunch of people Windows has been crap for its whole existence, but it works with X, or is the only thing compatible with Y, or because no one demonstrated anything better. I.e. exactly how the iPhone got such traction (also with the marketing).
Linux is [i]just as good[/i] as Windows and vice-versa, where the use case is compatible...
Because as a general rule the power users already have minimal problems and can fix them pretty easily. So the next concern is people who aren't familiar with the system, i.e. average users.
I've been using a Google Pixelbook (their flagship Chromebook product) as my main development machine for the past couple months, and I love it.
The ChromeOS "Crostini" project allows you to run a full Debian instance in a container, but with the other benefits of a Chromebook and ChromeOS. Each new release of ChromeOS brings better enhancements to Crostini, e.g. right now backups are manual but a native "one click" backup is coming soon - https://www.aboutchromebooks.com/news/crostini-linux-backup-... . FWIW some of the tools I run are the IntelliJ suite, Atom, postgres, Docker, etc.
IMO Linux on the desktop is awesome, it just happens to be running in a container on ChromeOS.
I use Linux for gaming almost everyday with Steam/Proton. It works flawlessly 90% of the time. I find no issues with my linux desktop whatsoever. It is stable, I rarely restart it, updates so far didn't break a thing.
To be viable as a general desktop environment for non-technical users, that has to be true not just for some users like yourself, but for all but about a few in a thousand users across a very wide cross section of available hardware.
> across a very wide cross section of available hardware
This is a sticking point I just cannot get behind. When you're purchasing a computer running Windows, it is optimized for Windows. When you're purchasing a Mac, it is optimized to run macOS.
So it should logically follow that if you want hardware optimized to run Linux, you should purchase that specifically. Expecting Linux to work flawlessly on any random junk is a feat you're not expecting of any other OS.
Therefore by that logic, for Linux to be good enough on the desktop, it has to ascend to places no other OS does.
What? When I'm purchasing a computer, any computer, I know with (near) 100% certainty it will run Windows flawlessly. I definitely expect Windows to run on any random junk.
> What? When I'm purchasing a computer, any computer, I know with (near) 100% certainty it will run Windows flawlessly. I definitely expect Windows to run on any random junk.
So do you expect Windows to run flawlessly on a Chromebook? I'd guess not. When you're purchasing random hardware in a store, it most likely comes pre-installed with Windows and has been made for and tuned for Windows.
It's just that Windows has such market share that the vast majority of computers come pre-installed with Windows and have drivers primarily for Windows.
> Create a universal packaging format for bundling software which supports signatures, weak dependencies, isolation (aka sandboxing/virtualization), clean uninstallation and standard APIs to make it possible to integrate an application with your DE.
Well this battle is lost. There was always the RPM/DEB/PKGBUILD split. But rather than unifying the standards, we now have Flatpak vs Snap split.
It is seriously frustrating that even where distros can come to agreement on core infrastructure fights like systemd-vs-upstart and Wayland-vs-Mir... we still have a software distribution split that is more political than technical.
This ultimately hurts linux - because there's never going to be that clear monopoly in the packaging space. Someone or the other is going to say "I can only package for X. All others can go figure it out themselves".
I don't know... maybe APK (Android) has won as the predominant Linux packaging format? I'm kind of waiting for ChromeOS to become the true Linux distro.
It's not a political issue if you want to force distros to move their entire packaging infrastructure to a backward-incompatible format that might still not be compatible with other distros, because they ship specific versions of some packages which are, in addition, packaged in a very particular way.
Switching to RPM/DEB/PKGBUILD is not a simple problem, because the problem isn't really which packaging infrastructure is used.
Everyone for some reason expects that GNU/Linux should work on every hardware configuration. I don't understand that, honestly. Why is there no such requirement for MacOS? Why do Windows-certified hardware have to work flawlessly with GNU/Linux? Just buy a Desktop/Laptop certified for GNU/Linux and stop complaining.
Probably because nobody expects much from MacOS anyway?
One relevant part of the Apple world is that it works, within its narrow world of hardware and software. This is how they sell it.
There is much, much more Windows-certified hardware out there. Much of it isn't used (for Windows) anymore and is therefore cheap as hell, for example. This is the hardware you'd expect your software to work with. This is where it's worth investing time.
> One relevant part of the Apple world is that it works
You already figured out, why people enjoy using Macs. It works and I don't need to cycle through three different system compositions before graphics, audio and WiFi work.
Quite the opposite. I'd call that solid evidence that they don't owe the market infinite backward compatibility. Nevertheless, those of lesser means will always seek out low-cost solutions like installing Ubuntu on a 10 year old laptop and they will be predictably frustrated when things don't work. When an incidental fraction of your audience is "all the world's poor and downtrodden", you're going to get a lot of people looking for solutions. To some extent that works in your favor: from those poor and downtrodden will very likely emerge some brilliant hackers and technical leaders.
All this is true, but not really fair. Count up the issues raised by the OP, and Nvidia really has the most power to stabilize the Linux desktop. I read that gamers who want peak performance on Linux can get the proprietary Nvidia drivers running well. So it's possible, but it will never be without the friction of open source running alongside signed proprietary binaries.
What I would love to know is why linux isn't equal or superior to Apple and Msft in power management.
Linux seems equal in power management to Apple. On my ThinkPad X1 Carbon with the 57 W-h battery I get a good solid 8 hours of work and often the system consumes as little as 4W. My MacBook Pro with a 54.5 W-h battery runs for at most 4 hours. Now, it's true that I had to edit dozens of little configs to get the ThinkPad with Linux up to where a ThinkPad with Windows is right out of the box, such as enabling PCIe ASPM and making it still be enabled after resume from suspend, but to my knowledge there are not such power efficiency hacks available to the user on macOS, and it's bad by default.
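To give a flavour of the kind of tweaks involved (whether they help, or are even allowed by the firmware, varies by machine):

    # check the kernel's current ASPM policy
    cat /sys/module/pcie_aspm/parameters/policy
    # prefer power savings on PCIe links (not persistent across reboot/resume by itself)
    echo powersave | sudo tee /sys/module/pcie_aspm/parameters/policy
    # interactively inspect per-device power tunables and suggestions
    sudo powertop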
The biggest power efficiency hack for macOS is to avoid running JS anywhere other than in Safari, and there as little as possible. Chrome and Firefox both eat battery like crazy. Firefox worse than Chrome, but either will take a couple hours off your battery life versus Safari. And running lots of heavy JS tabs or "apps" will do even worse.
Not that that necessarily has anything to do with the differences you're seeing in particular, just worth noting. Wrong browser, 20-30% worse battery life. Wrong site open in some tab in the background, 30-60% worse battery life. The overwhelming majority of what takes my Mac battery life from 8+ hours to 4 or less is Javascript or some heavy Java IDE or something. User software that gives a damn about how much power it uses makes a huge difference, well outside of just "don't run video games on battery power" and other obvious stuff.
Do you have a link to a good guide for setting up power stuff on the X1C? I've found a few but they conflict, so I'm keen to hear from someone who actually has it all working well.
Anyone thinking about gaming on Linux should be buying AMD and avoiding Nvidia these days; their drivers are fully open source, and because of that they work well out of the box and are constantly improving.
> So it's possible, but it will never be without the friction of open source running along side signed proprietary binaries.
AMD are proving otherwise.
> What I would love to know is why linux isn't equal or superior to Apple and Msft in power management.
I think it has caught up now; my old Dell laptop gets a lot more battery life running Ubuntu than it ever did on Windows. I get 3-4 hours or more out of it, and used to only get a couple under Windows.
This article is wrong on numerous claims, either because some are just false, or because some are skewed by a perception of how things "should" work, which is utter bollocks.
Like LTS not being suitable is complete nonsense. You can easily add a repo to have up to date GPU drivers and never have to worry about it.
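On an Ubuntu LTS, for instance, getting current GPU drivers is roughly this (the graphics-drivers PPA is the commonly used one; adjust for your distro):

    sudo add-apt-repository ppa:graphics-drivers/ppa
    sudo apt update
    # let the driver tool pick the recommended proprietary driver
    sudo ubuntu-drivers autoinstall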
Steam having only indie games completely misses the Proton era.
I could go on and on. It seems like a hit piece from someone who is out to prevent people from considering switching.
> All native Linux filesystems are case sensitive about filenames which utterly confuses most users. This wonderful peculiarity doesn't have any sensible rationale. Less than 0.01% of users in the Linux world depend on this feature.
The horror...
Though I wonder why users would be confused since this only really applies to the command line and there were (multiple) other points complaining about users having to use the command line.
user: cd Foo
bash: cd: Foo: No such file or directory
user: wget https://www.microsoft.com/en-us/software-download/windows10/win10.iso && echo "screw you linux!1!"
I've been running KDE Neon on NUC for a while, and Kubuntu before that since KDE 3.5 days. But still it's not ready to become my primary desktop.
The main issue for me is RDP-like (not VNC-like) remote desktop experience. Without that I'm not even gonna try.
I mean the kind where I check my desktop before I leave for work, resume the session remotely from work (when compiling or whatever), and pick up when I get home. With performance which makes one forget it's a remote session, and bidir clipboard sharing.
So until then, Windows on the desktop it is. I can always run Linux on the NUC or in a VM.
I've been quite happy with NoMachine[0]. It has probably even more options than RDP (file transfer and drive sharing options, various graphics and audio options, etc.) You can even watch videos through it if your connection is decent. You can pick up an existing X session or start a new one.
I just tried, again. Even on LAN (same switch in fact) the performance is worse than what I got when I was RDP'ing to my desktop in Norway from a hotel in Hawaii. And it didn't forward audio at all.
I'll check it out from work tomorrow, but so far it's usable but not exactly great when compared to RDP.
I have one Windows machine and it doesn't have a monitor attached. All of my access to it is via RDP. It's actually incredible how well it works. Resizing the RDP window works as you'd expect, it automatically adapts when I connect from a HiDPI device vs a non-HiDPI device, it's fairly bandwidth-friendly, sharing local folders with the remote machine is easy and works well, etc.
VNC, even Apple's Screen Sharing implementation of it, is an incredibly poor substitute.
Actually yeah, lots of software that was compiled for Windows 95 really does still run fine on Windows 10. Games are a big exception. What doesn't work is 16-bit Windows applications, at least not without DOSBox.
Yes lots of programs still work if you just do this little thing and maybe this little hack which is the same as not working unless you use hacks. Which brings us back to what I wrote.
The main issue you'll face on trying to run old 32bit Windows 95 applications on modern 64bit Windows 10 is that the installers often are 16bit and 64bit Windows do not support 16bit applications (which was actually technically possible, at least for protected mode 16bit applications which is the majority of them, as shown by Wine being able to run almost all 16bit programs under 64bit Linux).
Once you go past that hurdle though (and it is often very easy to do that since almost all of them tend to use InstallShield and there are workarounds for it... or in some cases, you can just copy the files manually :-P) the applications themselves tend to run just fine. For example here is Visual Studio 6 running on my Win10 machine (with a demo from DirectX 7 SDK) [0] - all i had to do was to edit an .ini file to disable the Java VM setup. Similarly here is Borland C++ 5 compiling my C engine [1] - BC5 works just fine (and i actually use it very often since it compiles the code instantly and offers an ok debugger). Outside of compilers, here is Wally [2], a texture editor/painter/manager for Quake, Quake 2 and similar games - the program is from around 1998 but works perfectly fine in Windows 10.
I use and run a lot of old applications for Windows and generally i almost never have issues getting them to work in Windows 10.
One notable exception is games (of which i also have A LOT of older titles) that have a combination of DirectX abuse and memory bugs that in 9x weren't noticed due to looser checking. My go-to solution is to drop dgVoodoo2's dlls in the game folder and run RTSS to limit the framerate at 60fps - this solves 99% of the issues.
Interestingly (but expectedly, considering the issues mentioned in the article) the situation is reversed with Linux. I tried recently to play several older game demos (i have around 30 of them) made for Linux around late 90s, early 2000s. Most of them worked fine (after installing a missing .so or two and an OSS-to-ALSA bridge), with the main exception being Shogo which relied on Gtk1 that i just couldn't figure out how to make work in Debian (which doesn't ship Gtk1 at all).
This last part is what reverses the situation for Linux: older applications relying on older versions of Gtk, Qt, etc just do not work at all and often you can't just grab the older .so files and drop them in, as they have big webs of dependencies that often go outside just libraries (like scripts, data files, etc). Which is also why i really dislike both Qt and Gtk - the former, being a C++ API, cannot guarantee a stable ABI even if they wanted to, while the latter being written in C could provide a stable ABI but the developers just don't care at all about stability.
Ironic that the most stable (API-wise) and backwards compatible desktop tech in Linux is Wine.
PS4 is basically a modified FreeBSD. So "fixing" the Linux desktop is probably doable, but extremely difficult to do profitably. Valve seemed interested when Windows introduced the app store and seemed to be going the App Store / Play Store way, but now that Microsoft is losing interest in Windows, they don't have as much incentive to pursue Linux.
PS4 is not really comparable. (Nor is macOS, for that matter.) It's one thing to get some *nix flavor to work on standardized hardware. It's another to make it work on the wide variety of computing devices where you could install Windows.
True, but hardware compatibility isn't the only thing wrong with Linux desktop. There's a lot of broken stuff on the software side too (ex: X windows as the article points out). In fact, hardware compatibility hasn't been that much of an issue on desktops in recent times. We have a couple hundred Linux desktops (not laptops) at my university, and what I find is that if you're using last-gen Intel CPU with iGPU on a desktop with Ethernet, hardware compat isn't much of an issue at all.
Linux "sucks". True. For last two weeks I was trying to install a distro, that will work wirh my Broadcom internet adapter, but no success. Also BT did not work well. Of course I could get Wifi to work after a heavy and bloody battle, and only at 2.4 Ghz, but then BT stopped working. The point is, that all those are working fine under Windows 10. No hassle. My question is, why there is no Linux distro, that can work just fine out of the box?
I am not a system programmer specialist like 95% of PC users. Millions of people would like to install Linux, but just cant do it, due to lack of knowledge. Average person, just want to download, install and run a system, without thinking of finding lost drvers, kernels, waste time to search internet, to find a solution. I went trough dozens of distros, and could not find one, that would work with my ACEPC T11 mini PC. Windows 10 just do.
Broadcom chipsets are notorious under Linux, because its had to be reverse engineered without support from Broadcom. The reason it works out of the box on Windows 10 is because the drivers are fully supported, with Broadcom engineers providing a full implementation of everything - if they did the same for Linux then it would work perfectly there as well.
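Assuming it's one of the chipsets covered by Broadcom's proprietary STA driver and you have some other connection to download with, on Ubuntu-family distros it is usually one package:

    sudo apt update
    sudo apt install bcmwl-kernel-source
    # on Debian the same driver ships as broadcom-sta-dkms (in non-free)

That doesn't excuse the out-of-the-box experience, but it's often the whole fix.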
The problem is that a lot of these issues would require co-operation and agreement from multiple different teams to have a solution that would cover the whole system. But in the Linux world nobody really has the authority to make things work and play nicely, or to manage and design the whole system architecture. Distros do what they can, of course, but they don't want to deviate too much from upstream with custom patching. So their work doesn't solve all of these problems.
The year of the Linux on desktop is always 10 years from now ;-)
PS:
I'm a Linux user and I quite like it on a desktop, but I wouldn't bother on a laptop anymore; getting things like suspend to RAM, hibernate, Bluetooth and Wi-Fi to work reliably is just too much effort.
"Linux has a 255 bytes limitation for file names (this translates to just 63 four-byte characters in UTF-8) - not a great deal but copying or using files or directories with long names from your Windows PC can become a serious challenge."
File and folder names can't be longer than 255 UTF-8 code units (i.e. bytes) in Linux, which means they can contain 255 US-ASCII characters, but only 127 Cyrillic characters, 85 Chinese characters, or 63 emoji. Windows is different, because file names there can contain up to 255 UTF-16 code units. That is 255 characters in almost every language (but only 127 emoji). So if you create a file name with 100 Chinese characters in Windows, you can't transfer it to Linux (or upload it to a Linux web server, for example).
Windows, on the other hand, has problems when the full path to the file (e.g. C:\folder\file) is more than 259 UTF-16 code units, but it's getting better at this, and newer Windows apps normally handle longer paths just fine.
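To make the comparison concrete, here's a minimal sketch (Python; the 100-character Chinese name is just an illustrative example) of why the same name passes the Windows check but fails the Linux one:

    name = "文" * 100  # 100 Chinese characters

    utf8_bytes = len(name.encode("utf-8"))            # 300 bytes: over ext4's 255-byte limit
    utf16_units = len(name.encode("utf-16-le")) // 2  # 100 code units: fine under NTFS's 255-unit limit

    print("UTF-8 bytes:", utf8_bytes)
    print("UTF-16 code units:", utf16_units)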
Maybe I'm not understanding that limit correctly, but it looks like the whole path cannot be longer than 260 characters. I've hit that limit when transferring files from Linux to Windows.
As you mention, the Linux limit is in bytes, so the issue could appear with some character sets, but it still looks like the path limits are a lot more drastic under Windows.
I wondered about this as well, especially as Windows 10 still has issues with deeply nested (i.e. long-path) structures. Especially in conjunction with OneDrive (forced upon me at work) it's an annoyance.
_For instance Microsoft and Apple regularly update ntoskrnl.exe and mach_kernel respectively for security fixes, but it's unheard of that these updates ever compromised the boot process._
I had updates rendering my computer unbootable by messing up the MBR at least twice.
It did use to be true. Windows 10, with its adoption of the no-QA and "our time is worth more than the users'" parts of open source culture, put an end to that.
This issue of Windows updates and other lock-in trends is 80% of the reason I switched to a Linux desktop just last month. The other 20% is that the UI on Linux has finally progressed enough that I can consider it the least-worst option. That is a first: I had tried many times before and always wound up reverting to Windows.
So far the only thing that really bugs me is that I have to apply fixes in the terminal. Like installing a printer only to find the scanner function was installed but not configured, and the only way to get it working is to read a few forums until you can figure out which of the myriad installed utilities will actually return the info you need to configure the driver, etc. It's often a few forums, because the first half dozen results are wrong!
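For the scanner case specifically, the digging usually starts with asking SANE what it can actually see. A minimal sketch (Python, assuming the sane-utils package so that scanimage is on the PATH):

    import subprocess

    # 'scanimage -L' lists the scanner devices SANE has detected, if any.
    result = subprocess.run(["scanimage", "-L"], capture_output=True, text=True)
    print(result.stdout or "SANE detected no scanners")

If nothing shows up here, no amount of driver configuration further up the stack will help, which is the kind of thing those forum threads rarely say up front.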
Strangely enough, I haven't found anything in Linux to rival Notepad++, WinSCP, or AutoHotkey, just the type of thing I was expecting to find new and improved. Those programs simply have a better UI, and great functionality, for getting things done without going full command-line commando or buying into the resource bloat of a full IDE. Right now I am using both Sublime Text and Visual Studio Code, because they each have things the other lacks; I could just use Notepad++ for everything before. Doubling my RAM made that less painful than it would have been otherwise. Obviously a very personal itch, but one I suspect a number of people who are on the cusp of accepting a Linux desktop might share.
But I trust Linux's stability, since I have had zero issues in decades of administering my servers, and I don't worry about updates the way I do on a less transparent OS. Specifically, once you are used to apt, the list of packages that will be installed tells you a lot about what might break on your system. And there will always be a way to revert to older libs if necessary, which might not be true on a locked-down OS.
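For the apt habit mentioned above, the simulate flag is the easy way to read that list before committing to anything. A minimal sketch (Python, assuming a Debian/Ubuntu system with apt-get available):

    import subprocess

    # '-s' asks apt-get to simulate: it prints the planned actions without installing anything.
    out = subprocess.run(["apt-get", "-s", "upgrade"], capture_output=True, text=True)
    for line in out.stdout.splitlines():
        # Planned actions are printed as "Inst <pkg> ...", "Conf <pkg> ..." or "Remv <pkg> ..."
        if line.startswith(("Inst", "Remv")):
            print(line)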
It's not in the way I use it. I found the plugins for notepadqq seriously lacking. I haven't used kwrite in years, I guess I should give it another try.
With Linux, I find that if you have a problem, finding solutions on the internet is quite easy. Some distributions, like Arch, have very detailed documentation, active forums and IRC channels, so finding help is not difficult unless you have some exotic setup.
OTOH, getting help with Windows issues is often more difficult. If you visit Windows forums, the suggestions and solutions offered are often pretty generic; since you can't peek under the hood, the amount of diagnostics possible is limited. Even if a problem is solved in Windows by reinstalling or restoring from a backup, most often you never learn what exactly caused it.
I'm really happy with Linux on the desktop. Yes, occasionally I have problems with graphics drivers (NVIDIA), but normally not, both on an older desktop PC and on a laptop, and I really prefer it.
I believe those problems are the price of free development. When everybody can contribute anything, which is a good thing in itself, there is no streamlining or refining of the end product. There are so many interfaces and standards, often used in parallel, that it's actually an achievement when a Linux system is still able to boot. Open source is a good thing, but not with regard to user experience and stability. There is no guarantee for anything. When it works, it works. When it doesn't work, people will hopefully fix it, or at least contribute resources. When it doesn't work and nobody fixes it, what then? I wish I had the expertise to fix everything myself if needed, but let's stay realistic. Nor do I have resources to spare. I have to rely on others' efforts.
I once cross-compiled a Linux system from scratch for PowerPC and it worked great. It worked right up until I needed to recompile glibc because of some bugs threatening system security. Then everything went to hell. My prime interest in Linux and open source ceased to exist at that very moment. The dependency system in Linux is a nightmare, because basically there is none that guarantees consistency. How are cross- and circular dependencies even possible? And that's the price of freedom: chaos. It's great that anything works at all. But there is no future to build upon.
Wayland was supposed to be the rescue from the arcane, outdated X11, and it is severely disappointing. The reason is that it does not offer enough features. It should have font rendering, it should have window decorations by default, it should have native UI widgets, and much more. None of this is offered, and the responsibility is pushed off onto others like GNOME, KDE and, believe it or not, X11.
IMHO the monolithic kitchen-sink approach of systemd should be applied to Wayland, not the other way around.
I would argue that wayland is still in beta. They are still designing extensions to reimplement X11 desktop features. But with wlroots things have picked up pace.
As imperfect as Linux is, it's still better than Windows' dumpster fire and Apple's jail.
And as of right now I have an Acer Swift with an AMD Ryzen/Vega APU which is running Ubuntu completely flawlessly. As in, everything works, all the time: WiFi, suspend, function keys, plugging in external monitors, etc.
It could be better, but I can say that about every product I've ever owned (and yes I've had an iPhone, it's the reason I never bought another Apple product ever again).
I think one of the disturbing parts of the linux ecosystem is how it is starting to resemble the software turnover of a gigantic enterprise.
Gigantic enterprises have functioning systems, but the people that wrote them eventually leave, and the complexity of the systems is such that when they need to be updated, they are just rewritten from the ground up, with the usual mixed bag of new bugs, new features, and missing features from the previous regime.
This seems to be exactly what Wayland and systemd are, especially reading the list.
The number of distros is really really counterproductive.
Linux also DESPERATELY needs a massive hardware support information site with graphical matrices, to help guide purchases. This alone might shame hardware providers into sponsoring device drivers.
But realistically the window of Linux desktop adoption passed with Nadella taking over Microsoft and starting to right the ship on Windows, even if it is still utter garbage. There was a solid five years where Windows was being utterly insane and that was the time to strike.
Linux should probably just concentrate its desktop resources on a near-perfect clone of OS X, so it can at least join forces with the Macintosh crowd on usability and interface familiarity.
Well, a big problem here might be engineering thinking, as opposed to design thinking.
If you view the problem as, "I have to enumerate all the [Linux on Desktop] problems, and if I enumerate enough of them and fix them in the right order, then I've solved [Linux on Desktop]," then "Main Linux problems on the desktop" is, by virtue of having done exactly that, and of being neither the first nor the last iteration of it, part of the problem!
Not that I begrudge Linus Torvalds for saying things along the lines of "Linux is evolved, not designed." It just shows that one of the big holes is its use on the desktop, that the major designed Linux product (Android) isn't really libre, and that Linus is a brilliant guy who doesn't have answers to everything, nor claims to.
Meanwhile take a look at elementaryOS, which I think has a lot of opinions you'd never find in an evolved or engineering-rational platform (like their own UI programming language). I think if they had the resources of a giant corporation they could make a meaningful impact on the desktop market.
I'd say it's very similar to a debate I heard from head of an architecture school attached to a university better known for its engineering. "When we looked at expanding the campus, decisions were made in terms of parking spaces per square foot, and whichever had the most parking spaces per square foot is where we would build the building." It's not that he's going around arguing his ambitious and less efficient designs are better, just because he's coming in with qualitative or emotional impacts incalculable by an enumerative cost-benefit analysis.
He's just saying enumerating all the considerations, and then solving, is a really reductive way of thinking about things.
The Linux Desktop has problems, yes, but that's Ok.
We just have to accept that Linux will never be a mainstream desktop environment.
A desktop environment needs a well-polished experience, and that experience can only be created by centralized organizations with extensive resources: developers, QA, designers, ergonomists, user-behavior studies, and a common, consistent vision of what the experience should look like.
In the Linux DE world, a project can consider itself lucky if it has enough developers.
Yet these developers are doing a wonderful job, especially given the lack of resources, with applications that are more than usable and cover 90% of use cases.
Yes, Linux DEs are sometimes a little clunky, and the overall experience may include issues not solvable by the average user. But that's OK; a well-polished experience would require an order of magnitude more resources, and an organization far more vertical than the existing array of communities.
Diverting so many resources toward the DE goal would be a mistake. Even if by some miracle we managed to solve all the issues mentioned, we would just be "X but different/better", and that business model doesn't generally do well; it will not displace Windows or macOS.
Linux and the OSS ecosystem should remain what they are today: a powerful and rich toolbox for building anything from giant web services and phones to vacuum cleaners, cars and milking machines, and that toolbox should keep improving.
It doesn't mean we should not look around and see what is happening in the DE world, but it should not be the absolute priority. Building a Linux desktop should not jeopardize the use cases that made Linux a success.
Linux is a clunky Desktop Environment, yes, but this clunky environment has enabled me to build software reaching thousands of people.
Thank you to all the devs who have built this (mostly) working experience.
I use Linux, but only in a VM. Every year or two I try to install it on actual hardware. In the past ten years it has gotten so much better.
That said, it is still not quite at the level where I would use it as my main OS. If my laptop stops working when I'm away from my desk, I really don't want to have to fix something broken; I need it to just work. And for all of Windows' many shortcomings, it has been hammered on and written for a complete idiot, which makes keeping it "just working" a lot easier.
I will keep monitoring the situation, of course, and the second I feel comfortable enough I will make the switch. Until then I'll just keep using a VM.
There are a couple main problems with Linux as a desktop.
1) The obvious one: hardware support. While one or two vendors will produce a small number of Linux-compatible drivers, they don't do QC testing of all their products on Linux, whereas they almost certainly do for Windows or Mac.
Distros, and OSS devs in general, have to support a wide range of software and hardware in every possible configuration. This includes not only individual components on a system, but how they are tied together, and the proprietary extensions (keyboard buttons) that allow the user to operate them. But there's no way any company could possibly test all software with all hardware. Even if they did such insane amounts of testing, they'd need to pay someone to fix all the bugs that would come out of it. No OSS company I'm aware of has the bankroll of an Apple or Microsoft, to say nothing of all the hardware vendors' investments.
Trying to support OSS on proprietary platforms is like trying to become the development and support for every such product in existence, and those products are often black boxes. The only reliable option is to pick a distro, then find hardware which has been explicitly certified for that distro. This is usually a short list, and becomes shorter as you try to find something that fits your needs and budget.
2) An even more intractable problem: limitations of the software.
Do I need some software which is platform-dependent? Then I should use that platform. Trying to shoehorn it into Linux is just a recipe for frustration and support calls to your cousin's son Eddie who you heard is really good with this Linux thing.
Then there's the difficulty of operating a system which is only designed to work in a particular way. Want to use some software which doesn't have an official package? Good luck figuring out how to install it. Have some problem on the system? Good luck figuring out what magical combination of "console commands" might make it work again. And don't even bother telling your ISP or work that you use Linux when you call with a support problem, because they'll just tell you to get bent.
Really, it all comes down to money. Nobody is spending the money on Linux to become an officially supported Desktop, because it would be unaffordable. Linux will always be a hobbyist OS as long as nobody supports it.
Anyhow, the article touches on a lot of the problem areas, and it would be great if companies profiting from Linux would start investing in the desktop.
Regarding security, Linux has some advantages, like being open source and not being an attractive target for malware.
I remember there was a Black Hat talk a few years back detailing the security features of Windows. I'd love to see a comparison with Ubuntu or Fedora.
>linux problems on the desktop
>Linux is administered by ssh
Right... on desktops, Windows is far worse with updates as far as breaking things goes. We have to deactivate the internet on Windows 10 machines so the updates don't break everything.
Microsoft does not care about breaking stuff with its updates at all. Ubuntu is the polar opposite.
The article is not entirely accurate and can be hit or miss. Granted, I'm cherry-picking, but I'm cherry-picking the parts about which I have direct knowledge and which are definitely misleading.
> ! X.org architecture is inherently insecure - even if you run a desktop GUI application under a different user in your desktop session, e.g. using sudo and xhost, then that "foreign" application can grab any input events and also make screenshots of the entire screen.
A linux user would say, "So that's not a bug, that's the power of the root user. Don't do that or lock down your root access."
Then this one:
> ! The kernel cannot recover from video, sound and network drivers' crashes (I'm very sorry for drawing a comparison with Windows Vista/7/8 where this feature is implemented and works beautifully in a lot of cases).
To be fair, if you google "Nvidia BSOD" you'll get something like this from four months ago:
And if you think Macs are awesome (the software might be, I don't know), the hardware has some serious issues which have turned me off from buying one.
>> ! X.org architecture is inherently insecure - even if you run a desktop GUI application under a different user in your desktop session, e.g. using sudo and xhost, then that "foreign" application can grab any input events and also make screenshots of the entire screen.
>A linux user would say, "So that's not a bug, that's the power of the root user. Don't do that or lock down your root access."
It's actually recognized as a huge problem and one of the driving forces behind Wayland.
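To see why it's treated as a real problem rather than a root-only footgun, note that no root access is involved at all: any unprivileged client on the same display can read the whole screen. A minimal sketch (Python, assuming the third-party python-xlib package and a running X11 session):

    from Xlib import X, display

    d = display.Display()                      # connects via $DISPLAY like any ordinary app
    screen = d.screen()
    w, h = screen.width_in_pixels, screen.height_in_pixels

    # Request the raw pixels of the root window, i.e. everything currently on screen.
    reply = screen.root.get_image(0, 0, w, h, X.ZPixmap, 0xFFFFFFFF)
    print("captured", len(reply.data), "bytes of screen contents, with no prompt and no privilege")

Wayland closes this by design: a client only ever sees its own surfaces unless the compositor (or a portal it mediates) explicitly grants more.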
One surprise for me is that there is no mention of Bluetooth. I have a reliable way of connecting a Bluetooth headset: trying to connect between 2 and 10 times until it works. There are constant transient failures, which is not what I expect from software in 2019.
I think it would be useful to separate this into problems that are Linux's(!) fault and problems that are someone else's fault (e.g. NVIDIA, the hardware market, intellectual property laws, the user himself).
I really, _really_ hate this statement. Android meets the only definition of 'linux' that matters IMO: it runs on a linux kernel. That's linux, full stop.
That said, I do find myself agreeing with most of the points listed there. I'm just not sure I'd call them 'Linux on the desktop' issues per se. They're Wayland/Xorg issues, deb/rpm/Flatpak/Snap issues, PulseAudio/ALSA issues, GTK/Qt issues, etc. None of them is tied to the Linux kernel; in fact, all of those technologies can and do run on BSD kernels (er, maybe not ALSA).
Android is Linux. ChromeOS is Linux. The issues above are really issues with the free/libre desktop distributions, mostly not Linux per se.
I'd just like to interject for a moment. What you're referring to as Linux is in fact GNU/Linux, or as I've recently taken to calling it, GNU plus Linux. Linux is not an operating system unto itself, but rather another free component of a fully functioning GNU system made useful by the GNU corelibs, shell utilities and vital system components comprising a full OS as defined by POSIX.
Many computer users run a modified version of the GNU system every day, without realizing it. Through a peculiar turn of events, the version of GNU which is widely used today is often called Linux, and many of its users are not aware that it is basically the GNU system, developed by the GNU Project.
There really is a Linux, and these people are using it, but it is just a part of the system they use. Linux is the kernel: the program in the system that allocates the machine's resources to the other programs that you run. The kernel is an essential part of an operating system, but useless by itself; it can only function in the context of a complete operating system. Linux is normally used in combination with the GNU operating system: the whole system is basically GNU with Linux added, or GNU/Linux. All the so-called Linux distributions are really distributions of GNU/Linux!
Yes but when you say “state of the Linux desktop,” all of this gets bundled in. A fair number of bits also touch the kernel directly — sound, video, etc.
I switched to Windows in 2018 because my Killer WiFi card was not supported on Linux. For the first time in 10 years I was able to suspend my laptop without having to worry it would wake up in my bag and overheat.
I use GNOME (Fedora 29). It works fine, except for the things NVIDIA and Cisco keep locked down (drivers and codecs). Other than that, GNOME is just kind of a wacky desktop (Activities and no desktop launcher are just stupid). But it does work, and I would be fine recommending it to my grandma with some extensions installed to make it more like the Windows UX.
I should also mention that while Linux has improved, macOS has seriously regressed. Bugs all the time, and if you want to plug your Mac into anything not made by Apple, good luck.
There are plenty of valid complaints, including many touched on in the article, but there is also some unvarnished nonsense.
"Pulseaudio is unsuitable for multiuser mode - yes, many people share their PCs (an untested solution can be found here)."
This would lead one to believe that sound doesn't work when multiple users are logged in graphically. In actuality, PulseAudio runs as one process per user, and the observed behavior is the same as switching users on a Windows machine: when you switch to a user's graphical session, that user's sound comes out of your speakers. Further, this allows it to be configured per user and to run without superuser privileges. The only thing you can't do is, say, play music as one user, switch to another user's desktop, and keep hearing those tunes alongside the second user's applications.
In most cases you actually don't want random apps you can't, by design, affect or shut up playing over your desktop. So this is the right decision in every possible way.
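The per-user model is easy to verify: each logged-in user gets their own PulseAudio daemon, reachable through a socket in that user's private runtime directory. A minimal sketch (Python, assuming a Linux desktop session with PulseAudio and the usual XDG_RUNTIME_DIR):

    import os

    runtime_dir = os.environ.get("XDG_RUNTIME_DIR", f"/run/user/{os.getuid()}")
    socket_path = os.path.join(runtime_dir, "pulse", "native")

    print(f"uid {os.getuid()} talks to its own daemon at {socket_path}")
    print("socket exists:", os.path.exists(socket_path))

Because that runtime directory is only accessible to its owner, one user's applications can't inject audio into, or listen in on, another user's session.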
"No reliable sound system, no reliable unified software audio mixing (implemented in all modern OSes except Linux), many old or/and proprietary applications still open audio output exclusively causing major user problems and headaches."
The last application I recall having an issue with grabbing the sound device directly was a 15-year-old version of Skype. PulseAudio does all of the above, and since it is and has been the standard, desktop apps are expected to integrate with it; by and large that has been the case for about a decade now. The fact that somewhere out there old, broken, non-compliant apps exist isn't a compelling argument. All popular platforms have a mixture of crappy and useful apps. People generally deal with this by using their favorite search engine to find good applications for their task.
"What if the user decides to switch from Windows to Linux when he/she already has some hardware? When people purchase a Windows PC do they research anything? No, they rightly assume everything will work out of the box right from the get-go."
They "rightly" expect hardware that the manufacturer doesn't want to support on Linux will be reverse engineered by volunteers because they already invested their money in a manufacturer that only supports windows. When the kind of person that could help you out makes 200k at Google it turns out that 50k of their time is not available to protect your $99 investment in your printer.
If you pay $15 a month for Hulu + HBO, $10 a month for Spotify, $14 a month for Netflix, and $150 a month for cable, then over the next 5 years (60 months) you will pay:
$900 for Hulu + HBO
$600 for Spotify
$840 for Netflix
$9,000 for cable
If you paid for all four, that's $11,340: you bought a halfway-OK used car. Perhaps a fraction of that might buy a more polished experience you don't need to piss and moan about.
Ultimately everyone will eventually upgrade their machine, and almost all hardware that a consumer would encounter or consider can also run Windows. If you are considering switching to Linux on your current machine, consider instead making Linux a part of your NEXT purchase. If you don't like it you can easily turn around and put Windows on it, or hey, put both.
Whoa, this article has brought up some painful memories, and I agree with so much of it. And yet I can't imagine using Windows; it pisses me off even more.
> Most distros don't allow you to easily set up a server with e.g. such a configuration: Samba, SMTP/POP3, Apache HTTP Auth and FTP where all users are virtual. LDAP is a PITA. Authentication against MySQL/any other DB is also a PITA.
Just thinking about it gives me PTSD. I do not want a thousand users or folders on my system; they're a pain to migrate, and it's a real PITA to make everything virtual. Could someone please give me a nice, short guide on how to set up Dovecot so that it stores all e-mails in PostgreSQL?
> KDE is spiralling out of control (besides, its code quality is beyond horrible - several crucial parts of the KDE SC, like KMail/akonadi, are barely functional): people refuse to maintain literally hundreds of KDE packages.
KMail has had so many bugs for me. I should report them, but man, it's such a pain in the ass. Ridiculous bugs, too (e.g. a connection loss spawning 1000 error boxes, so many that my GPU can't handle the layered transparency).
> ! Linux security/permissions management is a bloody mess: PAM, SeLinux, Udev, HAL (replaced with udisk/upower/libudev), PolicyKit, ConsoleKit and usual Unix permissions (/etc/passwd, /etc/group) all have their separate incompatible permissions management systems spread all over the file system. Quite often people cannot use their digital devices unless they switch to a super user.
In theory they're all separate things, but they interleave so much, there has to be a better system.
> ! No equivalent of some hardcore Windows software like ArchiCAD/3ds Max/Adobe Premier/Adobe Photoshop/Corel Draw/DVD authoring applications/etc. Home and enterprise users just won't bother installing Linux until they can get their work done.
I really miss Solidworks/Fusion 360, there's no effort at all to port those :(
> ! Open source drivers have certain, sometimes very serious problems (Intel-!, NVIDIA and AMD):
Flickering windows, low FPS, laggy videos, everyday struggle :( Changing compositor OpenGL mode fixes it though.
> ! An insane number of regressions in the Linux kernel, when with every new kernel release some hardware can stop working inexplicably. I have personally reported two serious audio playback regressions, which have been consequently resolved, however most users don't know how to file bugs, how to bisect regressions, how to identify faulty components.
I was affected by this too, qemu was broken for me for almost half a year.
There's one thing that's very subjective in the article, even if the author claims otherwise. I like to call it Winduslexia; it doesn't occur in anyone other than former Windows users:
> All native Linux filesystems are case sensitive about filenames which utterly confuses most users. This wonderful peculiarity doesn't have any sensible rationale. Less than 0.01% of users in the Linux world depend on this feature.
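For anyone who hasn't hit it, the behavior in question is trivial to demonstrate. A minimal sketch (Python, run on a native Linux filesystem such as ext4):

    import os, tempfile

    with tempfile.TemporaryDirectory() as tmp:
        for name in ("Report.txt", "report.txt"):
            with open(os.path.join(tmp, name), "w") as f:
                f.write(name)
        print(sorted(os.listdir(tmp)))  # ['Report.txt', 'report.txt']: two distinct files on ext4

On a default Windows (NTFS) or macOS (case-insensitive APFS) volume the second open would hit the same file as the first, which is exactly the mismatch that bites people copying files across.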