Wow, inverse Wine, aka a Linux translation layer. Didn’t see this one coming. Not sure where this will take things, but this certainly isn’t Steve Ballmer’s MS.
I’ve been following the news from the MS Build Conference overnight and I’m quite intrigued by this.
From what I can gather (I’m not using the insider builds so I can’t actually test it), this basically provides a “full” Ubuntu installation in Windows using a Linux subsystem that maps Linux kernel calls to Windows kernel calls (it sounds similar in spirit to FreeBSD’s Linux binary compatibility layer, though I could be wrong). Because there’s a dedicated Linux subsystem, the performance will likely far outstrip any virtual machine, but there’s no X/Mir/Wayland, so you’re not going to be able to run a full DE (I’m taking bets on how long before someone ports one just for fun).
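To illustrate the general idea, here is a purely toy sketch of what a syscall translation layer does conceptually: a dispatch table mapping one kernel’s call names onto handlers for another. This is not how the actual subsystem is implemented, and the handler bodies below are made-up stand-ins for NT-side entry points.

```python
# Toy illustration of a syscall translation layer: a Linux-style call
# name is looked up and dispatched to a stand-in "NT" handler. The
# handlers here just return strings; a real layer runs in the kernel.

def nt_create_file(path):
    # Stand-in for an NT-side file-open entry point.
    return f"NtCreateFile({path})"

def nt_read_file(handle):
    # Stand-in for an NT-side read entry point.
    return f"NtReadFile({handle})"

# Map Linux syscall names to the handlers that emulate them.
SYSCALL_TABLE = {
    "open": nt_create_file,
    "read": nt_read_file,
}

def translate(linux_syscall, *args):
    handler = SYSCALL_TABLE.get(linux_syscall)
    if handler is None:
        raise NotImplementedError(f"no translation for {linux_syscall!r}")
    return handler(*args)

print(translate("open", "/etc/hostname"))
```

A real implementation has to cover hundreds of syscalls and reconcile semantics (errno conventions, fork, signals) that have no direct NT equivalent, which is presumably where the rough edges people are reporting come from.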
As a dev who works with both Windows and Linux and spends a fair amount of time in Windows I’m quite interested in this because it opens up a whole world of possibilities of running Linux (specifically Ubuntu) software on Windows… Here’s a perfect example.
I think the whole point of this is to actually help developers, and MS has generally been pretty decent about being good to its developers (any excuse to bring this up). I work with the MS stack and they have good developer tools. Integrating the equally good developer tools available on Linux into the developer’s workflow is a logical step, since most developers already have to do that. For example, I already use Docker to run certain services because I don’t want to have to install them on my machine. I have to run a VM for this, but eventually it may be able to run natively, without the overhead of a VM and without sucking up 2GB of my RAM (which will no doubt be handed over to Chrome!). In the cloud, Linux won… so why should it be so damn difficult to integrate the tool-chain of the cloud into my local development process? It shouldn’t be, and this is just one step in making my life easier.
And as for the inevitable “embrace, extend, extinguish” rhetoric that’s going to be brought up, I don’t think it’s about that at all. The pessimist in me wants to believe that MS can never change and it’s all just a case of history repeating, but the reality is that things have changed since the nineties/early 2000s. No single company holds a monopoly on our computing because we no longer do our computing on a single device. At Build, Microsoft touted 270m Windows 10 installs. That is a HUGE number, but assuming it’s 270m people (which it isn’t), how many of those people also have an Android phone? If Android has a market share of roughly 80%, then that’s about 216m people who do at least some of their computing on a non-Windows platform. And how many of those people have an iPad, an Android tablet, or a Blackberry, dual boot, own a PS4 or a Wii, or even do some of their computing on a smart TV or a TV box (e.g. Apple TV)? The computing landscape is no longer dominated by a single company because our computing is no longer restricted to a single device.
And EVEN IF Microsoft tried their old “embrace, extend, extinguish” tricks, they wouldn’t get very far, because there are companies invested in Linux that are more than capable of taking on MS (Google, anyone)! So at worst they’d maybe take out Canonical/Ubuntu… Well, then we’d switch to Debian/Fedora/SUSE/Slackware/Arch/[name your favourite non-Ubuntu-based distro here].
If this is a topic that is of interest to you, stay tuned
I’m assuming from @jeeremy’s comment that this is going to be a segment on a future podcast.
But if you can’t wait there is plenty more information in the latest Ubuntu Weekly Newsletter
On this topic this makes quite an interesting read: http://arstechnica.com/information-technology/2016/04/why-microsoft-needed-to-make-windows-run-linux-software/
It does go some way to explaining the rationale behind the move.
As an aside, having listened to the show, I must say that while I see @bryanlunduke’s point about virtualisation being an “all round” better solution, I think this misses the point a little. Virtualisation is a brilliant platform when you need an entire Linux operating system, but developers don’t NEED an entire Linux operating system most of the time.
Redis is a perfect example. There’s no Windows version of Redis, meaning that if you wanted to run Redis you had to run a virtual machine… an entire VM for one app! That’s no longer the case: you can start bash and
`sudo apt-get install redis-server` (the Ubuntu package name for the Redis server).
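Once Redis is installed, a client just speaks the RESP protocol over a TCP socket. As a minimal sketch (Python standard library only, no server needed to see it), this is roughly how a client encodes a command on the wire:

```python
# Minimal sketch of Redis's RESP wire format: a command is sent as an
# array of bulk strings. No server is required to see the encoding.

def encode_command(*parts):
    """Encode a Redis command (e.g. SET key value) as RESP bytes."""
    out = [f"*{len(parts)}\r\n".encode()]
    for part in parts:
        data = part.encode()
        out.append(f"${len(data)}\r\n".encode() + data + b"\r\n")
    return b"".join(out)

print(encode_command("PING"))
# A real client would sock.sendall() these bytes to port 6379 and
# read back a reply such as b"+PONG\r\n".
```

The point being: once the server runs natively under the subsystem, your Windows-side tooling only needs an ordinary socket to talk to it.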
I think @sil made the point that developers are choosing to develop on OSX and deploy to Linux… I agree that this is the short-term goal of bringing Bash to Windows. I think longer term we’re going to see more and more of the Linux tooling being made available and integrated into Windows, meaning that more and more of the tooling is going to be common across all platforms, which could and should make Windows a far more viable platform to develop software (any software) on.
Great show today, with a lively discussion of Bash on Windows. I have been torn about this issue, so I was happy to hear what other Linux users thought. Just to add from experience, I have seen a workplace where Macs were allowed only for developers, to get certain UNIX tools while still using proprietary software. Now that those same tools work on Windows (still beta), I can see why companies would be happy to hand out laptops at a third the cost of a MacBook.
That said, I like the final part of the discussion, with the “morality” of using Bash on Windows. From a cost perspective, it makes sense. But from a moral standpoint, in allowing free software to be used in such a way, does this go against the spirit of free software? Should we not be attacking Macs for using these same tools as well? Maybe the cost is our moral measuring tape?
I would be interested in a follow-up segment about where we should draw the line on free software in a proprietary system.
I had a go at this thing today; it’s not very complete at all. It’s very clear that they’ve done just enough to get most applications running, but it’s still very early.
Here’s an imgur album of things that I’ve tried. I tried installing Python from Linuxbrew; it pulled in a dependency (pkg-config) and things compiled just fine, but when moving it to the appropriate Cellar, `brew` couldn’t find any files anymore.
There’s no kernel, no udev, and upstart is just ignored, it seems. That makes sense, but a shim would be better, and it’s likely that they’ll put some in later.
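One quick, hedged way to poke at how much of a “real” Linux the environment presents is to ask it to describe itself; this stdlib sketch just reports whatever the kernel (or, in this case, the compatibility layer) chooses to claim:

```python
# Probe what the running environment claims to be. On a real Linux box
# platform.release() is the kernel version; under the Linux subsystem
# it is whatever version string the compatibility layer fabricates.
import platform

def describe_environment():
    return {
        "system": platform.system(),    # e.g. "Linux"
        "release": platform.release(),  # kernel (or emulated) version
        "machine": platform.machine(),  # e.g. "x86_64"
    }

print(describe_environment())
```

None of these values prove there is an actual kernel underneath, which is exactly the point of the post above: the subsystem only has to look convincing to the binaries it runs.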
Everything runs as root, which is a horrible idea; there’s been talk about automatically creating a user, which would be a good one.
Very incomplete; it seems they did just enough to get binaries working. I couldn’t talk to `system` in Python; it seems that networking isn’t finished yet. Another example: according to this, there’s little to no semaphore support, so it’s possible some applications will start acting weirdly.
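If semaphore support really is patchy, it’s cheap to probe from Python. This sketch uses only the standard library’s multiprocessing module, whose Semaphore is backed by the kernel’s POSIX semaphore primitives on Linux, so merely creating and using one exercises the underlying support:

```python
# Quick probe for working semaphore support: creating and acquiring a
# multiprocessing.Semaphore exercises the POSIX semaphore primitives
# that a Linux kernel (or a compatibility layer) has to provide.
import multiprocessing

def semaphores_work():
    try:
        sem = multiprocessing.Semaphore(1)
        acquired = sem.acquire(timeout=1)
        if acquired:
            sem.release()
        return acquired
    except OSError:
        # e.g. the subsystem refuses to create the semaphore at all
        return False

print(semaphores_work())
```

On a normal Linux install this prints True; on an environment with the gaps described above, it is the kind of thing that would fail or hang.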
The command-line application on Windows is called `lxrun`, with which you can install and uninstall the subsystem. It’s installed to `%LOCALAPPDATA%\lxss`; it looks like it’s hidden, but there’s an entire tree in there. `grub` is installed, no idea why.
You don’t want to do an `apt-get upgrade` right now; there’s a fix, but it’s clear that upgrading is going to cause problems for now.
I like the idea so far. I mostly use command line tools on my laptop which dual boots Windows and Ubuntu right now. If things start shaping up, and it gets easier to do some remote debugging for Python,
gdb, the JVM, etc. I’ll likely switch over. Then I won’t have to reboot for some games or Office.
Here’s a request forum for features.
Can you talk about why you chose not to use a VM for that? We discussed that point in the show, and it’s good to hear from people actually confronted with the choice.
Like most people here, I have to use Windows for work to access the corporate intranet. Though I would definitely consider a Linux machine if I could. But the laptops the company assigns to us aren’t exactly snappy spinning up the hard drive, so just loading up a VM is a no go. My company is kind enough to offer enterprise copies of VMware workstation but performance was abysmal when I last tried it.
My current setup includes a combination of SSH (Putty) and VNC (Chrome app) to a server running in a lab. Working through VNC can be slow though and limits what keyboard commands you can send (the lack of a functional ‘Alt’ key in VNC is annoying at times).
In all, I think this new subsystem for Linux sounds great. Having a suite of Unixy tools that can be installed with a minimum of configuration is something Windows has been sorely lacking.
The honest reason is that I’m a student. We usually don’t have the day to day set of responsibilities that a full time job has. If I had a workflow setup that used a VM, and I didn’t have the time to move things I would keep it all as it is. There are some in the labs who really like this idea, and would consider moving over back to Windows.
Another large reason is that I already have a dedicated work desktop at home running Arch. I wouldn’t remove Linux on my laptop if it was the only machine I had. I don’t really need anything outside of GCC/llvm, the jdk, python and texlive on my laptop, the desktop is far better at handling more complex tasks like Android studio or Qt.
Is it just me who finds it vaguely amusing that this is being called things like “Linux on Windows” when, actually, it’s almost everything from a GNU/Linux (ahem) system except the Linux?
(Pours petrol on the GNU/Linux naming debate, lights match, runs away to avoid inevitable flames)
Indeed, it’s very amusing that mostly GNU and other third-party tools are consistently called “Linux” in the press. It’s easier to think about, though, having only one name for it all. In any case, we should probably start running for the hills after opening that can of worms.
This Linux subsystem is not for home users or developers who work for a software company; it’s targeted at Windows’ main base, which is corporate users. If you work for a large company, you will have to use Windows on the desktop. This causes issues for developers who need to use development tools such as Ruby on Rails, or web development in general. Have you tried using a VM in a corporate environment? Try doing an apt-get through the corporate firewall. Hopefully it will be easier with the Linux subsystem. Usually you have permission to do updates on your Windows desktop if you are a developer; VMs get blocked.
I think you’ll still need Administrator privileges to be able to add/remove Windows features, and corporate firewalls won’t care what device or OS is trying to get access to something if they have rules in place to block/throttle access to external things. But it does lessen or negate the need to have an Ubuntu VM running on your machine, so you get more of the native resources on your machine.
Most developers get admin privileges on their local Windows machine, but when using a VM you need a separate machine name; a Linux subsystem would not require one. You are probably right about the firewall, though; it would still be blocked, but it does make it easier to do Rails development in a corporate environment. I know Microsoft has been working hard to make Rails easier to use in a Windows environment, especially given how far behind their ASP.NET platform is these days.
Now we just need them to back OpenJDK.
If it’s good enough for Google…
Very happy with this development. Love the Linux commands. Completely disagree with Bryan. But I ALWAYS completely disagree with Bryan. He lives in Neverrightania.