To confess, I’m not a CentOS/RHEL fan. I genuinely don’t understand how you can brand something as stable when it ships a five-year-old kernel that doesn’t work with a lot of new hardware and takes days to get anything newish installed. I personally run Arch on my servers, as you can install just the bare necessities and easily snapshot the build. Just wondering what the world’s thoughts are? To be clear, I’m forced to use CentOS at work, so I’ve had nearly four years of dealing with it.
With RHEL/CentOS/Ubuntu LTS/Debian/etc., when you install Apache or MySQL or BIND, for example, it stays the same version for as long as you have the server configured. Updates do not generally introduce new features or version incompatibilities. You can feel fairly assured that the tools you build your application on top of will be stable and unchanging, which is an excellent thing for production environments. Otherwise it could become a troubleshooting nightmare, I think. You may say that everyone should “test” before making any changes to production, but the less you have to “test”, the better, really.
I would only run Arch on a server if it was a personal home test server (which I do actually). Having worked on Linux servers in production for many years now, I personally like running CentOS/RHEL servers (I like debian/ubuntu as well). I thought I would hate it, but it really grew on me to be honest. So my advice is to give it more time and I think it will grow on you. Study up as well: https://access.redhat.com/site/documentation/en-US/Red_Hat_Enterprise_Linux/ RHCE should lead you to a very well paying job (I’m assuming you do sys admin work).
I’m actually a dev, but am forced to do a fair bit of sysadmin work because our IT is stretched a little. That’s probably why I’d use Arch for it; I can never get anything new to build properly on CentOS/RHEL because glibc and the other core libraries are so old (even with the ‘unstable’ versions).
Thanks for the link though, I’ll skim them when I have a mo.
Oh yeah, in that case I might run with Debian or Ubuntu LTS, which have newer versions.
You just answered your own question. It’s branded as stable because it’s proven, tested and Does Not Change. Ever.
Ops people (like me) love things that stay stable because it reduces risk and scope of the test/accept cycle when upgrades come along. Devs love things that stay stable because when upgrades come along, it reduces - if not eliminates - the possibility of ABI/API changes.
Fast-cycle distros like Gentoo, Fedora, non-LTS Ubuntu and the rest of the Debian downstreams are great fun for people who want to play with new technology, but when you want to deploy into production, stability is king.
Maybe if you’re a web developer who loves to do the node-vagrant-docker-etc happy dance, the latest builds of OSes are interesting, but if you’re a bank, cloud storage provider, mail host, and so on, you want things to be rock frakking solid.
So why not just do as I do and freeze a distro like Arch? If you can’t even build something like CMake, there’s a problem (in my eyes).
I’m running Django on Python 2.7.6 with Postgres. If you want stable versions, you get them from tarballs instead of through Pacman.
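For what it’s worth, the “freeze” part is basically just an ignore list in `/etc/pacman.conf` — a rough sketch, and the package names here are only examples of what you might hold back:

```ini
# /etc/pacman.conf — sketch of a partial freeze (package names are examples)
[options]
# Hold these back so `pacman -Syu` won't touch the toolchain or the database
IgnorePkg   = glibc gcc postgresql
IgnoreGroup = base-devel
```

Anything listed there gets skipped on system upgrades until you remove it from the list, which is roughly what “freezing Arch” amounts to in practice.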
Banks run most of their stuff out of Excel. There’s a very well known investment bank that wrote a bunch of critical stuff in A+ and there’s hardly anyone left who can maintain it, so I’m not sure they’re a good example of stability.
That sounds like a right pain in the arse. Also, where’s the commercial support? A large enterprise can throw a ton of cash at Red Hat or Canonical and get 24/7 frontline assistance and deployment tools up the wazoo. There’s a reason why RHN and Landscape exist, and it’s definitely not to freeze second- and third-tier distributions to try and hack them into enterprise mode. You buy what does it out of the box.
And some vendors only support a small subset of distributions, usually Red Hat, as they know they can depend on the long release cycles with zero ABI changes. This keeps enterprise software houses happy, and it keeps deployment teams and end-users happy. When an update rollout can reach a cost in the six-figure bracket and beyond in manpower costs alone, you quickly subscribe to the motto that Change Is Bad.
Wow, tarballs? It’s the 21st century; we’ve moved on.
I don’t know what banks you’re talking about, but the banks I know of run things like Oracle, DB2, WebSphere, RHEL, Solaris, Win2K8 and AIX. If I thought my bank was managing my money using Excel, I’d run - not walk - to the teller desk and close my accounts. I don’t know anyone who works ops or dev in a bank and uses a rolling release distro, it just doesn’t make sense from an enterprise stability perspective. Trust me.
Trust me, just look for VBA jobs in London and you’ll find hundreds to write Excel macros for banks. Plus, just Google for articles like this one: http://www.forbes.com/sites/timworstall/2013/02/13/microsofts-excel-might-be-the-most-dangerous-software-on-the-planet/
That’s for analysis, not for day-to-day trading (stocks, forex, futures, etc) and operations infrastructure.
Because if you want stable, you pick Ubuntu LTS or CentOS/RHEL, not some random rolling-release distro that you froze across your servers. Who knows what instability might be baked in, you don’t get security updates (since I assume you’re no longer taking updates at all), and you can’t get much specific help because no one else is running the exact same version combo.
And I say this as a person who runs Gentoo on his server. Speaking of which, why not run Gentoo? Latest everything? Check. Bare necessities? Hard to beat: since it’s all compiled, I can tune the compiler to my specific CPU if I want and only compile in support for the tools and libraries I’m actually going to install and support. And the Gentoo Hardened team is doing great security work; their fine work is one of the reasons I use Gentoo.
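The CPU tuning and “only what I actually run” parts live in make.conf — here’s a sketch; the exact flags and USE entries are examples you’d tune for your own box, not a recommendation:

```ini
# /etc/portage/make.conf — illustrative sketch, values are examples
COMMON_FLAGS="-O2 -march=native -pipe"   # -march=native targets this exact CPU
CFLAGS="${COMMON_FLAGS}"
CXXFLAGS="${COMMON_FLAGS}"
MAKEOPTS="-j4"                           # parallel build jobs
# Strip desktop support, keep only what the server needs
USE="-X -gtk -qt5 nginx postgres"
```

Turning a USE flag off means that support simply never gets compiled in, which is how you end up with a genuinely minimal install.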
The servers I have running for USAF flight simulators are very old CentOS installs. We run them that way because they’re tried and true, and an update that could break a sim is training lost. In the eyes of most of the companies I work with, it’s a solid foundation and scripts that get the job done, not “bleeding edge” software. It drives me crazy, being an Arch guy myself, but I understand their concerns. I can’t sit still when it comes to my own rig, but companies don’t have the time or the patience for the latest and greatest software if it’s not compatible with their tried and true stable system. Just my $.02 worth.
It’s mainly a cost thing (which time also translates into). If the cost-benefit analysis of an upgrade says the time spent, manpower spent and money spent is more than the resultant improvements and/or cost savings post-upgrade, then it just won’t happen until it Absolutely Must Happen.
Vendor support, internal QA and hardware compatibility are usually the primary factors in whether or not something gets upgraded. Why do you think so many enterprises are only now making the move from Windows XP to Windows 7? (Answer: because newer laptops are less and less compatible with XP. That’s it. Pure and simple.)
Difference is that something like RHEL provides security patches to their “frozen” packages.
Ha, more my point was: if I’m not going to use the stability of Ubuntu or RHEL/CentOS, why not go all out? Why go “half way” to Arch? If you’re going to toss one requirement, you might as well max out the other as compensation.
Last month our CentOS file server shredded about a third of its data due to two specific bugs in swRAID and fsck… Some reading of the mailing lists showed the maintainers’ stance, which I’ll paraphrase: “you should be running newer code; those bugs were fixed”… I’m gonna roll some Arch and see how that goes.
I run Arch on most of my fluid servers. I believe software in general is transient and needs constant work, so I don’t agree with the whole API-stability argument for enterprise distros; I believe it encourages the very old-fashioned ethos of ‘build it and leave it running forever’. Things move so fast these days that I don’t think enterprise distros are keeping up very well.
Sure, if you are a big institution then IMO RHEL is for you, because you shamefully admit that you cannot possibly keep your 1M+ lines of enterprise web app code up with the API changes a rolling distro introduces without your whole institution crumbling… but for a Joe Schmoe simple web host, I think a rolling distro like Arch is the more secure and less lazy way of keeping a server running, as it encourages you to actually work on it.
I’ve been doing it for a couple of years and have had no server downtime yet… although admittedly it does feel a bit more dangerous than my old CentOS servers, but I think that’s just old habits dying hard. We certainly have awesome tech like containers now anyway, which means we can have the best of both worlds.
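“Best of both worlds” here meaning something like: a boring stable host OS, with the fast-moving stack isolated inside a container. A sketch — the base image, packages and app layout are just examples, not a recommendation:

```dockerfile
# Sketch: stable distro on the host, fresh userland inside the container.
# Image tag, packages and paths are illustrative examples only.
FROM archlinux:latest

# Pull in a current runtime without touching the host's frozen packages
RUN pacman -Syu --noconfirm python postgresql-libs

COPY app/ /srv/app/
CMD ["python", "/srv/app/manage.py", "runserver", "0.0.0.0:8000"]
```

If the new userland breaks, you roll back the image; the host underneath stays on its tried-and-true versions either way.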
Having worked in enterprise-sized companies before coughbroadcomcough, where herculean efforts were made to ensure massive farms of RHEL, Solaris, etc. were kept as up to date as viably possible, that’s a bit unfair.
This is a little different from “why doesn’t everyone use Arch”; if your codebase/customerbase is small enough that it’s trivial to keep up with a moving target like that and you gain some benefit from the extra admin involved then a rolling release is probably fine. But the majority of servers are not serving some buzzword-compliant continually-integrated devopsy site, they’re serving a largely static and heavily-tested codebase for which it’s more important that it carries on working as it currently is than that it gets to link with today’s shiny new glibc.
I personally find that stability is the key to server installations, as has been pointed out here. Debian, CentOS and Ubuntu LTS are all platforms I would consider for enterprise environments. I usually run Debian testing on my home computers/servers, as this allows me to play with new features and get a good grip on the next stable release.
Generally there is less to test when applying patches and upgrades to production servers running stable/LTS versions which keeps my workload down. It comes down to whatever you are happy to support.