
posted by Fnord666 on Monday February 25 2019, @10:11AM   Printer-friendly
from the the-word-according-to-linus dept.

https://www.realworldtech.com/forum/?threadid=183440&curpostid=183486

Guys, do you really not understand why x86 took over the server market?

It wasn't just all price. It was literally this "develop at home" issue. Thousands of small companies ended up having random small internal workloads where it was easy to just get a random whitebox PC and run some silly small thing on it yourself. Then as the workload expanded, it became a "real server". And then once that thing expanded, suddenly it made a whole lot of sense to let somebody else manage the hardware and hosting, and the cloud took over.

Do you really not understand? This isn't rocket science. This isn't some made up story. This is literally what happened, and what killed all the RISC vendors, and made x86 be the undisputed king of the hill of servers, to the point where everybody else is just a rounding error. Something that sounded entirely fictional a couple of decades ago.

Without a development platform, ARM in the server space is never going to make it. Trying to sell a 64-bit "hyperscaling" model is idiotic, when you don't have customers and you don't have workloads because you never sold the small cheap box that got the whole market started in the first place.

Submitted via IRC for Bytram

Linus Torvalds pulls pin, tosses in grenade: x86 won, forget about Arm in server CPUs, says Linux kernel supremo

Channeling the late Steve Jobs, Linux kernel king Linus Torvalds this week dismissed cross-platform efforts to support his contention that Arm-compatible processors will never dominate the server market.

Responding to interest in Arm's announcement of its data center-oriented Neoverse N1 and E1 CPU cores on Wednesday, and a jibe about his affinity for native x86 development, Torvalds abandoned his commitment to civil discourse and did his best to dampen enthusiasm for a world of heterogeneous hardware harmony.

"Some people think that 'the cloud' means that the instruction set doesn't matter," Torvalds said in a forum post. "Develop at home, deploy in the cloud. That's bullshit. If you develop on x86, then you're going to want to deploy on x86, because you'll be able to run what you test 'at home' (and by 'at home' I don't mean literally in your home, but in your work environment)."

For Torvalds, this supposedly unavoidable preference for hardware architecture homogeneity means technical types will gladly pay more for x86 cloud hosting, if only for the assurance that software tested in a local environment performs the same way in the data center.

Jobs during his time as Apple's CEO took a similar stance toward native application development, going so far as to ban Adobe's Flash technology on devices running iOS in 2010. For Jobs, cross-platform code represented a competitive threat, bugs, and settling for lowest-common denominator apps.


Original Submission

 
  • (Score: 4, Informative) by aiwarrior (1812) on Monday February 25 2019, @03:15PM (#806319) Journal

    Here are my 2 cents, as this is something I actually do professionally: cross-platform development and server management.

    Things have changed recently. What Linus describes is part of a workflow I abandoned because it brought me more problems than I wanted. It used to be possible because software interdependencies used to be simpler, and features and bug fixes were slow to appear. Just look at how antiquated the build systems of the GNU tools (awk, bison) are compared with other projects. Have fun with libtool and autotools!

    Developing web stuff on your own machine is shit. Shit because it has the opposite problem of the GNU ecosystem: everything changes extremely fast. Example: do you want to run a Node.js server on your laptop for testing? Then you need the latest super-duper version, which your distro has not packaged yet. If this were a server you might take the plunge and manage these special packages manually, but for a multi-purpose machine like your development/personal computer this quickly becomes nightmarish. More: moving the hacks on your personal machine back and forth to a server will cause misalignment that at best costs you time and at worst introduces bugs. Also enjoy being afraid that an update will break the machine you also use for entertainment.

    The solution: have a virtual machine of your server, and share the relevant files with your dev/personal machine. Everything is exactly the same as the server; you develop on the real thing without it being the real thing. Hell! When I am in that mode I even modify my hosts file so that there is really no difference down to the address. This is required for some stuff that has hardcoded domains, like certificates.
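    A minimal sketch of what that looks like on my side (the IP address, domain, and paths below are made up for illustration; use whatever your VM and project actually have):

        # point the production hostname at the local VM (entry appended to /etc/hosts)
        echo "192.168.122.10  api.example.com" | sudo tee -a /etc/hosts

        # share the project directory between VM and dev machine (sshfs is one option)
        sshfs dev@192.168.122.10:/var/www/app ~/work/app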

    Now you could ask: hmm, do you mean containers like Docker, or full QEMU? My answer is: it depends. I use containers when I need something that is pre-configured "correctly" by the maintainers, like Node.js. I use full QEMU when I want to emulate a whole machine, regardless of architecture.
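    To make that concrete, here is the kind of thing I mean (the image tag and disk file name are just examples):

        # container route: run the tests against the maintainers' Node.js image
        docker run --rm -it -v "$PWD":/usr/src/app -w /usr/src/app node:11 npm test

        # full-machine route: boot a complete disk image under QEMU
        qemu-system-x86_64 -m 2048 -drive file=server.img,format=raw,if=virtio -nographic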

    So if you are on QEMU and/or containers, what is the deal with the architecture? I would say that as far as development is concerned: nothing. Thus I disagree with Linus.

    More: recently I needed to dump an EC2 instance and run it locally, exactly as it is. It is so special-purpose that recreating all the bullshit there in my local environment would be a waste of time. The problem is that idiotic/greedy Amazon only lets you export a virtual machine image from EC2 if you originally pushed a virtual machine image, or if you are a customer of VMware and other paid stuff (snapshots are not downloadable). Fortunately, from my embedded work I had quite good knowledge of setting up QEMU, so I thought: screw it, let's dd the whole disk and make it a drive for QEMU. Voila, I had my EC2 instance 1:1, down to the serial console.
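    The rough shape of that trick, for the curious (the host name, device, and sizes are assumptions; check what your instance actually uses before copying this):

        # stream the root disk out of the running instance over SSH
        # /dev/xvda is the usual EC2 root device, but verify with lsblk first
        ssh ec2-user@my-instance "sudo dd if=/dev/xvda bs=4M | gzip -c" | gunzip -c > ec2-root.img

        # boot the copy locally; -nographic gives you the same serial console
        qemu-system-x86_64 -m 2048 -drive file=ec2-root.img,format=raw,if=virtio -nographic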

    Again, virtualization is pretty much agnostic to my machine. I could be emulating my EC2 server while developing on my Odroid.

    http://www.pneves.net/2019/01/exporting-amazon-ec2-instance-and.html [pneves.net]

    On another topic: I use Yocto for cross-compilation development. There too I gave up trying to build my stuff natively on my Ubuntu box before deploying to the target.
    First, my Ubuntu toolchain is not the most modern (meson and gcc are not bleeding edge).
    Second, I want to emulate some hardware to run tests, and doing that on my local machine means different toolchains and headaches.

    If I just use devtool from the Yocto Project, I have all the cross-compilers and versions aligned, compile locally, and have scp push my binary to my QEMU instance. Voila! Even easier than local native builds.
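    Roughly, that loop looks like this (the recipe name, source URL, and machine are placeholders; 192.168.7.2 is just the usual runqemu default address):

        # set up the build environment and add my project as a recipe
        source oe-init-build-env
        devtool add myapp https://git.example.com/myapp.git
        devtool build myapp

        # boot the emulated target and push the freshly built binary to it
        # (devtool deploy-target wraps the scp step; a plain scp of the binary works too)
        runqemu qemux86-64 nographic &
        devtool deploy-target myapp root@192.168.7.2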

    This is my experience, of course, and yours may vary, but given that this is my job I would say I have been through some of the best and worst of experiences.

    The only place the architecture thing bites me is that ThreadSanitizer does not work for 32-bit Arm :(
