SoylentNews is people

posted by martyb on Monday July 13 2020, @08:41PM

Linus Torvalds' Initial Comment On Rust Code Prospects Within The Linux Kernel

Kernel developers appear to be eager to debate the merits of potentially allowing Rust code within the Linux kernel. Linus Torvalds himself has made some initial remarks on the topic ahead of the Linux Plumbers 2020 conference where the matter will be discussed at length.

[...] Linus Torvalds chimed in with his own opinion on the matter. Linus commented that he would like Rust support to be effectively enabled by default, to ensure widespread testing rather than isolated usage where developers might do "crazy" things. He isn't calling for Rust to be a requirement for the kernel, but rather that if the Rust compiler is detected on the system, Kconfig would enable Rust support and go ahead with building any hypothetical Rust kernel code, to at least verify that it builds properly.
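What Torvalds describes roughly corresponds to a Kconfig option gated on toolchain availability. A hypothetical sketch (the option and symbol names here are illustrative, not the actual kernel configuration):

```kconfig
config RUSTC_AVAILABLE
	bool
	# Set automatically when a working rustc is found on the build host.
	default $(success,command -v rustc)

config RUST
	bool "Rust kernel code support"
	depends on RUSTC_AVAILABLE
	default y
	help
	  Build any Rust kernel code whenever a Rust toolchain is present,
	  so hypothetical Rust code gets widespread build coverage instead
	  of isolated, rarely-exercised usage.
```

The point of defaulting to `y` when the compiler is detected is exactly the one Torvalds makes: broad, routine build testing.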

Linus Torvalds Wishes Intel's AVX-512 A Painful Death

According to a mailing list post spotted by Phoronix, Linux creator Linus Torvalds has shared his strong views on the AVX-512 instruction set. The discussion arose as a result of recent news that Intel's upcoming Alder Lake processors reportedly lack support for AVX-512.

Torvalds' advice to Intel is to focus on things that matter instead of wasting resources on new instruction sets, like AVX-512, that he feels aren't beneficial outside the HPC market.

Related: Rust 1.0 Finally Released!
Results of Rust Survey 2016
AVX-512: A "Hidden Gem"?
Linus Torvalds Rejects "Beyond Stupid" Intel Security Patch From Amazon Web Services


Original Submission

Related Stories

Rust 1.0 Finally Released! 75 comments

After many years of waiting, version 1.0 of the Rust programming language has finally been released. The Rust home page describes Rust as "a systems programming language that runs blazingly fast, prevents nearly all segfaults, and guarantees thread safety."

Thanks to the hard work of noted Rust core team members Yehuda Katz and Steve Klabnik, Rust is now poised to become a serious competitor to established systems programming languages like C and C++.

The announcement has brought much jubilation to the followers of Rust, who have been eagerly awaiting this milestone release for so long. With only 1,940 open issues and over 11,500 issues already closed, Rust is finally ready for users to build fantastically reliable software systems using it.

Results of Rust Survey 2016 20 comments

[Ed. Note: The first link seems to have been pulled from the web after this story was accepted. We apologize that we have been unable to find a link to replace it.]

Arthur T Knackerbracket has found the following story about the progress of the Rust programming language and its growing usage:

"The Results are in! Thank you to all 3103 of you who responded to our first Rust community survey. We were overwhelmed by your responses, and as we read your comments we were struck by the amount of time and thought you put into them. It's feedback like this that will help us focus our energy and make sure Rust continues to grow the best way. A big reason for having the survey was to make the results available publicly so that we can talk about it and learn from it together. In this blog post, we'll take a first look at the survey responses, including themes in the comments, demographics, and quantitative feedback.

Do You Use Rust?

We wanted to make sure the survey was open to both users of Rust as well as people who didn't use Rust. Rust users help us get a sense of how the current language and tools are working and where we need to improve. Rust non-users give us another perspective, and help shed light on the kinds of things that get in the way of someone using Rust. I'm happy to report that more than a third of the responses were from people not using Rust. This gave us a lot of great feedback on those roadblocks, which we'll talk about in this (and upcoming) blog posts.

But first, let's look into the feedback from Rust users.

Rust Users

How long have you been using Rust?

Almost 2000 people responded saying they were Rust users. Of these, almost 24% were new users. This is encouraging to see. We're still growing, and we're seeing more people playing with Rust now that could become long-term users. Equally encouraging is seeing that once someone has become a Rust user, they tend to stick around and continue using it. One might expect a sharp drop-off if users became quickly disenchanted and moved onto other technologies. Instead, we see the opposite. Users that come in and stay past their initial experiences tend to stay long-term, with a fairly even spread between 3 months to 12 months (when we first went 1.0).

There is much more to be found in the full story for Rust aficionados.



AVX-512: A "Hidden Gem"? 6 comments

Upcoming Intel processors will support scalable AVX-512 instructions, which one former Intel employee calls a "hidden gem":

Imagine if we could use vector processing on something other than just floating point problems. Today, GPUs and CPUs work tirelessly to accelerate algorithms based on floating point (FP) numbers. Algorithms can definitely benefit from basing their mathematics on bits and integers (bytes, words) if we could just accelerate them too. FPGAs can do this, but the hardware and software costs remain very high. GPUs aren't designed to operate on non-FP data. Intel AVX introduced some support, and now Intel AVX-512 is bringing a great deal of flexibility to processors. I will share why I'm convinced that the "AVX512VL" capability in particular is a hidden gem that will let AVX-512 be much more useful for compilers and developers alike.

Fortunately for software developers, Intel has done a poor job keeping the "secret" that AVX-512 is coming to Intel's recently announced Xeon Scalable processor line very soon. Amazon Web Services has publicly touted AVX-512 on Skylake as coming soon!

It is timely to examine the new AVX-512 capabilities and their ability to impact beyond the more regular HPC needs for floating point only workloads. The hidden gem in all this, which enables shifting to AVX-512 more easily, is the "VL" (vector length) extensions which allow AVX-512 instructions to behave like SSE or AVX/AVX2 instructions when that suits us. This is a clever and powerful addition to enable its adoption in a wider assortment of software more quickly. The VL extensions mean that programmers (and compilers) do not need to shift immediately from 256-bits (AVX/AVX2) to 512-bits to use the new bit/byte/word manipulations. This transitional benefit is useful not only for an interim, but also for applications which find 256-bits more natural (perhaps a small, but important, subset of problems).
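The masked "bit/byte/word manipulations" the author refers to can be illustrated in scalar code. This is a hypothetical per-lane model of what an AVX-512 masked byte add does (the real instruction, e.g. a byte add under a mask register, processes all lanes at once; the VL extensions make the same masked operation available at 128- and 256-bit widths, not just 512):

```rust
// Scalar model of an AVX-512 masked byte add: for each lane i,
// the result is a[i] + b[i] if mask bit i is set, else src[i]
// is passed through unchanged.
// With the VL extensions the same operation runs on 16, 32, or
// 64 lanes (128-, 256-, or 512-bit vectors).
fn masked_add_bytes(src: &[u8], a: &[u8], b: &[u8], mask: u64) -> Vec<u8> {
    src.iter()
        .zip(a.iter().zip(b.iter()))
        .enumerate()
        .map(|(i, (&s, (&x, &y)))| {
            if (mask >> i) & 1 == 1 { x.wrapping_add(y) } else { s }
        })
        .collect()
}

fn main() {
    let src = [0u8; 4];
    let a = [10u8, 20, 30, 40];
    let b = [1u8, 2, 3, 4];
    // mask 0b0101: only lanes 0 and 2 are updated
    let r = masked_add_bytes(&src, &a, &b, 0b0101);
    println!("{:?}", r); // [11, 0, 33, 0]
}
```

The merge-with-source behavior under a mask is what lets compilers vectorize loops containing conditionals without branching per element.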

Will it be enough to stave off "Epyc"?



Linus Torvalds Rejects "Beyond Stupid" Intel Security Patch From Amazon Web Services 50 comments

Linus Torvalds rejects 'beyond stupid' AWS-made Linux patch for Intel CPU Snoop attack

Linux kernel head Linus Torvalds has trashed a patch from Amazon Web Services (AWS) engineers that was aimed at mitigating the Snoop attack on Intel CPUs discovered by an AWS engineer earlier this year. [...] AWS engineer Pawel Wieczorkiewicz discovered a way to leak data from an Intel CPU's memory via its L1D cache, which sits in CPU cores, through 'bus snooping' – the cache updating operation that happens when data is modified in L1D.

In the wake of the disclosure, AWS engineer Balbir Singh proposed a patch for the Linux kernel for applications to be able to opt in to flush the L1D cache when a task is switched out. [...] The feature would allow applications on an opt-in basis to call prctl(2) to flush the L1D cache for a task once it leaves the CPU, assuming the hardware supports it.

But, as spotted by Phoronix, Torvalds believes the patch will allow applications that opt in to the patch to degrade CPU performance for other applications.

"Because it looks to me like this basically exports cache flushing instructions to user space, and gives processes a way to just say 'slow down anybody else I schedule with too'," wrote Torvalds yesterday. "In other words, from what I can tell, this takes the crazy 'Intel ships buggy CPU's and it causes problems for virtualization' code (which I didn't much care about), and turns it into 'anybody can opt in to this disease, and now it affects even people and CPU's that don't need it and configurations where it's completely pointless'."



Following Layoffs, Mozilla and Core Rust Developers Are Forming a Rust Foundation 37 comments

Rust Core Team + Mozilla To Create A Rust Foundation

Rust's core team and Mozilla are announcing plans to create a Rust foundation, with the hope of establishing this legal entity by year's end. The trademarks and related assets of Rust, Cargo, and Crates.io will belong to this foundation. Work is well underway: the idea of an independent Rust foundation was originally raised last year, and has now been pushed along by the recent Mozilla layoffs and the global pandemic. This should give the Rust community more security than being reliant upon a sole organization (Mozilla), help foster growth, and open up new possibilities.

Lay(off)ing the foundation for Rust's future

Previously: Mozilla Lays Off 250, Including Entire Threat Management Team

Related: Linus Torvalds: Don't Hide Rust in Linux Kernel; Death to AVX-512



Linus Torvalds On The Importance Of ECC RAM, Calls Out Intel's "Bad Policies" Over ECC 99 comments

Linus Torvalds On The Importance Of ECC RAM, Calls Out Intel's "Bad Policies" Over ECC

There's nothing quite like a fiery mailing list post by Linus Torvalds for some fun holiday-weekend reading. The Linux creator is out with one of his classic messages, this time arguing for the importance of ECC memory and giving his opinion on how Intel's "bad policies" and market segmentation have made ECC memory less widespread.

Linus argues that error-correcting code (ECC) memory "absolutely matters" but that "Intel has been instrumental in killing the whole ECC industry with it's horribly bad market segmentation... Intel has been detrimental to the whole industry and to users because of their bad and misguided policies wrt ECC. Seriously...The arguments against ECC were always complete and utter garbage... Now even the memory manufacturers are starting [to] do ECC internally because they finally owned up to the fact that they absolutely have to. And the memory manufacturers claim it's because of economics and lower power. And they are lying bastards - let me once again point to row-hammer about how those problems have existed for several generations already, but these f*ckers happily sold broken hardware to consumers and claimed it was an "attack", when it always was "we're cutting corners"."

Ian Cutress from AnandTech points out in a reply that AMD's Ryzen ECC support is not as solid as believed.

Related: Linus Torvalds: 'I'm Not a Programmer Anymore'
Linus Torvalds Rejects "Beyond Stupid" Intel Security Patch From Amazon Web Services
Linus Torvalds: Don't Hide Rust in Linux Kernel; Death to AVX-512
Linus Torvalds Doubts Linux will Get Ported to Apple M1 Hardware



AnandTech Reviews Intel's i7-11700K "Rocket Lake" CPU Early 14 comments

Intel's next-generation "Rocket Lake" CPUs will be some of Intel's last desktop models on a "14nm" node, and include "backported" Willow Cove cores (referred to as "Cypress Cove") from "10nm" Tiger Lake mobile CPUs, with improved instructions per clock. Notably, the lineup only goes up to 8 cores, instead of 10 cores for the previous Core i9. The review embargo ends on the launch date, March 30th, but some retailers have been selling the CPUs early. AnandTech obtained an 8-core i7-11700K and wrote a review of it. The results were not great.

Power consumption of the 125 W TDP chip peaked at 224.56 W when running an AVX2 workload, compared to 204.79 W for its i7-10700K "Comet Lake" predecessor and 141.45 W for AMD's Ryzen 7 5800X. The i7-11700K reached 291.68 W with an AVX-512 workload.

The i7-11700K not only failed to beat the 5800X in many benchmarks, but trailed the previous-gen i7-10700K in some cases. The major exception is performance in AVX-512 workloads. Gaming performance of the i7-11700K was particularly bad, in part due to increases in L3 cache latency and core-to-core latency.

It's possible that there will be some improvements from a final microcode update before launch. There are also models like the Core i9-11900K, which have the same 8 cores but can clock up to 300 MHz higher.

See also: Intel Core i7-11700K 8 Core Rocket Lake CPU Review Published By Anandtech – Very Hot, Consumes More Power Than Core i9-10900K & Slower Than AMD In Core-To-Core Tests

Related: Linus Torvalds: Don't Hide Rust in Linux Kernel; Death to AVX-512
Former Intel Principal Engineer Blasts the Company
Gigabyte Confirms Intel Rocket Lake Desktop CPUs Will Launch in March



Former Intel Principal Engineer Blasts the Company 10 comments

What's wrong with Intel, and how to fix it: Former principal engineer unloads (archive)

In a blunt video posted late Thursday evening, outspoken former Intel principal engineer Francois Piednoel offered his advice on how to "fix" Intel CPUs, criticized current leadership for not being engineers, said AVX512 was a misadventure, and declared that it's only luck AMD hasn't grabbed more market share.

"First, Intel is really out of focus," Piednoel said in the nearly hour-long video presentation. "The leaders of Intel today are not engineers, they are not people who understand what to design to the market."

[...] Piednoel flat-out dismissed including AVX512 in consumer chips as a mistake. "You had Skylake and Skylake X for a reason," Piednoel said. "AVX512 is designed for a race of throughput that is lost to the GPU already. There's two ways to get throughput. One is to get the throughput is by having larger vectors to your core, and the other way is to have more cores."

[...] "Intel is very lucky AMD cannot get the volume, to be able to compete," Piednoel said. "If they were getting volume, the price difference would definitely cost Intel market share a lot more than what they are losing right now."

Related: AVX-512: A "Hidden Gem"?
Intel CEO Blames "10nm" Delays on Aggressive Density Target, Promises "7nm" for 2021
Intel's Process Nodes Will Trail Behind Competitors Until at Least Late 2021
Linus Torvalds: Don't Hide Rust in Linux Kernel; Death to AVX-512
Intel Engineering Chief Out After 7nm Product Delays
Intel Faces Class-Action Lawsuit Over "7nm" Delays



The ISRG Wants to Make the Linux Kernel Memory-Safe with Rust 26 comments

The ISRG wants to make the Linux kernel memory-safe with Rust

The Internet Security Research Group (ISRG)—parent organization of the better-known Let's Encrypt project—has provided prominent developer Miguel Ojeda with a one-year contract to work on Rust in Linux and other security efforts on a full-time basis.

As we covered in March, Rust is a low-level programming language offering most of the flexibility and performance of C—the language used for kernels in Unix and Unix-like operating systems since the 1970s—in a safer way.

Efforts to make Rust a viable language for Linux kernel development began at the 2020 Linux Plumbers conference, with acceptance for the idea coming from Linus Torvalds himself. Torvalds specifically requested Rust compiler availability in the default kernel build environment to support such efforts—not to replace the entire source code of the Linux kernel with Rust-developed equivalents, but to make it possible for new development to work properly.

Using Rust for new code in the kernel—which might mean new hardware drivers or even replacement of GNU Coreutils—potentially decreases the number of bugs lurking in the kernel. Rust simply won't allow a developer to leak memory or create the potential for buffer overflows—significant sources of performance and security issues in complex C-language code.
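The buffer-overflow claim can be made concrete. In safe Rust, an out-of-bounds access is either a checked `Option` or a panic at the faulting access; it can never silently read or corrupt adjacent memory the way out-of-bounds indexing can in C. A minimal sketch:

```rust
// Safe indexing: `checked_read` returns Some(value) for an
// in-bounds index and None for an out-of-bounds one, instead of
// reading whatever happens to sit past the end of the buffer.
fn checked_read(buf: &[u8], i: usize) -> Option<u8> {
    buf.get(i).copied()
}

fn main() {
    let buf = [1u8, 2, 3, 4];
    println!("{:?}", checked_read(&buf, 2));  // Some(3)
    println!("{:?}", checked_read(&buf, 10)); // None
    // A plain `buf[10]` would panic at the access itself rather
    // than overflow the buffer -- the bug is caught, not exploited.
}
```

This is the class of defect Rust-in-the-kernel proponents want to rule out for new drivers by construction.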

Previously: Linus Torvalds: Don't Hide Rust in Linux Kernel; Death to AVX-512

Related: Microkernel, Rust-Programmed Redox OS's Devs Slam Linux, Unix, GPL
Following Layoffs, Mozilla and Core Rust Developers Are Forming a Rust Foundation



This discussion has been archived. No new comments can be posted.
  • (Score: 1, Funny) by Anonymous Coward on Monday July 13 2020, @08:51PM (5 children)

    by Anonymous Coward on Monday July 13 2020, @08:51PM (#1020672)

    "The discussion arose as a result of recent news that Intel's upcoming Alder Lake processors reportedly lack support for AVX-512."

    I don't get it. Intel is dropping AVX-512 in their upcoming processor and everyone is complaining that AVX-512 shouldn't be included and it's a waste of space.

    If that went over your head...

    Democrats won the election, but Democrats are complaining a Democrat won the election.

    • (Score: 0) by Anonymous Coward on Monday July 13 2020, @09:02PM

      by Anonymous Coward on Monday July 13 2020, @09:02PM (#1020684)

      I hope AVX512 dies a painful death, and that Intel starts fixing real problems instead of trying to create magic instructions to then create benchmarks that they can look good on.

      I hope Intel gets back to basics: gets their process working again, and concentrate more on regular code that isn't HPC or some other pointless special case.

I've said this before, and I'll say it again: in the heyday of x86, when Intel was laughing all the way to the bank and killing all their competition, absolutely everybody else did better than Intel on FP loads. Intel's FP performance sucked (relatively speaking), and it mattered not one iota.

      Because absolutely nobody cares outside of benchmarks.

      The same is largely true of AVX512 now - and in the future. Yes, you can find things that care. No, those things don't sell machines in the big picture.

      And AVX512 has real downsides. I'd much rather see that transistor budget used on other things that are much more relevant. Even if it's still FP math (in the GPU, rather than AVX512). Or just give me more cores (with good single-thread performance, but without the garbage like AVX512) like AMD did.

      I want my power limits to be reached with regular integer code, not with some AVX512 power virus that takes away top frequency (because people ended up using it for memcpy!) and takes away cores (because those useless garbage units take up space).

Yes, yes, I'm biased. I absolutely detest FP benchmarks, and I realize other people care deeply. I just think AVX512 is exactly the wrong thing to do. It's a pet peeve of mine. It's a prime example of something Intel has done wrong, partly by just increasing the fragmentation of the market.

      Stop with the special-case garbage, and make all the core common stuff that everybody cares about run as well as you humanly can. Then do a FPU that is barely good enough on the side, and people will be happy. AVX2 is much more than enough.

      Yeah, I'm grumpy.

      Linus

    • (Score: 1, Insightful) by Anonymous Coward on Monday July 13 2020, @09:58PM (3 children)

      by Anonymous Coward on Monday July 13 2020, @09:58PM (#1020752)

It's crap because it's an instruction set for only some Intel processors, which even Intel is not including in all their processors, which only makes it even more useless than it already is. So it's a waste of resources pretty much every which way.

      • (Score: 2) by driverless on Tuesday July 14 2020, @03:02AM

        by driverless (4770) on Tuesday July 14 2020, @03:02AM (#1020992)

        Makes Intel look good on specific benchmarks though.

      • (Score: 1) by petecox on Tuesday July 14 2020, @03:08AM (1 child)

        by petecox (3228) on Tuesday July 14 2020, @03:08AM (#1020997)

I use a package-based distribution, so, yes, I probably wouldn't see any benefit.

It might be of some use to Gentoo's user base, though, who compile every optimisation for every piece of software on their machines!

        • (Score: 0) by Anonymous Coward on Tuesday July 14 2020, @04:43AM

          by Anonymous Coward on Tuesday July 14 2020, @04:43AM (#1021051)

          AVX-512 is only a benefit in really specific circumstances. Other than those, it hurts performance and can hurt it quite badly. One of the biggest reasons why is that in order for AVX-512 to work, the entire CPU is clocked down, slowing all other threads on that chip for the duration of that part of the pipeline.

  • (Score: 0) by Anonymous Coward on Monday July 13 2020, @09:12PM (28 children)

    by Anonymous Coward on Monday July 13 2020, @09:12PM (#1020697)

Those in the know, chime in with your pros and cons of these two languages aiming to replace C/C++.

    From the little I read, I am leaning towards Go - seems cleaner and "less-is-more" kinda language - it's THE lesson C++ taught us.

Rust seems a more "managed" language, to save the programmers from themselves, and that mentality doesn't seem right for replacing C/C++.

    Go is a spawn of Google. Rust is a spawn of Mozilla. Make of it what you will.

    So go ahead. Make your case for either, or both, or neither.

    • (Score: 5, Informative) by NickM on Monday July 13 2020, @09:29PM (4 children)

      by NickM (2867) Subscriber Badge on Monday July 13 2020, @09:29PM (#1020714) Journal
      Your wrong on the managed aspect of Rust, it dosen't use a GC, doesn't possess runtime reflection and like C++ it has a zero-cost abstraction model; zero cost if you exclude the geological compilations times and the cognitive load of having a complex evolving language. Go, like JVM and CLR languages, is a managed language, it use a GC and has runtime reflection.
      --
      I a master of typographic, grammatical and miscellaneous errors !
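The distinction NickM draws (no GC, yet still automatic cleanup) comes from Rust's ownership model: a value is destroyed deterministically when its owner goes out of scope, with no collector thread involved. A minimal sketch using the `Drop` trait, with the log used purely to make the ordering observable:

```rust
use std::cell::RefCell;

// A type that records its own destruction in a log, to show that
// cleanup happens deterministically at end of scope -- not whenever
// a garbage collector gets around to it.
struct Resource<'a> {
    name: &'static str,
    log: &'a RefCell<Vec<String>>,
}

impl<'a> Drop for Resource<'a> {
    fn drop(&mut self) {
        self.log.borrow_mut().push(format!("freed {}", self.name));
    }
}

fn scope_demo() -> Vec<String> {
    let log = RefCell::new(Vec::new());
    {
        let _r = Resource { name: "inner", log: &log };
        log.borrow_mut().push("inner alive".to_string());
    } // `_r` is dropped exactly here, at the closing brace
    log.borrow_mut().push("after scope".to_string());
    log.into_inner()
}

fn main() {
    println!("{:?}", scope_demo());
    // ["inner alive", "freed inner", "after scope"]
}
```

The same mechanism (RAII) is what frees heap allocations, closes files, and releases locks in Rust, which is why it needs neither a GC nor explicit `free` calls.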
      • (Score: 3, Funny) by Anonymous Coward on Monday July 13 2020, @10:36PM (3 children)

        by Anonymous Coward on Monday July 13 2020, @10:36PM (#1020787)

        ... zero cost if you exclude the geological compilations times and the cognitive load of having a complex evolving language.

        Dude. Like, we are nerds, right?! We don't do subtlety too good, you know?

        Don't post comment like that again. In the medieval times, your sort got hunged, and rightfully so.

        • (Score: 2, Funny) by Anonymous Coward on Tuesday July 14 2020, @01:51AM (2 children)

          by Anonymous Coward on Tuesday July 14 2020, @01:51AM (#1020919)

          I've been hung all my life

          • (Score: 0) by Anonymous Coward on Tuesday July 14 2020, @03:39AM

            by Anonymous Coward on Tuesday July 14 2020, @03:39AM (#1021018)

            Ladies?

            A well hung man is hard to find.

          • (Score: 2) by Bot on Wednesday July 15 2020, @12:48PM

            by Bot (3902) on Wednesday July 15 2020, @12:48PM (#1021865) Journal

            That's what she said, sarcastically.

            --
            Account abandoned.
    • (Score: 5, Interesting) by turgid on Monday July 13 2020, @09:38PM (15 children)

      by turgid (4318) Subscriber Badge on Monday July 13 2020, @09:38PM (#1020729) Journal

      I'm leaning towards D because it's much cleaner than either and designed by a guy (Walter Bright) with years of experience writing C and C++ compilers. When Go came out, I was underwhelmed by it. Rust seems to be designed by people who haven't heard of Test Driven Development and it suffers from some of the same problems as C++.

      If I had another 20-30 IQ points then perhaps I would have some to spare on difficult languages like C++ and Rust, but I just want to get things done with what I have. I have no interest in being "clever" and a language lawyer. I prefer not to put bugs in my code by keeping it simple and quick to compile, running the tests every time I change a line of code.

      Why is there a fashion for compilers (and languages) to become ever more complex, slow and potentially flaky?

      • (Score: 3, Insightful) by DannyB on Monday July 13 2020, @10:11PM (2 children)

        by DannyB (5839) Subscriber Badge on Monday July 13 2020, @10:11PM (#1020767) Journal

        Why is there a fashion for compilers (and languages) to become ever more complex

        Why do we have dishwashers when we can do dishes by hand?

        Why the noise and complexity of a backhoe when you can quietly dig a long ditch with a shovel.

        Because: human productivity.

        Compilers and languages have gotten more complex for the last sixty years. For that reason. To make humans more productive.

        In the last three decades we reached a point where human programming cost exceeds the cost of computers. Thus runtime inefficiency is a net dollars and cents gain if you can program the machine faster.

        Ask anyone (end users, customers, managers, etc), would you rather have your XYZ software delivered six months sooner, or have it be more efficient at runtime? Guess which one they will pick?

        There is also a consideration of languages protecting programmers from themselves. This is a Good Thing. As long as you can deliberately get around it when necessary by telling the compiler "I know what I'm doing". Fact: humans make mistakes.

        Now I'm going to mention Java as an extreme example, which is completely inappropriate when talking about the Linux kernel. But the economic reality is why Java has been the #1 or #2 language for the last 15 years in a row. [youtube.com] Java must be doing something right. GC must be a net economic win. (argument: the threads that service a customer request incur zero cpu cycles of memory management. GC threads on other cores incur that cost -- but not inline with servicing the customer request. The customer request naturally has to make enough money to pay the freight of the GC -- and it does.)

        There are different levels of abstraction, and protecting programmers, for different types of programming work. Writing kernels and microcontroller code is certainly different than business software. But some of these arguments apply about human productivity, economics, eliminating entire classes of bugs automatically if possible, using the compiler as your first line of unit testing. If the code won't even compile, then it fails the first unit test. :-) Letting those errors slide until runtime is simply a lazy approach and means that they might not be found in your unit testing, but will be found by your ultimate customer. I remember in the early 1980s reading about a rocket that was lost due to a type error that would have been a compile error in Pascal. Yes, early 1980s story.

        --
        Employers should not mandate wearing clothing. It should be a personal choice. It only affects me. Junk can't breathe!
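DannyB's "compiler as your first line of unit testing" point can be sketched with Rust's newtype pattern: wrapping raw numbers in distinct types turns a unit mix-up, like the one in the rocket anecdote, into a compile error rather than a runtime failure. The type names here are illustrative:

```rust
// Distinct wrapper types for different units. Passing Feet where
// Meters is expected is now rejected at compile time.
#[derive(Debug, Clone, Copy, PartialEq)]
struct Meters(f64);

#[derive(Debug, Clone, Copy, PartialEq)]
struct Feet(f64);

impl Meters {
    // Conversions must be explicit, so intent is visible in the code.
    fn from_feet(f: Feet) -> Meters {
        Meters(f.0 * 0.3048)
    }
}

fn add_meters(a: Meters, b: Meters) -> Meters {
    Meters(a.0 + b.0)
}

fn main() {
    let altitude = Meters(100.0);
    let offset = Feet(10.0);

    // add_meters(altitude, offset);   // <- would fail to compile
    let total = add_meters(altitude, Meters::from_feet(offset));
    println!("{:?}", total);
}
```

The failing line never reaches testing, let alone a customer: the "unit test" runs every time the compiler does.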
        • (Score: 4, Insightful) by turgid on Monday July 13 2020, @10:40PM (1 child)

          by turgid (4318) Subscriber Badge on Monday July 13 2020, @10:40PM (#1020793) Journal

          Compilers and languages have gotten more complex for the last sixty years. For that reason. To make humans more productive.

          My point is that, in my humble opinion, some have got so complex that they have the opposite effect. I have yet to see any "good" C++ code in a commercial setting. There are whole books written on what you shouldn't do when writing C++, for example.

          In the last three decades we reached a point where human programming cost exceeds the cost of computers. Thus runtime inefficiency is a net dollars and cents gain if you can program the machine faster.

          I agree that there are some languages that facilitate this. They tend to be clean, simple, elegant and high-level with a great deal of abstraction. Unfortunately, they tend not to get used much because they're not in fashion like e.g. Java, C++, C# and maybe now Rust.

          There is also a consideration of languages protecting programmers from themselves. This is a Good Thing. As long as you can deliberately get around it when necessary by telling the compiler "I know what I'm doing". Fact: humans make mistakes.

          There is often a lot of hubris involved when someone decides, "I know what I'm doing, and I'm going to ignore what the compiler is trying to tell me." It often ends in tears. If you are having to resort to such bodges, (1) check your understanding of the problem, (2) check your understanding of the language and (3) redesign your solution or (4) choose a better language.

          Java must be doing something right.

It allows mediocre programmers to get simple programs working in a nice cross-platform sandbox. I use Java every day. It's too wordy. The standard class libraries aren't very good.

          Letting those errors slide until runtime is simply a lazy approach

Agreed, but compilers and static analysis can't catch all those bugs. The code has to be executed and made to fail. Branches must be checked. Use cases must be exercised. C++ and Rust are the Intel Itaniums of programming languages. They are enormous, baroque and make magical promises which can never be fulfilled because code must be run.

          For this reason, there has to be a good balance between compile-time safety (good), static analysis (good), unit testing (good) and regression testing (good).

          If the language is so difficult and complex that it's slow to get something through the compiler, people will hack around it. Then you lose 1 and 2. You also don't get around to 3 and 4 until much later, if at all.

          Bugs are expensive the later they are discovered. It's best not to put them in in the first place.

          If your language is big and difficult, and your compiler is big and slow steps 3 and 4 get put off until much later than they should. Some people are so confident in their fancy languages and compilers that they think they don't have to write unit tests!

          There needs to be a healthy balance. Making languages more complex is not the answer. It doesn't work.

          • (Score: 2) by DannyB on Tuesday July 14 2020, @01:57PM

            by DannyB (5839) Subscriber Badge on Tuesday July 14 2020, @01:57PM (#1021228) Journal

            There are whole books written on what you shouldn't do when writing C++, for example.

            Both C and C++ are the kind of languages that I am arguing against not for.

            Unfortunately, they tend not to get used much because they're not in fashion

            It is unfortunate that some good languages never take hold and get the recognition they deserve. I've seen talk about that for a long time and no good solution. Betting on a programming language for a big project is a huge bet. A gigantic risk. It is amusing that the only modern languages to gain any traction are mostly created by corporations. Java, Swift, Rust, C#, Go, Dart.

            There is often a lot of hubris involved when someone decides, "I know what I'm doing, and I'm going to ignore what the compiler is trying to tell me."

            That can be true. I think back to languages like Pascal and Modula 3. It is important for small bits of code that might manipulate the hardware to turn off some features. In Pascal, I remember an example like:
            * Declare a constant, named "memory" that is a pointer to a byte array, and initialized to zero.
Talk about instant PEEK / POKE. Anywhere you could do: memory^[16384] := 227;
            That is just an example, where if you were writing code to manipulate the hardware of an IBM PC's video display, being able to say "I know what I'm doing" is quite useful and not hubris. Since it's a constant, it doesn't use up a global stack slot as a variable. Once it is initialized, it is "safe" to use anywhere. But you might limit its visibility to a specific "unit" (pascal term) where you are providing an abstraction over some hardware diddling.
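            For what it's worth, the same "I know what I'm doing" escape hatch exists in Rust, the language the article is actually about. A minimal sketch (the 16384/227 values follow the Pascal example; the buffer stands in for real hardware memory so the code is runnable — a driver would use a fixed address like 0xB8000 instead):

```rust
// Pascal-style PEEK/POKE via raw pointers. The unsafe blocks are the
// Rust equivalent of "turning off some safety features" for hardware work.

fn poke(base: *mut u8, offset: usize, value: u8) {
    // Caller guarantees the address is valid -- "I know what I'm doing".
    unsafe { *base.add(offset) = value }
}

fn peek(base: *const u8, offset: usize) -> u8 {
    unsafe { *base.add(offset) }
}

fn main() {
    // A local buffer standing in for video memory. Real hardware code
    // would do something like: let base = 0xB8000 as *mut u8;
    let mut fake_video = vec![0u8; 32768];
    let base = fake_video.as_mut_ptr();

    poke(base, 16384, 227); // the moral equivalent of memory^[16384] := 227
    assert_eq!(peek(base, 16384), 227);
    println!("byte at 16384 = {}", fake_video[16384]);
}
```

            The point carries over: the unsafety is contained in two tiny functions, and everything outside them stays checked — much like limiting the Pascal constant's visibility to one "unit".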

            It allows mediocre programmers to get simple programs working in a nice cross-platform sandbox. I use Java every day. It's too wordy. The standard class libraries aren't very good.

            I use Java every day too. So far as I am able to determine, I have not yet been involuntarily committed to any mental institution. (But even if so, I can't tell, because I wake up every day and program Java.)

            Java has its warts. As do most languages. The things that it does well vastly outweigh the warts. Otherwise nobody would use it.

            The success of Java is an economic argument. And this is what most programmers miss. Programmers think too much in purely technical terms without other considerations. Completely unable to see what drove Java to success. Or why it is so widely used. Especially by huge financial institutions, banks, or other corporate enterprise software. So much so that Microsoft needed to try to steal it.

            The previous paragraph especially applies to garbage collection. People who hate GC don't understand the economic argument. But there are also technical arguments for it. I'm not saying GC should be used everywhere. But for higher-level languages used to write applications, I'll let the evolution of languages speak for itself. How many modern high-level languages have GC? Python. JavaScript. Go. Visual Basic. Visual FoxPro. Java. C#. Lisp and its variants. Haskell et al. Prolog-style languages. Theorem provers. Computer Algebra System languages. I would add any language that runs on the JVM (Kotlin, Scala, Groovy, etc.).

            While people argue about languages that manipulate bits and bytes and teeny bits of technical efficiency, and lack of high level abstractions, the high level language users can get things done, much more quickly, and laugh all the way to the bank.

            compilers and static analysis can't catch all those bugs.

            Compilers CAN and DO catch certain types of errors 100%.

            Compilers can NEVER catch certain types of errors, that is 0%.

            A compiler is never going to know that a check should be subtracted, and a receipt added, to a total. But a compiler can catch an error where you're trying to assign a String to an HtmlString or an SqlString without going through an "escape" function, because those string types are incompatible.
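            That String-vs-HtmlString idea is easy to make concrete with a newtype. An illustrative sketch, not code from any real library (the names and the three-character escaping are mine):

```rust
// A distinct type for HTML-safe strings. Plain String values cannot be
// passed where an HtmlString is required; the only way in is escape_html.
struct HtmlString(String);

fn escape_html(raw: &str) -> HtmlString {
    // Minimal escaping for illustration; real libraries handle more cases.
    HtmlString(
        raw.replace('&', "&amp;")
            .replace('<', "&lt;")
            .replace('>', "&gt;"),
    )
}

fn render(body: HtmlString) -> String {
    body.0
}

fn main() {
    let user_input = "<script>alert(1)</script>";
    // render(user_input.to_string()) would not compile: String != HtmlString.
    let page = render(escape_html(user_input));
    assert_eq!(page, "&lt;script&gt;alert(1)&lt;/script&gt;");
    println!("{page}");
}
```

            The compiler can't know the escaping is *correct*, but it can guarantee the escape function was *called* — which is exactly the "100% of a certain type of error" being claimed.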

            It is important and beneficial that languages and compilers now catch more types of errors than long ago. Entire classes of bugs have been eliminated by language design. I would just point out that GC eliminated three different types of memory bugs that are the scourge of all software that has ever been written in C or C++, and the source of many security problems.
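            (The three classes in question are presumably use-after-free, double free, and leaks. Worth noting that Rust, the language under discussion, rules out the first two at compile time via ownership rather than GC — a tiny illustrative sketch, not from the post:)

```rust
fn main() {
    let v = vec![1, 2, 3];
    let w = v; // ownership moves to w; v is no longer usable

    // println!("{:?}", v);  // use-after-move: rejected by the compiler
    // drop(w); drop(w);     // double free: also rejected (drop takes ownership)

    assert_eq!(w.len(), 3);
    // w is freed automatically when it goes out of scope: no leak,
    // no dangling pointer, and no collector pause either.
}
```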

            If your language is big and difficult, and your compiler is big and slow

            You're preaching to the choir.

            Languages need to be small. Comprehensible. Never put something into a language that could go into a "standard library". Follow the "scheme" philosophy more than the "common lisp" philosophy.

            Compilers need to be FAST. At least for development. The speed of the Edit-Compile-Debug cycle is extremely important.

            --
            Employers should not mandate wearing clothing. It should be a personal choice. It only affects me. Junk can't breathe!
      • (Score: 2, Troll) by The Mighty Buzzard on Monday July 13 2020, @10:50PM (9 children)

        ...running the tests every time I change a line of code.

        Veto.

        If you can't write a single line of code without needing to check that you're not fucking it up, I don't want to work with you. Not ever. I write the entirety of whatever project or feature I'm working on in one go unless it goes over a couple thousand lines. And rarely have to fix more than a dozen piddly little things (mostly punctuation or enclosure matching because I code entirely in vanilla vim). I'm not saying everyone should hold themselves to that level but they should at least be able to write a twenty-five line function in one go without having to ask the compiler if they know what they're doing.

        Also, if your shit is so monolithic that it compiles slow as hell in Rust, it's going to compile slow in anything else. If you're using modular code for a large project and only change one file, you only rebuild the one file and relink, even in Rust. And it doesn't even need to compile to tell you when you fucked something up at the language level. Your algorithms are of course on you.

        --
        My rights don't end where your fear begins.
        • (Score: 4, Insightful) by turgid on Monday July 13 2020, @11:17PM (6 children)

          by turgid (4318) Subscriber Badge on Monday July 13 2020, @11:17PM (#1020814) Journal

          I'd you never make mistakes then you either don't know you've made any because you don'tknow where to look or you are regurgitating a boiler-plate solution from memory (in which case you should be reusing not reinventing). If it's the former, you are passing on debugging to your users and technical debt to your colleagues. If it's the latter, you can be scripted.

          • (Score: 1, Offtopic) by The Mighty Buzzard on Tuesday July 14 2020, @12:06AM (5 children)

            To begin, I'd +1 Funny you for the typos but I'm out of points.

            As to the assertion itself? Compilers can only catch you out making stupid mistakes that can be corrected easily all at once when you're done. They will not tell you if your shiny, happy algorithm (that had damned well better take more than one line) is doing something slightly different than what you meant it to. That's what unit tests, fuzzing, and martybs are for.

            To be clear, I wasn't insulting you. You expressed an extreme lack of confidence in your ability to write something correctly the first time. I took you at your word that you can't. I can though. Yes, I may have to recompile half a dozen times when I'm done to fix the few errors I inevitably code in but I will not have to recompile a couple thousand times like you would have.

            --
            My rights don't end where your fear begins.
        • (Score: 1, Informative) by Anonymous Coward on Tuesday July 14 2020, @07:39AM (1 child)

          by Anonymous Coward on Tuesday July 14 2020, @07:39AM (#1021110)

          Not that I disagree with the 25-line-function statement, but please note that in C++, with templates and abstract classes on top, a small change can lead to a long recompilation time even if your code is technically modular.

      • (Score: 0) by Anonymous Coward on Tuesday July 14 2020, @08:16AM (1 child)

        by Anonymous Coward on Tuesday July 14 2020, @08:16AM (#1021120)

        Last I checked, D uses a tracing collector, which is completely unacceptable in any kind of space-, time-, or latency-constrained system.

    • (Score: 1, Funny) by Anonymous Coward on Monday July 13 2020, @10:00PM (5 children)

      by Anonymous Coward on Monday July 13 2020, @10:00PM (#1020755)

      I prefer Rust, because as a Mozilla project it is more likely to hold to the highest social justice standards. I don't like to be browsing through code and come across things that might trigger me, like the > operator which implies arbitrary ordinality (when you say 4 > 3, you might as well say you endorse the KKK's views on BIPOCs).

      • (Score: 2) by Freeman on Monday July 13 2020, @10:22PM (3 children)

        by Freeman (732) on Monday July 13 2020, @10:22PM (#1020777) Journal

        You had me until BIPOC. What is a BIPOC? In any case, I sure hope your post was supposed to be satire.

        --
        Forced Microsoft Account for Windows Login → Switch to Linux.
        • (Score: 0) by Anonymous Coward on Monday July 13 2020, @10:27PM

          by Anonymous Coward on Monday July 13 2020, @10:27PM (#1020781)

          Them losers Keep coming up with shits like "bipoc" whatever the fuck it means.

        • (Score: 0) by Anonymous Coward on Monday July 13 2020, @10:33PM

          by Anonymous Coward on Monday July 13 2020, @10:33PM (#1020784)

          https://bipocinfiber.com/ [bipocinfiber.com]

          Some rich white woman put the site up to help her get richer.

        • (Score: 0) by Anonymous Coward on Monday July 13 2020, @11:33PM

          by Anonymous Coward on Monday July 13 2020, @11:33PM (#1020824)

          Blacks, Indians, Pakis and Odorous Columbians

      • (Score: 0) by Anonymous Coward on Monday July 13 2020, @10:24PM

        by Anonymous Coward on Monday July 13 2020, @10:24PM (#1020780)

        Is bipok better than a kapok life preserver? How many bipoks does a 180 pound man need to stay afloat for a month?

    • (Score: 0) by Anonymous Coward on Tuesday July 14 2020, @12:46AM

      by Anonymous Coward on Tuesday July 14 2020, @12:46AM (#1020865)

      You will never replace C

      That's like the Air Force replacing the B-52

      Besides, screw C, if you can't do assembly, you're a punk!

  • (Score: 3, Insightful) by Runaway1956 on Monday July 13 2020, @10:15PM (4 children)

    by Runaway1956 (2926) Subscriber Badge on Monday July 13 2020, @10:15PM (#1020772) Homepage Journal

    I did a search for "rust". I started scrolling - and scrolling - and scrolling - It took about half a day to scroll all the way through the various librust* libraries. Alright, maybe I exaggerate - it was less than six hours.

    Anyway, I'm reminded that most open source vulnerabilities are introduced by way of libraries that no one seems to control.

    https://nakedsecurity.sophos.com/2020/05/27/open-source-libraries-a-big-source-of-application-security-flaws/ [sophos.com]

    --
    Let's go Brandon!
    • (Score: 2) by The Mighty Buzzard on Monday July 13 2020, @11:14PM

      Base security of native Rust libraries is quite high compared to C/C++ but many of the libraries are not native Rust libraries; they're wrappers around shared libs installed on the system. Also, they're constantly in flux and the maintainers love nothing better than to introduce breaking changes because a shiny, new thing came out and they just have to use it in their lib.

      --
      My rights don't end where your fear begins.
  • (Score: 1, Funny) by Anonymous Coward on Monday July 13 2020, @10:20PM (2 children)

    by Anonymous Coward on Monday July 13 2020, @10:20PM (#1020774)

    Listen, linus, what linux needs is not rust, hell, no.

    Did you hear about Scala? It's, like, the next-level shit, forget rust, c++ can't polish scala's shoes!!

    Get woke. All the cool kids (from, like, couple decades ago) are WITH IT.

    Get it on, man.

    Linux, together with Scala, the Two Towers will ...

    • (Score: 2) by turgid on Monday July 13 2020, @10:42PM (1 child)

      by turgid (4318) Subscriber Badge on Monday July 13 2020, @10:42PM (#1020794) Journal

      Scala? Nah, cut out all that unnecessary syntax and go straight to LISP.

      • (Score: 1, Funny) by Anonymous Coward on Monday July 13 2020, @10:54PM

        by Anonymous Coward on Monday July 13 2020, @10:54PM (#1020800)

        The language's designer is ... well, he's the one that cooked up the brilliant invention called "Java generics."

        I mean, syntactic gymnastics to make it "look like" a DSL? The guy actually saw Rube Goldberg as his role model.

  • (Score: 2) by turgid on Monday July 13 2020, @10:43PM

    by turgid (4318) Subscriber Badge on Monday July 13 2020, @10:43PM (#1020795) Journal

    Once (about 1997?) some people took it upon themselves to rewrite the Linux kernel the "proper way": in C++.

  • (Score: 5, Informative) by Subsentient on Monday July 13 2020, @11:49PM (2 children)

    by Subsentient (1111) on Monday July 13 2020, @11:49PM (#1020835) Homepage Journal

    I agree with Linus on this -- 99% of my code in all my projects just uses straight integers of 64-bits width or smaller. The only time I end up using floats is when something else uses floats and wants a float. I suspect that to be true unless you're doing a lot of graphics or scientific work.
    Integer performance is what really matters. Just optimize that.

    That said, Intel has become so evil that I don't want them to succeed. I don't want them to improve -- I want them to die. And I want ARM64 and RISC-V to rise as new competitors for AMD.

    --
    Trying is the first step towards failure. -The Click
    • (Score: 1, Insightful) by Anonymous Coward on Tuesday July 14 2020, @07:42AM (1 child)

      by Anonymous Coward on Tuesday July 14 2020, @07:42AM (#1021112)

      not to rain on your parade, but people use computers mostly for media consumption and games.
      i.e. floating point operations are happening all the time and everyone wants them to be fast.

      now that I think about it, you may have a point in saying servers only need integer operations though.

      if I was a chip-maker, I'd worry about both integers and floating point numbers.

      • (Score: 0) by Anonymous Coward on Tuesday July 14 2020, @11:02PM

        by Anonymous Coward on Tuesday July 14 2020, @11:02PM (#1021540)

        That may be, but AVX-512 doesn't help you there either.

  • (Score: 1) by weirsbaski on Tuesday July 14 2020, @05:20AM (1 child)

    by weirsbaski (4539) on Tuesday July 14 2020, @05:20AM (#1021064)

    We do a lot of AVX-related stuff where I work, so I've been making the joke that after SSE (128-bit), AVX/AVX2 (256-bit), and AVX-512, the only way to make the FP any wider would be to direct-wire the floating-point unit into the on-chip graphics block.

  • (Score: 1, Funny) by Anonymous Coward on Tuesday July 14 2020, @04:43PM

    by Anonymous Coward on Tuesday July 14 2020, @04:43PM (#1021339)

    avx-1024
    avx-2048
    avx-4096
    ...
    avx-to-infinity-and-beyond!

    nb: don't look at the docs for amx, advanced matrix extensions. just. don't. do. it.
