
SoylentNews is people

posted by janrinok on Tuesday September 20 2022, @05:04AM   Printer-friendly

News and Advice on the World's Latest Innovations:

The Rust in Linux debate is over. The implementation has begun. In an email conversation, Linux's creator, Linus Torvalds, told me, "Unless something odd happens, it [Rust] will make it into 6.1."

The Rust programming language entering the Linux kernel has been coming for some time. At the 2020 Linux Plumbers Conference, developers started considering using the Rust language for new Linux kernel code. Google, which supports Rust for developing Android -- itself a Linux distro -- began pushing for Rust in the Linux kernel in April 2021.

As Wedson Almeida Filho of Google's Android Team said at the time, "We feel that Rust is now ready to join C as a practical language for implementing the kernel. It can help us reduce the number of potential bugs and security vulnerabilities in privileged code while playing nicely with the core kernel and preserving its performance characteristics."
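The "fewer bugs and vulnerabilities" claim rests on language rules rather than runtime machinery. Here is a toy userspace sketch (my own illustration, nothing to do with actual kernel code) of how safe Rust either refuses a classic C pitfall at compile time or turns it into a well-defined error:

```rust
// Toy illustration (not kernel code): two classic C pitfalls that safe
// Rust either makes well-defined or refuses to compile at all.
fn main() {
    let buf = [0u8; 4];

    // Out-of-bounds access is checked: .get() returns None instead of
    // silently reading past the buffer the way C array indexing can.
    assert_eq!(buf.get(10), None);

    // Use-after-move is a compile-time error, so a whole class of
    // use-after-free bugs cannot be written in safe Rust:
    //     let v = vec![1, 2, 3];
    //     drop(v);
    //     println!("{}", v[0]); // rejected by the borrow checker

    println!("in-bounds byte: {}", buf[0]);
}
```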

It took a while to convince the top Linux kernel developers of this. There were concerns about non-standard Rust extensions being needed to get it to work in Linux. For instance, with the new Rust Linux NVMe driver, over 70 extensions needed to be made to Rust to get it working. But, Torvalds had told me in an earlier interview, "We've been using exceptions to standard C for decades."

This was still an issue at the invitation-only Linux Kernel Maintainers Summit. But, in the end, it was decided that Rust is well enough supported in Clang -- the C language family compiler front end -- to move forward. Besides, as Torvalds had said earlier, "Clang does work, so merging Rust would probably help and not hurt the kernel."

[...] Now, Torvalds warns in this first release, Rust will "just have the core infrastructure (i.e. no serious use case yet)." But, still, this is a major first step for Rust and Linux.


Original Submission

 
This discussion was created by janrinok (52) for logged-in users only, but now has been archived. No new comments can be posted.
The Fine Print: The following comments are owned by whoever posted them. We are not responsible for them in any way.
  • (Score: 2) by choose another one on Wednesday September 21 2022, @12:49PM (3 children)

    by choose another one (515) Subscriber Badge on Wednesday September 21 2022, @12:49PM (#1272742)

    Um, as I recall Linux (kernel) development _did_ use C++, back in the early days (just checked - yep 1992).

    The intention of the switch, as I recall, was to benefit from C++ features like stronger type-checking. It failed - mainly, I suspect, because C++ support in gcc/g++ at the time was a crapshoot in a bug-fest. Even in the late '90s I personally found it quite a shock to move to the MS/Windows world (for work/gotta-make-a-living reasons) and find that pretty much all the "that would be really neat if it worked" stuff in C++ that I was having to kludge around in Linux land actually _did_ work properly in Visual C++. FOSS compilers remained well behind until the gcc/egcs merge sometime around the turn of the century.

    There were other more fundamental issues outside of that though, things that would have to be avoided because they would not play nicely with the rest of the in-C kernel - e.g. memory management, exceptions. Later in the '90s embedded folks developed embedded subsets of C++ for pretty much the same reasons.

    Fast forward 20 years and C++ would be a non-starter I think, simply because of the sheer amount of code that would have to be converted or at least made C++-aware. You can put some C code inside C++ (C libraries are used all the time), and the problems from doing that are known and can be managed (it's what C++ was designed to do), but using C++ inside a C code framework will all go horribly wrong (exceptions, memory management etc). I can easily see that it might be far less work to allow another, completely different language to be used in places - C++ is just too close to the internals of C.
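The "completely different language" route is roughly how Rust-for-Linux approaches the boundary: Rust functions can present a plain C ABI, so a C caller never sees exceptions, destructors, or hidden allocation. A hedged userspace sketch of the idea (the function name is mine, and real kernel code is `no_std` with `#[no_mangle]` exports; this is only the shape of the interface):

```rust
// Sketch of the interop direction the comment describes: a Rust
// function exposed with the plain C calling convention. A C caller
// passes an ordinary pointer/length pair, exactly as it would to a
// C function; no exceptions or hidden memory management cross over.
pub extern "C" fn buf_checksum(data: *const u8, len: usize) -> u32 {
    // Safety contract: the caller must pass a valid pointer/length
    // pair - the same unwritten contract every C API already has,
    // here confined to one audited unsafe block.
    let slice = unsafe { std::slice::from_raw_parts(data, len) };
    slice.iter().fold(0u32, |acc, &b| acc.wrapping_add(b as u32))
}

fn main() {
    let data = [1u8, 2, 3, 4];
    let sum = buf_checksum(data.as_ptr(), data.len());
    println!("checksum = {sum}");
}
```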

  • (Score: 3, Interesting) by bzipitidoo on Wednesday September 21 2022, @02:30PM (2 children)

    by bzipitidoo (4388) on Wednesday September 21 2022, @02:30PM (#1272771) Journal

    C++ in the kernel that long ago? Didn't know that!

    > FOSS compilers remained well behind

    Did you ever use the Borland C++ compiler? Way more broken than g++. One of Borland C++'s worst bugs was its inability to correctly use more than 64k data in the x86 segmented memory model. As long as you kept to piddly little beginning CS assignments, you were okay, but try to work with more than 64k of data, watch out. Had that problem up through version 4.5, and as I recall it still wasn't fixed in 5.x. That was what pushed me to FOSS.

    > There were other more fundamental issues

    Yeah, compiler issues shouldn't be reason to trash a language, unless the language is the cause of those issues.

    I find C++ basically a gateway drug. Every time I've started a project in C, I find myself wishing I could use this and that convenience that works in C++, and end up switching.

    At first, the code is just C with structures renamed to classes, because it's a whole lot easier to point to functions that way. No inheritance, no polymorphism, no templates, and no operator overloading. Still use stdio.h, because everyone knows that iostream has worse performance. (A curious thing about that is that iostream's performance is worse because it defaults to support of stdio. Turn that legacy support off, and ...)

    But then, you start wishing for such things as the nicer string handling of the STL, and associative arrays and such like, and so those features, especially templates, creep in. Or at that point, you switch to a language that has native support for that stuff. Depends on several factors, with performance being a big one.

    I wrote a simple byte value counter in Perl 6 (now called Raku), turned the program loose on a 100M file, and it took 20 minutes. The same program in C took 2 seconds. We're further than ever from One Language To Rule Them All.
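For comparison, a sketch of that kind of byte-value counter in Rust, the thread's language of the hour. The names are mine, not from the parent's program; the speed trick is the same one the C version relies on, large buffered reads instead of per-character I/O:

```rust
// Byte-value histogram over a stream, using 128 KiB reads.
// Generic over Read so it works on stdin, files, or byte slices.
use std::io::Read;

fn histogram<R: Read>(mut input: R) -> std::io::Result<([u64; 256], u64)> {
    let mut counts = [0u64; 256];
    let mut total = 0u64;
    let mut buf = [0u8; 128 * 1024]; // big reads, like sysread() with a 131072 buffer
    loop {
        let n = input.read(&mut buf)?;
        if n == 0 {
            break; // end of stream
        }
        total += n as u64;
        for &b in &buf[..n] {
            counts[b as usize] += 1;
        }
    }
    Ok((counts, total))
}

fn main() -> std::io::Result<()> {
    let (counts, total) = histogram(std::io::stdin().lock())?;
    println!("{total} bytes");
    for (byte, count) in counts.iter().enumerate().filter(|&(_, &c)| c > 0) {
        println!("0x{byte:02x}: {count}");
    }
    Ok(())
}
```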

    • (Score: 0) by Anonymous Coward on Wednesday September 21 2022, @06:55PM (1 child)

      by Anonymous Coward on Wednesday September 21 2022, @06:55PM (#1272839)
      2 seconds? Why is your C program taking so long to count bytes?

      Have you tried perl5?

      #!/usr/bin/perl -w
      use strict;

      # Count bytes in a file using large sysread() calls.
      my $path = shift || '';
      my $fh;
      open( $fh, '<', $path ) or die "unable to open: $path: $!";
      my $buf     = '';
      my $bufsize = 131072;    # 128 KiB per read
      my $total   = 0;
      while ( my $read = sysread( $fh, $buf, $bufsize ) ) {
          $total += $read;
      }
      print "$total\n";

      time ./t.pl  test.bin
      100000000

      real    0m0.009s
      user    0m0.004s
      sys     0m0.004s

      And this is on a Linux VM running in virtualbox on a Windows 10 ryzen PC.

      • (Score: 2) by bzipitidoo on Wednesday September 21 2022, @11:49PM

        by bzipitidoo (4388) on Wednesday September 21 2022, @11:49PM (#1272916) Journal

        The computer I tested it on had a HDD, not a SSD.

        Here's the Perl6 code I banged out:

        # histogram of bytes in a file.
        use v6;

        my @c;
        my $i = 0;

        loop ($i=255; $i>=0; $i--) {
            @c[$i]=0;
        }

        my $count=0;
        while defined $_ = $*IN.getc {
            $count++;
            $i = ord($_);
            if ($i>255) { $i=255; }
            @c[$i]++;
        }

        $i=0;
        for @c {
            print sprintf("%c %7d ",$i,$_);
            $i++;
            if ($i%8 == 0) { say ""; }
        }