
SoylentNews is people

posted by janrinok on Wednesday April 30 2014, @03:43PM
from the problems-with-hardware-software-and-people dept.

Reported by LWN:

As of tonight, there is no more SPARC in testing. The main reasons were lack of porter commitments, problems with the toolchain and continued stability issues with our machines.

The fate of SPARC in unstable has not been decided yet. It might get removed unless people commit to working on it. Discussion about this should take place on #745938.

(Cross submitted on pipedot.org)

 
  • (Score: 5, Interesting) by VLM on Wednesday April 30 2014, @04:03PM

    by VLM (445) on Wednesday April 30 2014, @04:03PM (#38160)

    WRT the claimed 3 reasons, the porters seem to be committed; it's just that Linux and gcc don't "do" SPARC so well anymore, making life extremely hard on the porters.

    If upstream doesn't work on amd64, upstream is usually highly motivated to fix it. Doesn't work on sparc, eh, upstream shrugs shoulders while watching porter team grow gray hair.

    68000, SPARC, and MIPS (maybe others) are all on the cusp of "you want a porting machine? just program this COTS FPGA board and it'll be your 68000". Just a couple more years of FPGA hardware development and this will be business as usual rather than a peculiar idea.

  • (Score: 2) by frojack on Wednesday April 30 2014, @06:49PM

    by frojack (1554) on Wednesday April 30 2014, @06:49PM (#38209) Journal

    WRT the claimed 3 reasons, the porters seem to be committed; it's just that Linux and gcc don't "do" SPARC so well anymore, making life extremely hard on the porters.

    Does it do it worse than it did before, or are some new coding constructs simply not supported when generating executables for SPARC?

    How were previous SPARC versions produced? Massive re-coding?
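    (As a side note on what "generating executables for SPARC" means at the file level: the architecture a compiler targeted is recorded in the ELF header's e_machine field, which is what the `file` command reads back. Below is a minimal sketch in Python; the helper name and the synthetic header bytes are illustrative, not taken from any tool discussed in this thread.)

```python
import struct

# e_machine values from the ELF specification (<elf.h>)
EM_SPARC = 2       # 32-bit SPARC
EM_SPARCV9 = 43    # 64-bit SPARC
EM_X86_64 = 62     # AMD64

def elf_machine(header: bytes) -> int:
    """Return the e_machine field of an ELF header.

    e_machine is a 16-bit integer at byte offset 18; byte 5 of the
    header (EI_DATA) says whether the file is little- or big-endian.
    """
    if header[:4] != b"\x7fELF":
        raise ValueError("not an ELF file")
    endian = "<" if header[5] == 1 else ">"  # 1 = LSB, 2 = MSB
    (machine,) = struct.unpack_from(endian + "H", header, 18)
    return machine

# Synthetic 20-byte header of a big-endian 64-bit SPARC executable:
# magic, ELFCLASS64, big-endian, version 1, padding, ET_EXEC, EM_SPARCV9
fake_sparc = b"\x7fELF\x02\x02\x01" + b"\x00" * 9 + b"\x00\x02\x00\x2b"
print(elf_machine(fake_sparc))  # → 43 (EM_SPARCV9)
```

    Feeding the first 20 bytes of a real binary to `elf_machine` (e.g. `open("/bin/ls", "rb").read(20)`) distinguishes an amd64 build (62) from a SPARC one (2 or 43).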

    --
    No, you are mistaken. I've always had this sig.
    • (Score: 2) by VLM on Wednesday April 30 2014, @07:03PM

      by VLM (445) on Wednesday April 30 2014, @07:03PM (#38213)

      My likely inaccurate outside observation is that as new coding constructs are implemented in the kernel and gcc, things are not getting quite as much testing on SPARC as they did in the 90s. So the re-coding load is increasing over time, along with the plain old troubleshooting load.

      So WRT the porters being committed: there are about as many as in the old days, maybe more, working as hard as ever if not harder, but the workload keeps increasing.

      Meanwhile, SPARC hardware is not getting any more "available" for people who might want to jump in.

      Someday, a build farm will be 1000 FPGA cards and when a package needs recompiling or testing you'll just be assigned FPGA #723 which will have a "whatever" loaded into it and away you'll go on your dedicated FPGA "box". Just not today, not yet. Or maybe a vast cloudy farm of huge processors running perfect emulators. Maybe.