
posted by janrinok on Friday December 29 2017, @01:57AM
from the memories dept.

Source code for Apple's legendary Lisa operating system to be released for free in 2018

You'll soon be able to take a huge trip down memory lane when it comes to Apple's computer efforts. The Computer History Museum has announced that the source code for the Lisa, Apple's computer that predated the Mac, has been recovered and is being reviewed by Apple itself...

The announcement was made by Al Kossow, a Software Curator at the Computer History Museum. Kossow says that source code for both the operating system and applications has been recovered. Once that code is finished being reviewed by Apple, the Computer History Museum will make the code available sometime in 2018.

While you've been able to run emulators of the Lisa operating system before, this release is notable because it isn't a third-party hack: Apple is directly involved, and the full code will be available to everyone.

[Image: Apple Lisa]


  • (Score: 2) by shortscreen on Friday December 29 2017, @08:02AM (5 children)

    by shortscreen (2252) on Friday December 29 2017, @08:02AM (#615487) Journal

    Why a single line? All video codecs I can think of use rectangular blocks. A 6502 trying to decode an 8x8 block would be limited by the time it takes to do a software 32-bit fixed-point integer multiply (probably 500 cycles or some such), not by memory size or I/O.

    4,096 6502 cores with 64KB each would be a pretty large die. Let's say it was only 1,024 cores, and that a frame of 4K video has 128K blocks. Each core would have to do 128 blocks in 33ms. If each nonzero DCT coefficient resulted in 40,000 cycles worth of multiply-adds, (times 3 color planes, and maybe 4 non-zero coefficients on average) then maybe your (MJPEG) video would play if the chip was clocked at least 2GHz...
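As a sanity check, the parent's budget works out as follows (all constants are the rough guesses from the comment above, not measured figures):

```python
# Back-of-envelope check of the MJPEG-on-6502 budget from the comment.
# Every constant here is the comment's rough guess, not a measurement.

cores = 1024
blocks_per_frame = 128 * 1024   # ~128K 8x8 blocks in a 4K frame
frame_time_s = 0.033            # ~30 fps

cycles_per_coeff = 40_000       # software multiply-adds per nonzero DCT coefficient
color_planes = 3
nonzero_coeffs = 4              # average nonzero coefficients per block

blocks_per_core = blocks_per_frame // cores                       # 128 blocks per core per frame
cycles_per_block = cycles_per_coeff * color_planes * nonzero_coeffs
required_clock_hz = blocks_per_core * cycles_per_block / frame_time_s

# ~1.9 GHz, roughly the "at least 2GHz" the comment arrives at
print(f"{blocks_per_core} blocks/core, {required_clock_hz / 1e9:.1f} GHz needed per core")
```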

    Performance would be much better with hardware multiply. According to my web search, a 32x32 multiply circuit takes 35,000 transistors though, which is much more than a 6502. ;)

  • (Score: 2) by JoeMerchant on Friday December 29 2017, @02:00PM (4 children)

    by JoeMerchant (3937) on Friday December 29 2017, @02:00PM (#615510)

I do neglect the joys of hardware floating point acceleration... if you're doing floating point operations, then, yes, you'll want one of those (which is probably on par with the size of the 6502 + 64K RAM). Still, most people forget that the 6502 "performance" they are used to ran on ~4MHz clocks; put the same die on a 14nm process with rapid access to the local 64K RAM store, and we're in > 500x speedup territory from the clock speed increase alone.

    Video transcode is indeed usually done in blocks, a 64K store could hold a 64x64 pixel block easily enough, so a 3840x2160 screen becomes a 60x34 set of blocks, or 2040 blocks to work on.

So, back off on the 4096 processor cores and compromise with 2048 cores, each with a dedicated FPU. Call it 1/4 billion transistors - if I didn't screw up the math too badly, that's 0.00514 square centimeters for the transistors alone; allow 10x that for interconnect and cooling space, and we've got a chip that's 2.3mm per side - reasonable enough.
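The tile count and die-size figures can be reproduced with a short script (the 0.00514 cm^2 area is the comment's own estimate, not a process-datasheet number):

```python
import math

# Block partitioning from the comment: 64x64-pixel tiles on a 4K frame.
width, height, tile = 3840, 2160, 64
tiles_x = math.ceil(width / tile)    # 60
tiles_y = math.ceil(height / tile)   # 34 (2160/64 = 33.75, rounded up)
print(tiles_x, "x", tiles_y, "=", tiles_x * tiles_y, "tiles")

# Die-area guess: ~1/4 billion transistors on a 14nm process.
area_cm2 = 0.00514                   # comment's estimate for the transistors alone
with_overhead = area_cm2 * 10        # 10x allowance for interconnect and cooling
side_mm = math.sqrt(with_overhead) * 10
print(f"~{side_mm:.1f} mm per side")
```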

It would be a horrendous job re-coding existing software for the new/old architecture, but as with all things, some would work better, some not as well. My main contention is this: with a 500x slower clock, a single core, slow RAM, and no FPU, lots of "human scale" tasks were handled quite well on 6502 machines. Start with editing a single page of text: sure, people edit giant documents and changes ripple throughout, but they rarely focus on more than a page of text at a time.

Fun thought: if your massively parallel editor worked page-per-core, this chip could handle documents up to 2000 pages. If you need more, the architecture is already massively parallel: add 2000 pages of editing capacity per chip, or 4000 if you don't need floating point math operations.

    --
    🌻🌻 [google.com]
    • (Score: 2) by Rich on Friday December 29 2017, @04:21PM (3 children)

      by Rich (945) on Friday December 29 2017, @04:21PM (#615547) Journal

The multiplication complexity is dependent on the barrel shifter, which requires a square array. So, if we assume 32K transistors (the ballpark of the above quote) for a 32-bit unit, we'd get away with 8K transistors for a 16x16-bit unit. To do a 32x32 multiply we'd need four passes through the 16x16 grid, so, from that limitation, we could run four units in parallel for the same magnitude of throughput. Of course there is the adding overhead, but that could be factored into a pipelined multiply-accumulate (which is needed for signal processing anyway). On the other hand, because of the shorter propagation lines, the smaller array could probably be clocked higher; and because we might not always need the full 32-bit result, the 16x16 would lead to higher throughput whenever four parallel multiplications are wanted.

I guess 64K of static RAM would need 6 transistors per bit, so we're at 3M transistors for the RAM per core, which one would want to balance with what's used for the adjacent CPU. Realistically that would warrant at least 1M transistors for the CPU, which makes the large barrel shifter look like minor overhead, even after widening the CPU to 32 bits...
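The four-pass scheme described above (a 32x32 multiply built from four 16x16 partial products) is just the schoolbook decomposition; a minimal sketch, not a gate-level model:

```python
# 32x32-bit multiply via four 16x16 passes, as in the comment above.
def mul32_via_16(a: int, b: int) -> int:
    """Schoolbook decomposition: split each 32-bit operand into 16-bit halves."""
    ah, al = a >> 16, a & 0xFFFF
    bh, bl = b >> 16, b & 0xFFFF
    # Four 16x16 -> 32-bit partial products, shifted into place and summed
    # (in hardware, the summing is what the pipelined multiply-acc absorbs).
    return ((ah * bh) << 32) + ((ah * bl) << 16) + ((al * bh) << 16) + (al * bl)

assert mul32_via_16(0xDEADBEEF, 0x12345678) == 0xDEADBEEF * 0x12345678
```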

If I had to stick with 8 bits (say we'd get away with much less memory), despite remembering a lot of 6502 opcodes, I would go for a more modern 8-bit variant. If you look at the end of the 8-bit developments (Hitachi 6309, AVR), there's a lot of 6800/6502 heritage, but also a lot already done that I would try to sneak in if I had to do such a thing. I'd probably go for 16-bit handling (if not the data path), including the index registers, first, and barrel shifting for blitting and fixed point math second.

      • (Score: 2) by JoeMerchant on Friday December 29 2017, @05:54PM (2 children)

        by JoeMerchant (3937) on Friday December 29 2017, @05:54PM (#615577)

        Sorry, didn't check you on everything, but I think you missed on the RAM 64Kx6 = 384K.

Not sure about balancing transistors dedicated to RAM against transistors dedicated to the CPU core; that's kind of the whole RISCy joy of the 6502: not much CPU, but it does lots of things well. Any real-life projects I've ever done seem to come out much more RAM hungry than all the dogmatic guidelines. Dogmatic critics look at my project parameters and opine: "Well, your design is too RAM heavy, you need to rethink..." Sorry, Mr. Dogma, I've worked some of these problems for over a decade, re-imagined and rebuilt from the ground up multiple times with varying teams and technologies - some things just NEED more RAM. I think the main thing about my problem space is not that it's RAM hungry; it's more that it's ROM light...

As for modernizing, of course that would happen... the video and even audio output paths of today's "standard" hardware make 8 bit look silly. And why stop at 16-bit handling and datapath? 32 or even 64 bits makes a lot of sense for massive memory stores, 3D image processing, etc.

The thing that makes 8 bits "feel" like a sweet spot for me (assuming the options are 2, 4, 8, 16, 32, 64, 128, etc.) is that an 8-bit character can represent any character in the alphabet. Now, you can go all Kanji on me, but I think the Pinyin phonetic encoding scheme fairly well demonstrates that even Kanji is "chunked" into a smaller number of phonemes that are combined to represent the concepts. So what's running around in an 8-bit processor's data paths is, generally, something like letters - well padded letters, a rich 256-character alphabet. When we step up even to 64-bit systems, we're not really exploiting the potential of symbolic representation in 64 bits; a lot of the 64-bit symbols running around are still like letters, or maybe Kanji characters, nowhere near using the potential of the larger code space.

It's all arbitrary - I really hated the whole RISC thing going around in the late 1980s, squeezing ever harder on the complexity just to get the clock speed up a bit... I do feel the "sweet spot" is somewhere more complex than a barely Turing-complete instruction set, but probably short of the 386 family's level of complexity. Where we've built to today, I doubt anyone really understands the "full stack" of a Coffee Lake system down to the flip-flop level; it's just abstract blocks of blocks of blocks now.

        • (Score: 2) by Rich on Saturday December 30 2017, @01:52AM (1 child)

          by Rich (945) on Saturday December 30 2017, @01:52AM (#615702) Journal

> Sorry, didn't check you on everything, but I think you missed on the RAM 64Kx6 = 384K.

If you meant KiloBIT, I misunderstood you. If you meant KiloBYTE, you show me the scheme to statically store 8 bits with six transistors. ;) Are we eventually getting multilevel DRAM???
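The correction is simple arithmetic; a few lines make the per-bit vs. per-byte distinction explicit:

```python
# A 6T SRAM cell stores one BIT, so the transistor cost of 64 KB of static
# RAM is counted per bit, not per byte (the point of the correction above).
bytes_of_ram = 64 * 1024
transistors = bytes_of_ram * 8 * 6   # 8 bits/byte, 6 transistors/bit
print(transistors)                   # 3145728, the ~3M figure quoted earlier
```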

          As you say, the preferred width of the data path correlates to the task to be done. For classic AV work, you'd get away with 16 bits for audio and 8 for video. Make that 24/12 for modern stuff. And of course the data set size determines the address width. If single-cycle operations on your preferred data and address units are available at negligible cost, you probably want them.

          I guess a simple 5- to 7-stage pipeline MIPS descendant with a vector unit is the sweet spot for massive parallelism when it's paired with around 64K RAM per cell (but I didn't deeply research any numbers).

          • (Score: 2) by JoeMerchant on Saturday December 30 2017, @02:32AM

            by JoeMerchant (3937) on Saturday December 30 2017, @02:32AM (#615709)

>> Sorry, didn't check you on everything, but I think you missed on the RAM 64Kx6 = 384K.

> If you meant KiloBIT, I misunderstood you. If you meant KiloBYTE, you show me the scheme to statically store 8 bits with six transistors. ;) Are we eventually getting multilevel DRAM???

            Of course you're right, I haven't had any coffee in a few weeks, so...

And I never liked pipelines, which is why I love my 6502s and 6811s - you can literally just look at the opcodes and know for sure whether it will get done on time or not.

            Also, the 6811 systems I built always booted up - fully functional - in about the time it took the power switch to go "click." Somehow all this multi-layered pipelined glory just doesn't seem like progress to me.
