
SoylentNews is people

posted by janrinok on Friday December 29 2017, @01:57AM   Printer-friendly
from the memories dept.

Source code for Apple's legendary Lisa operating system to be released for free in 2018

You'll soon be able to take a huge trip down memory lane when it comes to Apple's computer efforts. The Computer History Museum has announced that the source code for the Lisa, Apple's computer that predated the Mac, has been recovered and is being reviewed by Apple itself...

The announcement was made by Al Kossow, a Software Curator at the Computer History Museum. Kossow says that source code for both the operating system and applications has been recovered. Once that code is finished being reviewed by Apple, the Computer History Museum will make the code available sometime in 2018.

While you've been able to run emulators of the Lisa operating system before, this release is notable because it's not just a third-party hack: Apple is directly involved, and the full code will be available to everyone.

Apple Lisa.


Original Submission

 
This discussion has been archived. No new comments can be posted.
  • (Score: 2) by Rich on Friday December 29 2017, @04:21PM (3 children)

    by Rich (945) on Friday December 29 2017, @04:21PM (#615547) Journal

    The multiplication complexity is dependent on the barrel shifter, which requires a square array, so if we assume 32K transistors (the ballpark of the above quote) for a 32-bit unit, we'd get away with 8K transistors for a 16x16-bit unit. To do a 32x32 multiply, we'd need four passes through the 16x16 grid, so, within that limitation, we could run four in parallel for the same order of throughput. Of course, there is the adding overhead, but that could be folded into a pipelined multiply-accumulate (which is needed for signal processing anyway). On the other hand, because of the shorter propagation lines, the smaller array could probably be clocked higher. And because we might not always need the full 32-bit result, the 16x16 unit would give higher throughput whenever four parallel multiplications are available.

    I guess 64K of static RAM would need 6 transistors per bit, so we're at about 3M transistors for the RAM per core, which one would want to balance against what's spent on the adjacent CPU. Realistically, that would warrant at least 1M transistors for the CPU, which makes the large barrel shifter look like minor overhead, even after widening the CPU to 32 bits...
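The four-passes-through-a-16x16-grid idea can be sketched in a few lines. This is a toy software model of the schoolbook decomposition, not hardware; the function name and format are made up for illustration:

```python
# Sketch: a 32x32-bit multiply built from four 16x16-bit partial
# products, as the comment describes (schoolbook decomposition).
def mul32_via_16(a: int, b: int) -> int:
    """Multiply two 32-bit unsigned ints using only 16x16 multiplies."""
    a_lo, a_hi = a & 0xFFFF, a >> 16
    b_lo, b_hi = b & 0xFFFF, b >> 16
    # Four 16x16 partial products, shifted into place and summed.
    return ((a_lo * b_lo)
            + ((a_lo * b_hi) << 16)
            + ((a_hi * b_lo) << 16)
            + ((a_hi * b_hi) << 32))

assert mul32_via_16(0xDEADBEEF, 0xCAFEBABE) == 0xDEADBEEF * 0xCAFEBABE
```

If only a 16-bit result is needed, a single partial product suffices, which is where the claimed throughput win for four independent narrow multiplies comes from.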

    If I had to stick with 8 bit (say, if we'd get away with much less memory), despite remembering a lot of 6502 opcodes, I would go for a more modern 8-bit variant. If you look at the end of the 8-bit developments (Hitachi 6309, AVR), there's a lot of 6800/6502 heritage, but also a lot already done that I would try to sneak in if I had to do such a thing. I'd probably go for 16-bit handling (if not a 16-bit data path), including the index registers, first, and barrel shifting for blitting and fixed-point math second.
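The fixed-point math a barrel shifter accelerates boils down to a multiply followed by a renormalizing shift. A minimal sketch, assuming a hypothetical 8.8 fixed-point format (the names and format choice are mine, purely for illustration):

```python
# Toy model of fixed-point math of the kind a barrel shifter speeds up:
# 8.8 format, i.e. 8 integer bits and 8 fractional bits in a 16-bit word.
FRAC_BITS = 8

def to_fx(x: float) -> int:
    """Convert a float to 8.8 fixed point."""
    return int(round(x * (1 << FRAC_BITS)))

def fx_mul(a: int, b: int) -> int:
    # Full product, then a right shift to renormalize; in hardware the
    # shift is a single-cycle barrel-shifter operation instead of a loop
    # of single-bit shifts.
    return (a * b) >> FRAC_BITS

# 1.5 * 2.25 = 3.375
assert fx_mul(to_fx(1.5), to_fx(2.25)) == to_fx(3.375)
```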

  • (Score: 2) by JoeMerchant on Friday December 29 2017, @05:54PM (2 children)

    by JoeMerchant (3937) on Friday December 29 2017, @05:54PM (#615577)

    Sorry, didn't check you on everything, but I think you missed on the RAM 64Kx6 = 384K.

    Not sure about balancing transistors dedicated to RAM with transistors dedicated to the CPU core; that's kind of the whole RISCey joy of the 6502: not much CPU, but it does lots of things well. Any real-life projects I've ever done seem to come out much more RAM hungry than all the dogmatic guidelines allow. Dogmatic critics look at my project parameters and opine: "Well, your design is too RAM heavy, you need to rethink..." Sorry, Mr. Dogma, I've worked some of these problems for over a decade, re-imagined and rebuilt from the ground up multiple times with varying teams and technologies - some things just NEED more RAM. I think the main thing about my problem space is not that it's RAM hungry, it's more that it's ROM light...

    As for modernizing, of course that would happen... the video and even audio output paths of today's "standard" hardware make 8 bit look silly. And why stop at 16-bit handling and data path? 32 or even 64 bits makes a lot of sense for massive memory stores, 3D image processing, etc.

    The thing that makes 8 bit "feel" like a sweet spot for me (assuming the options are 2, 4, 8, 16, 32, 64, 128, etc.) is that an 8-bit character can represent any character in the alphabet... now you can go all Kanji on me, but I think the Pinyin phonetic encoding scheme demonstrates fairly well that even Kanji is "chunked" into a smaller number of phonemes that combine to represent the concepts. So what's running around in an 8-bit processor's data paths is, generally, sort of like letters - well-padded letters, a rich 256-character alphabet. When we step up even to 64-bit systems, we're not really exploiting the potential of symbolic representation in 64 bits; a lot of the 64-bit symbols that run around are still like letters, or maybe Kanji characters, not at all using the potential of the larger code space.

    It's all arbitrary - I really hated the whole RISC thing that was going around in the late 1980s, squeezing ever harder on the complexity just to get the clock speed up a bit... I do feel like the "sweet spot" is somewhere more complex than a barely Turing-complete instruction set, but probably somewhere short of the 386 family's level of complexity. With what we've built today, I doubt anyone really understands the "full stack" of a Coffee Lake system down to the flip-flop level; it's just abstract blocks of blocks of blocks now.

    --
    🌻🌻 [google.com]
    • (Score: 2) by Rich on Saturday December 30 2017, @01:52AM (1 child)

      by Rich (945) on Saturday December 30 2017, @01:52AM (#615702) Journal

      > Sorry, didn't check you on everything, but I think you missed on the RAM 64Kx6 = 384K.

      If you meant KiloBIT, I misunderstood you. If you meant KiloBYTE, you show me the scheme to statically store 8 bits with six transistors. ;) Are we eventually getting multilevel DRAM???
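The two readings of "64K of static RAM at 6 transistors per bit" can be checked with quick arithmetic (a sketch; the variable names are mine):

```python
# "64K of SRAM, 6T per bit" under the two possible readings of "64K".
KIBI = 1024

# 64 KiB: convert bytes to bits, then 6 transistors per bit.
transistors_if_kilobytes = 64 * KIBI * 8 * 6
assert transistors_if_kilobytes == 3_145_728   # ~3M, the earlier estimate

# 64 Kibit: 6 transistors per bit directly.
transistors_if_kilobits = 64 * KIBI * 6
assert transistors_if_kilobits == 393_216      # ~384K, the other reading
```

So both figures in the thread are internally consistent; they just assume different units.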

      As you say, the preferred width of the data path correlates to the task to be done. For classic AV work, you'd get away with 16 bits for audio and 8 for video. Make that 24/12 for modern stuff. And of course the data set size determines the address width. If single-cycle operations on your preferred data and address units are available at negligible cost, you probably want them.

      I guess a simple 5- to 7-stage pipeline MIPS descendant with a vector unit is the sweet spot for massive parallelism when it's paired with around 64K RAM per cell (but I didn't deeply research any numbers).

      • (Score: 2) by JoeMerchant on Saturday December 30 2017, @02:32AM

        by JoeMerchant (3937) on Saturday December 30 2017, @02:32AM (#615709)

        > Sorry, didn't check you on everything, but I think you missed on the RAM 64Kx6 = 384K.

        > If you meant KiloBIT, I misunderstood you. If you meant KiloBYTE, you show me the scheme to statically store 8 bits with six transistors. ;) Are we eventually getting multilevel DRAM???

        Of course you're right, I haven't had any coffee in a few weeks, so...

        And I never liked pipelines, which is why I love my 6502s and 6811s - you can literally just look at the opcodes and know for sure whether it will get done on time or not.

        Also, the 6811 systems I built always booted up - fully functional - in about the time it took the power switch to go "click." Somehow all this multi-layered pipelined glory just doesn't seem like progress to me.
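The "look at the opcodes and know the timing" property can be illustrated with a sketch: on an unpipelined 6502, each instruction has a fixed base cycle cost, so worst-case timing of a short routine is a simple sum. The routine and the dictionary keys here are invented for illustration; the cycle counts are the documented base costs (ignoring page-crossing penalties):

```python
# Toy cycle-budget check of the kind the comment describes.
# Base cycle costs per the MOS 6502 instruction timing tables.
CYCLES = {
    "LDA #imm": 2,   # load accumulator, immediate
    "STA zp": 3,     # store accumulator, zero page
    "DEX": 2,        # decrement X register
    "BNE taken": 3,  # branch taken (no page crossing)
}

routine = ["LDA #imm", "STA zp", "DEX", "BNE taken"]
total = sum(CYCLES[op] for op in routine)

# At a 1 MHz clock, cycles are microseconds: this loop body takes 10 us.
assert total == 10
```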

        --
        🌻🌻 [google.com]