Journal of AnonTechie (2275)

Sunday July 02, 2023, 07:51 PM
Hardware

In 1979 the Macintosh personal computer existed only as the pet idea of Jef Raskin, a veteran of the Apple II team, who had proposed that Apple Computer Inc. make a low-cost “appliance”-type computer that would be as easy to use as a toaster. Mr. Raskin believed the computer he envisioned, which he called Macintosh, could sell for US $1000 if it was manufactured in high volume and used a powerful microprocessor executing tightly written software.

Mr. Raskin’s proposal did not impress anyone at Apple Computer enough to bring much money from the board of directors or much respect from Apple engineers. The company had more pressing concerns at the time: the major Lisa workstation project was getting under way, and there were problems with the reliability of the Apple III, the revamped version of the highly successful Apple II.

IEEE Spectrum

An interesting look at the design of the Apple Mac ...

  • (Score: 5, Interesting) by DannyB on Monday July 03, @03:16PM (3 children)

    by DannyB (5839) Subscriber Badge on Monday July 03, @03:16PM (#1314176) Journal

    128 K bytes, that ought to be enough for anybody!

    At the time, in early 1984, it was obvious to everybody except Steve Jobs that 128 K was not enough. All developers understood this. Steve stuck to his guns that the basic machine should come in one simple, non-expandable configuration.

    After internal squabbling at Apple, we eventually got the "Fat Mac", which had 512 K: enough to actually build useful programs.

    In 1987, I was the co-author of a Mac program called Timbuktu.

    What it was...


    Timbuktu was a screen-sharing program over the AppleTalk network: one Macintosh could view the screen of another Macintosh. While this is common today, it was a big deal back then. The other programmer and I were inspired by a PC program called Carbon Copy, which let one PC compatible see the text-only screen of another over a modem. When we were planning Timbuktu, we were not even sure it could be done. We sketched it all out on napkins at restaurants at lunchtime. One day in 1987 we realized there were no technical problems left to solve; we fully understood how to build it.

    How it worked...


    There were two parts: (1) an "INIT", what in today's terminology we would call a system service loaded at boot time, and (2) a desk accessory, i.e., a small program that could run inside any other program, such as while you were in your word processor. The Mac could not run multiple programs at a time, hence the usefulness of small "desk accessory" programs like calculators, scrapbooks, etc.

    I was the more experienced Mac programmer at that point, so I wrote the service back end that was installed at boot time. It was mostly Pascal, with some Motorola 68000 assembly to glue it together and hook it into the right OS interface mechanisms. It had to patch EVERY QuickDraw call so each could be overridden by a function I provided. If Timbuktu was not "capturing" your screen, the patch was no more than a test of the "capturing" switch and a jump to the original QuickDraw function (probably located in ROM, unless the system software had already patched it before us). This background code also had to patch the Event Manager so it could cleverly "pop up" user interface elements within whatever application might be running, and it included the AppleTalk network code that sent all of your screen drawing operations to the other end . . .
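
    To give a flavor of it, here is a minimal sketch of that trap-patching pattern in Toolbox C (our real code was Pascal plus 68000 assembly and patched every QuickDraw trap; the Toolbox calls and the _StdText trap below are real, but SendDrawingOpToPeer is a made-up stand-in for the AppleTalk code):

        /* Minimal sketch of patching one QuickDraw trap (StdText) on
           classic Mac OS, using Universal Interfaces headers. */
        #include <Quickdraw.h>
        #include <Traps.h>    /* _StdText trap number        */
        #include <Patches.h>  /* NGetTrapAddress and friends */

        typedef pascal void (*StdTextProc)(short byteCount, const void *textBuf,
                                           Point numer, Point denom);

        static StdTextProc gOldStdText;  /* original routine, often in ROM     */
        static Boolean     gCapturing;   /* true while a remote viewer watches */

        /* Hypothetical stand-in for the AppleTalk forwarding code. */
        extern void SendDrawingOpToPeer(const void *textBuf, short byteCount);

        static pascal void MyStdText(short byteCount, const void *textBuf,
                                     Point numer, Point denom)
        {
            if (gCapturing)                               /* mirror the op    */
                SendDrawingOpToPeer(textBuf, byteCount);  /* to the viewer    */
            gOldStdText(byteCount, textBuf, numer, denom); /* draw locally    */
        }

        void InstallStdTextPatch(void)
        {
            gOldStdText = (StdTextProc)NGetTrapAddress(_StdText, ToolTrap);
            NSetTrapAddress((UniversalProcPtr)MyStdText, _StdText, ToolTrap);
        }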

    The desk accessory part, which was the product's only user interface, was written by the other programmer, who was very experienced but new to Mac programming. Opening the desk accessory brought up a window showing other Timbuktu nodes on the network that were open to accepting a request to share their screen. If you picked one, the remote Mac would interrupt its user with a prompt that, say, Jane wanted to view the screen. The user could allow this in either view-only mode or view-and-share-control mode.

    This got our company acquired as we added features to the program and planned other software titles based on the same technology: Screen Recorder, which would record your screen to a file that could be played back, and Timbuktu/Remote, which would work over high-speed modems.

    I remember a meeting where we developers, including new ones we had hired, were clear that these products should only run on Macs with 512K or more of memory. A few people insisted that we make it all work on every Mac, including the original 128K Mac. We pointed out the infeasibility of this, and our reasoned arguments won the day.

    There are other stories I could tell about artists who knew nothing about user interface design -- their sketches looked fantastic, but would have been like consumer electronics equipment that looks beautiful yet is a disaster to operate.

    Those days were fun days. Not the only fun days -- I've had plenty of others -- but those few years were fun days.

    I think that Steve Jobs sticking to his 128 K Mac, and other ideas of limiting upgrades, is what got him stripped of his power at Apple.

    --
    If you eat an entire cake without cutting it, you technically only had one piece.
    • (Score: 2) by Tork on Monday July 03, @08:57PM (2 children)

      by Tork (3914) on Monday July 03, @08:57PM (#1314228)

      At the time, in early 1984, it was obvious to everybody, except Steve Jobs, that 128 K was clearly not enough. All developers understood this. Steve stuck to his guns that the basic machine should come in one simple non-expandable configuration.

      This era was well before my time, but I have often wondered whether vector displays being used so much back then played a role in this. It seems like the moment someone describes the memory requirements just for a buffer holding a full-screen raster image is when the lightbulbs start to go on. But vector displays can operate on practically nothing. Maybe there was just that much momentum...?

      Or maybe not. I'm not even sure I'm right about vector displays requiring less RAM. (I latched onto that idea while doing a little retro-gaming.) I wish I had a good way to get my head into what that era of computing was like. I'm in a retro group for '90s DOS games, and a lot of the memes that get posted are about how we just lived with the quirks of the era, like a game crashing and leaving the desktop gamma turned up high enough to give you a sunburn. I feel like there's a whole bunch of nuance that you had to be there for.
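
      For a rough sense of the numbers, here is my back-of-the-envelope arithmetic (assuming the original Mac's 512 x 342 one-bit screen; the vector figure is just a guess at a typical display list):

          /* Rough comparison: raster framebuffer memory vs. a vector
             display list, assuming the original Mac's screen geometry. */
          #include <stdio.h>

          int main(void)
          {
              long raster = 512L * 342L / 8; /* 1 bit/pixel -> 21888 bytes   */
              long vector = 500L * 8;        /* ~500 segments, ~8 bytes each */
              printf("raster framebuffer: %ld bytes\n", raster);
              printf("vector display list: %ld bytes\n", vector);
              return 0;
          }

      So a one-bit framebuffer eats about 21 of the 128 KB, while a vector display list for a typical scene might be only a few KB.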

      --
      🏳️‍🌈 Proud Ally 🏳️‍🌈
      • (Score: 1, Interesting) by Anonymous Coward on Monday July 03, @11:40PM

        by Anonymous Coward on Monday July 03, @11:40PM (#1314251)

        First shooter game? https://www.youtube.com/watch?v=UP2OaKHaDxM [youtube.com] MIT's Spacewar! on the PDP-1 used a vector display. Friends found what they believed to be the original display tube in a Cambridge, MA surplus store and grabbed it. I think it may have eventually wound up in the Computer Museum, back when it was in Boston.

        There were also hardware character displays, say 40 characters x 25 lines. With one byte per character, 1000 bytes was all the memory needed to cover the whole screen. Of course, that meant only 256 choices for each character cell -- usually half ASCII, the other half not standardized.
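
        The addressing is simple enough to sketch (a generic memory-mapped text mode; the actual base address and layout varied by machine):

            /* Sketch of a generic 40x25 memory-mapped character display,
               one byte per cell. Base address and layout varied by machine. */
            #define COLS 40
            #define ROWS 25

            static unsigned char screen[ROWS * COLS]; /* 1000 bytes in all */

            void put_char(int row, int col, unsigned char ch)
            {
                screen[row * COLS + col] = ch; /* hardware draws the glyph */
            }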

      • (Score: 2) by DannyB on Wednesday July 05, @02:06PM

        by DannyB (5839) Subscriber Badge on Wednesday July 05, @02:06PM (#1314540) Journal

        I am trying to imagine attempting to read SN on a vector graphic display with green or blue phosphor.

        Raster displays were the way to go for modern GUIs, especially once you understand how QuickDraw and its Region data structure work. That gives you fast graphics: over 7000 characters per second in 1984, on a machine with an 8 MHz 68000 processor and no hardware text mode. Overlapping windows were easy, because QuickDraw region clipping allowed drawing into partially exposed windows even when the windows on top obscured parts of them in arbitrarily complex ways. And you could take advantage of complex region clipping in your own programs. And did anyone mention that QuickDraw was Amazingly Fast? Insanely fast. The fundamental abstraction is the blitter with regions.
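
        To give the flavor, a toy example with real QuickDraw region calls (the two Rects stand in for actual window geometry; this is an illustration, not how the Window Manager itself was written):

            /* Clip drawing in a window to only the pixels NOT covered by
               an overlapping window, using QuickDraw regions. */
            #include <Quickdraw.h>

            void ClipToExposedArea(const Rect *windowRect, const Rect *overlapRect)
            {
                RgnHandle visible = NewRgn();
                RgnHandle covered = NewRgn();
                RectRgn(visible, windowRect);       /* whole window          */
                RectRgn(covered, overlapRect);      /* part covered in front */
                DiffRgn(visible, covered, visible); /* subtract covered part */
                SetClip(visible);                   /* clip all drawing      */
                /* ...redraw everything; only exposed pixels change... */
                DisposeRgn(visible);
                DisposeRgn(covered);
            }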

        --
        If you eat an entire cake without cutting it, you technically only had one piece.
  • (Score: 3, Touché) by Anonymous Coward on Monday July 03, @07:26PM (1 child)

    by Anonymous Coward on Monday July 03, @07:26PM (#1314211)

    Read through the linked article: no references to the Xerox Alto https://en.wikipedia.org/wiki/Xerox_Alto [wikipedia.org] and only a couple of tiny mentions of the Xerox Star. But the whole idea of a GUI for the Lisa and Mac came from a tour PARC gave to Apple employees -- at least that's the version of the story I remember.

    • (Score: 4, Informative) by DannyB on Wednesday July 05, @02:16PM

      by DannyB (5839) Subscriber Badge on Wednesday July 05, @02:16PM (#1314542) Journal

      Apple did steal a lot from Xerox, if you can call it that. Xerox had no idea what they had; management ordered their top people to show Apple everything. After all, how much could this technology really benefit Xerox? Xerox was a document copy machine company.

      Eventually Apple employed people from Xerox such as Larry Tesler.

      Apple did contribute some of its own inventions, such as the menu bar at the top of the screen with pull-down menus. Microsoft, in its typical fashion, screwed this up by putting menu bars on individual windows. The utility of having the menu bar at the top of the screen, with the menu bar swapped depending on which window is in front, is that you can SLAM the mouse pointer to the top of the screen and hit the menu bar. On MS Windows, you need micromotor precision to get the pointer onto a menu before you can pull it down.

      Apple also invented certain features of dialog boxes.

      The biggest thing, which puzzled the Xerox people, was that on the Mac and Lisa, if you moved or closed a window, everything underneath the newly exposed area was automatically redrawn very quickly, and without using off-screen buffers. Update events were sent to the affected windows, which simply redrew themselves, and QuickDraw's region clipping limited which actual pixels could be touched on screen, so that application drawing, such as half of a spreadsheet cell or half of the letter Z, was nicely clipped without the application having any knowledge of it. The genius behind that was Bill Atkinson. He simply did not know that Xerox could not redraw damaged areas of windows; he assumed it had to be done and came up with a genius way of doing it.
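
      On the application side, that dance was just the standard update-event idiom (a minimal sketch; DrawWindowContents is a hypothetical application routine):

          /* Classic Mac event-loop fragment: on an update event the app
             redraws everything. BeginUpdate swaps the window's visRgn to
             the damaged region, so QuickDraw clips away untouched pixels. */
          #include <Events.h>
          #include <Windows.h>

          extern void DrawWindowContents(WindowPtr w); /* hypothetical */

          void HandleOneEvent(const EventRecord *event)
          {
              switch (event->what) {
              case updateEvt: {
                  WindowPtr w = (WindowPtr)event->message;
                  SetPort(w);
                  BeginUpdate(w);        /* clip to newly exposed region */
                  DrawWindowContents(w); /* redraw the entire window     */
                  EndUpdate(w);          /* restore the normal visRgn    */
                  break;
              }
              /* ...handle other event types here... */
              }
          }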

      --
      If you eat an entire cake without cutting it, you technically only had one piece.