
posted by NCommander on Monday May 18 2020, @02:00PM   Printer-friendly
from the from-16-to-32-to-64-complete-with-braindamage dept.

For those who've been long-time readers of SoylentNews, it's not exactly a secret that I have a personal interest in retro computing and documenting the history and evolution of the Personal Computer. About three years ago, I ran a series of articles about restoring Xenix 2.2.3c, and I'm far overdue on writing a new one. For those who do programming work of any sort, you'll also be familiar with "Hello World", the first program most, if not all, programmers write in their careers.

A sample hello world program might look like the following:

#include <stdio.h>

int main() {
    printf("Hello world\n");
    return 0;
}

Recently, I was inspired to investigate the original HELLO.C for Windows 1.0, a 125-line behemoth that was talked about in hushed tones. To that end, I recorded a video on YouTube that provides a look into the world of programming for Windows 1.0, and then tests the backward compatibility of Windows through to Windows 10.

Hello World Titlecard

For those less inclined to watch a video, my write-up of the experience is past the fold, and an annotated version of the file is available on GitHub.

Bring Out Your Dinosaurs - DOS 3.3

Before we even get into the topic of HELLO.C though, there's a fair bit to be said about these ancient versions of Windows. Windows 1.0, like all pre-95 versions, required DOS to be pre-installed. One quirk with this specific version of Windows, however, is that it blows up when run on anything later than DOS 3.3. Part of this is due to an internal version check, which can be worked around with SETVER. However, even if this version check is bypassed, there are supposedly known issues with running COMMAND.COM. To reduce the number of potential headaches, I decided to simply install PC-DOS 3.3 and give Windows what it wants.

You might notice I didn't say Microsoft DOS 3.3. The reason is that MS-DOS didn't exist as a standalone product at the time. Instead, system builders would license the DOS OEM Adaptation Kit and create their own DOS, such as Compaq DOS 3.3. Given that PC-DOS was built for IBM's own line of PCs, it's generally considered the most "generic" of the pre-DOS 5.0 versions, and it was chosen as our base. However, due to its age, it has some quirks that would disappear with the later and more common DOS versions.

PC-DOS 3.3 loaded just fine in VirtualBox and — with the single 720 KiB floppy being bootable — immediately dropped me to a command prompt. Likewise, FDISK and FORMAT were available to partition the hard drive for installation. Each individual partition is limited, however, to 32 MiB. Even at the time this was somewhat constrained, and Compaq DOS was the first (to the best of my knowledge) to remove this limitation. Running FORMAT C: /S created a bootable drive, but something oft-forgotten is that IBM actually provided an installation utility known as SELECT.

SELECT's obscurity primarily lies in its non-obvious name and usage, and in the fact that it isn't actually needed to install DOS; it's sufficient to simply copy the files to the hard disk. However, SELECT does create CONFIG.SYS and AUTOEXEC.BAT, so it's handy to use. Compared to later DOS setups, SELECT requires a relatively arcane invocation, with the target installation folder, keyboard layout, and country code entered as arguments, and it simply errors out if these are incorrect. Once the correct runes are typed, SELECT formats the target drive, copies DOS, and finishes the installation.

DOS Select

Without much fanfare, the first hurdle was crossed, and we're off to installing Windows.

Windows 1.0 Installation/Mouse Woes

With DOS installed, it was on to Windows. Compared to the minimalist SELECT command, Windows 1.0 comes with a dedicated installer and a simple text-based interface. This bit of polish was likely due to the fact that most users would be expected to install Windows themselves instead of having it pre-installed.

Windows 1 SETUP

Another interesting quirk was that Windows could be installed to a second floppy disk due to the rarity of hard drives in that era, something we would see later with Microsoft C 4.0. Installation went (mostly) smoothly, although it took me two tries to get a working install due to a typo. Typing WIN brought me to the rather spartan interface of Windows 1.0.

DOS EXECUTIVE

Although functional, what was missing was mouse support. Windows 1.0 predates the mouse as a standard piece of equipment and predates the PS/2 mouse protocol; only serial and bus mice were supported out of the box. There are two ways to solve this problem:

The first, which is what I used, involves copying MOUSE.DRV from Windows 2.0 to the Windows 1.0 installation media, and then reinstalling, selecting the "Microsoft Mouse" option from the menu. Re-installation is required because WIN.COM is statically linked as part of installation with only the necessary drivers included; there is no option to change settings afterward. The SDK documentation details the static linking process, and how to run Windows in "slow mode" for driver development, but the end result is the same. If you want to reconfigure, you need to re-install.

The second option, which I was unaware of until after producing my video, is to use the PS/2 release of Windows 1.0. Like DOS of the era, Windows was licensed to OEMs who could adapt it to their individual hardware. IBM did in fact do so for their then-new PS/2 line of computers, adding PS/2 mouse support. Despite being intended for the PS/2 line, this version of Windows is known to run on AT-compatible machines.

Regardless, the second hurdle had been passed, and I had a working mouse. This made exploring Windows 1.0 much easier.

The Windows 1.0 Experience

If you're interested in trying Windows 1.0, I'd recommend heading over to PCjs.org and using their browser-based emulator to play with it, as it already has working mouse support and doesn't require acquiring 35-year-old software. Likewise, there are numerous write-ups about this version, but I'd be remiss if I didn't spend at least a little time talking about it, at least on a technical level.

Compared to even the slightly later Windows 2.0, Windows 1.0 is much closer to DOSSHELL than to any other version of Windows; it is essentially a graphical bolt-on to DOS, although, through deep magic, it is capable of cooperative multitasking. This was done entirely with software trickery, as Windows pre-dates the 80286 and ran on the original 8086. COMMAND.COM could be run as a text-based application; however, most DOS applications would launch a full-screen session and take control of the UI.
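
To make that concrete, the heart of the cooperative multitasking model is the message loop at the end of WinMain. The sketch below is paraphrased from HELLO.C (comments mine): GetMessage() is the yield point, and an application that never returns to it hangs the entire system.

MSG msg;

/* GetMessage() is where the application voluntarily yields control;
   Windows runs other tasks until a message arrives for this one. */
while (GetMessage((LPMSG)&msg, NULL, 0, 0)) {
    TranslateMessage((LPMSG)&msg);  /* cook raw keyboard input */
    DispatchMessage((LPMSG)&msg);   /* hand it to the window procedure */
}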

This is likely why Windows 1.0 has issues on later versions of DOS: it takes control of internal DOS structures to perform borderline magic on a processor that had no concept of memory protection.

Another oddity is that this version of Windows doesn't actually have "windows" per se. Instead, applications are tiled, with only dialog boxes appearing as free-floating windows. Overlapping windows would appear in 2.0, but it's clear from the API that they were at least planned for at some point. Most notably, the CreateWindow() function call has arguments for x and y coordinates.
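
Here's the actual call from HELLO.C (lightly condensed, comments mine); note the position and size arguments that the tiled window manager simply ignores:

/* Condensed from HELLO.C: x, y, width, and height are already in the
   API, but Windows 1.0's tiled window manager ignores them. */
hWnd = CreateWindow((LPSTR)szAppName,   /* window class name      */
                    (LPSTR)szMessage,   /* caption text           */
                    WS_TILEDWINDOW,     /* tiled, not overlapping */
                    0, 0,               /* x, y   (ignored)       */
                    0, 0,               /* cx, cy (ignored)       */
                    (HWND)NULL,         /* no parent window       */
                    (HMENU)NULL,        /* use the class menu     */
                    (HANDLE)hInstance,  /* owning instance        */
                    (LPSTR)NULL);       /* no extra parameters    */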

My best guess is Microsoft wished to avoid the wrath of Apple, which had gone on a legal warpath against any company that too closely copied the UI of the then-new Apple Macintosh. Compared to later versions, there are also almost no included applications. The most notable ones are NOTEPAD, PAINT, WRITE, and CARDFILE.

WRITE

CARDFILE

While NOTEPAD is essentially unchanged from its modern version, WRITE could best be considered a stripped-down version of Word, and it would remain a mainstay until Windows 95, where it was replaced with WordPad. CARDFILE likewise was a digital Rolodex. It remained part of the default install until Windows 3.1, and remained on the CD-ROM for 95, 98, and ME before disappearing entirely.


PAINT, on the other hand, is entirely different from the Paintbrush application that would become a mainstay. Specifically, it's limited to monochrome graphics, and files are saved in MSP format. Part of this is due to limitations of the Windows API of the era: for drawing bitmaps to the screen, Windows provided Device Independent Bitmaps (DIBs). These had no concept of a palette and were limited to the 8 colors that Windows uses as part of the EGA palette. Color support appears to have been a late addition to Windows, and seemingly wasn't fully realized until Windows 3.0.

Paintbrush (and the later and confusingly-named Paint) was actually a third party application created by ZSoft which had DOS and Windows 1.0 versions. ZSoft Paintbrush was very similar to what shipped in Windows 3.0 and used a bit of technical trickery to take advantage of the full EGA palette.

PAINTBRUSH

With that quick look completed, let's get back to actually reaching HELLO.C, which meant getting the SDK installed.

The Windows SDK and Microsoft C 4.0

Getting the Windows SDK set up is something of an experience. Most of Microsoft's documentation for this era has been lost, but the OS/2 Museum has scanned copies of some of the reference binders, and the second disk in the SDK has both a README file and an installation batch file that together contain most of the necessary information.

Unlike later SDK versions, it was the responsibility of the programmer to provide a compiler. Officially, Microsoft supported the following tools:

  • Microsoft Macro Assembler (MASM) 4
  • Microsoft C 4.0 (not to be confused with MSC++4, or Visual C++)
  • Microsoft Pascal 3.3

Unofficially, there were supposedly versions of Borland C that could also be used, although this is untested and appears not to have been documented beyond some notes on USENET. More interestingly, all of the above tools were compilers for DOS and had no specific support for Windows. Instead, the SDK shipped a replacement linker that could create Windows 1.0 "NE" New Executables, an executable format that would also be used on early OS/2 before being replaced by Portable Executables (PE) on Windows and Linear Executables (LX) on OS/2, respectively.
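
The format itself is easy to spot: an NE binary still begins with a DOS MZ header, whose field at offset 0x3C points to the new-style header and its "NE" signature. As a quick illustration, here is a minimal detector (a sketch written for a modern C compiler, little-endian host assumed; this is not an SDK tool):

#include <stdio.h>
#include <stdint.h>

int main(int argc, char **argv)
{
    FILE *f;
    unsigned char mz[2], sig[2];
    uint32_t lfanew = 0;

    if (argc < 2 || (f = fopen(argv[1], "rb")) == NULL)
        return 1;

    fread(mz, 1, 2, f);                   /* "MZ" DOS header magic        */
    fseek(f, 0x3C, SEEK_SET);             /* e_lfanew: new header offset  */
    fread(&lfanew, sizeof lfanew, 1, f);  /* assumes a little-endian host */
    fseek(f, (long)lfanew, SEEK_SET);
    fread(sig, 1, 2, f);                  /* "NE" for a New Executable    */

    printf("%s: %s\n", argv[1],
           (mz[0] == 'M' && mz[1] == 'Z' && sig[0] == 'N' && sig[1] == 'E')
               ? "NE (New Executable)" : "not an NE binary");
    fclose(f);
    return 0;
}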

For the purposes of compiling HELLO.C, Microsoft C 4.0 was installed. Like Windows, MSC could be run from floppy disk, albeit with a lot of disk swapping. No installer is provided; instead, the surviving PDFs have several pages of COPY commands combined with edits to AUTOEXEC.BAT and CONFIG.SYS for hard drive installation. It was also at this point that I installed SLED, a full-screen editor, as DOS 3.3 only shipped with EDLIN; EDIT wouldn't appear until DOS 5.0.

After much disk feeding and some troubleshooting, I managed to compile a quick and dirty Hello World program for DOS. One other interesting quirk of MSC 4.0 was that it did not include a standalone assembler; MASM was a separate retail product at the time. With the compiler sorted, it was time for the SDK.

Fortunately, an installation script is provided. Like SELECT, it required listing out a bunch of folders, but otherwise it was simple enough to use. For reasons that probably only made sense in 1985, both the script and the README file were on Disk 2, not Disk 1. This was confirmed not to be a labeling error, as the script immediately asks for Disk 1 to be inserted.

SDK Installation

The install script copies files from four of the seven disks before returning to a command line. Disk 5 contains the debug build of Windows, which is roughly equivalent to the checked builds of modern Windows. Disks 6 and 7 have sample code, including HELLO.C.

With the final hurdle passed, it wasn't too hard to get a compiled HELLO.EXE.

HELLO compilation


Dissecting HELLO.C

I'm going to go through these at a high level; my annotated hello.c goes into much more detail on all of these points.

General Notes

Now that we can build it, it's time to take a look at what actually makes up the nuts and bolts of a 16-bit Windows application. The first major difference, simply due to age, is that HELLO.C uses K&R C, on the basis of pre-dating the ANSI C standard. It's also clear that certain conventions weren't commonplace yet: for example, windows.h lacks include guards.
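
For those who've never seen K&R C, the difference is mostly in where the parameter types are declared. A hypothetical example (not from HELLO.C; pre-C23 compilers still accept the old form):

/* K&R style, as used throughout HELLO.C: parameter types appear
   between the parameter list and the function body. */
int add_kr(a, b)
int a;
int b;
{
    return a + b;
}

/* The equivalent ANSI C (C89) prototype style, which HELLO.C predates. */
int add_ansi(int a, int b)
{
    return a + b;
}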

NEAR and FAR pointers

long FAR PASCAL HelloWndProc(HWND, unsigned, WORD, LONG);

Oh boy, the bane of anyone coding in real mode: near and far pointers are a "feature" that many would simply like to forget. The difference is seemingly simple: a near pointer is nearly identical to a standard pointer in C, except it refers to memory within a known segment, while a far pointer also includes the segment selector. Clear, right?

Yeah, I didn't think so. To actually understand what these are, we need to segue into the 8086's 20-bit memory map. Internally, the 8086 was a 16-bit processor, and thus could directly address 2^16 bytes of memory at a time, or 64 kilobytes in total. Various tricks were used to break the 16-bit memory barrier, such as bank switching or, in the case of the 8086, segmentation.

Instead of making all 20 bits directly accessible, memory pointers are divided into a segment, which forms the base of a given pointer, and an offset from that base, allowing the full address space to be mapped. In effect, the 8086 gave four independent windows into system memory through the Code Segment (CS), Data Segment (DS), Stack Segment (SS), and Extra Segment (ES) registers.

Near pointers are thus used in cases where data or a function call is in the same segment, and they contain only the offset; they're functionally identical to normal C pointers within a given segment. Far pointers include both segment and offset, and the 8086 had special opcodes for using them. Of note is the far call, which automatically pushed and popped the code segment when jumping between locations in memory. This will be relevant later.
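
In C terms, the translation from segment:offset to a physical address looks like the sketch below. DOS-era compilers shipped macros along these lines (Borland's MK_FP, for instance); the names here are illustrative:

/* A 20-bit physical address is the 16-bit segment shifted left four
   bits, plus the 16-bit offset. */
#define PHYS_ADDR(seg, off) (((unsigned long)(seg) << 4) + (unsigned)(off))

/* Many segment:offset pairs alias the same byte; both of these name
   physical address 0xB8000, the CGA text framebuffer:
     PHYS_ADDR(0xB800, 0x0000) == 0xB8000
     PHYS_ADDR(0xB000, 0x8000) == 0xB8000 */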

HelloWndProc is a forward declaration of the Hello window callback, a standard feature of Windows programming. Callback functions always had to be declared FAR, as Windows would need to load the correct segment when jumping into application code from the task manager. Windows 1.0 and 2.0, in addition, had other rules we'll look at below.
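
Put together, a minimal Windows 1.x window procedure in the style of HELLO.C looks like this (a condensed sketch, comments mine):

/* Declared FAR PASCAL so Windows can safely call into it from another
   code segment. */
long FAR PASCAL HelloWndProc( hWnd, message, wParam, lParam )
HWND hWnd;
unsigned message;
WORD wParam;
LONG lParam;
{
    switch (message) {
    case WM_DESTROY:
        PostQuitMessage( 0 );   /* drop out of the message loop */
        break;

    default:
        /* everything else goes to the default handler */
        return DefWindowProc( hWnd, message, wParam, lParam );
    }
    return 0L;
}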

WinMain Declaration:

int PASCAL WinMain( hInstance, hPrevInstance, lpszCmdLine, cmdShow )
HANDLE hInstance, hPrevInstance;
LPSTR lpszCmdLine;
int cmdShow;

PASCAL Calling Convention

Windows API functions are all declared with the PASCAL calling convention, also known as STDCALL on modern Windows. Under normal circumstances, the C programming language has a nominal calling convention (known as CDECL) which primarily relates to how the stack is cleaned up after a function call. In CDECL-declared functions, it's the responsibility of the calling function to clean the stack. This is necessary for variadic functions (that is, functions that take a variable number of arguments) to work, as the callee won't know how many arguments were pushed onto the stack.

The downside to CDECL is that it requires additional prologue and epilogue instructions for each and every function call, thereby slowing down execution and increasing disk space requirements. Conversely, the PASCAL calling convention leaves cleanup to be performed by the called function, which usually needs only a single opcode to clean the stack at function end. It was likely due to execution speed and disk space concerns that Windows standardized on this convention (and in fact it still uses it on 32-bit Windows).
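
The difference shows up directly in declarations. An illustrative pair (Win16-era keyword spellings; these are not from HELLO.C):

/* CDECL: the caller cleans the stack. Mandatory for variadic
   functions like printf(), since only the caller knows how many
   arguments it pushed. */
int cdecl SumAll(int count, ...);

/* PASCAL: the callee cleans the stack, typically with a single
   RET n opcode at function end, saving bytes at every call site. */
int PASCAL Add(int a, int b);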

hPrevInstance

if (!hPrevInstance) {
    /* Call initialization procedure if this is the first instance */
    if (!HelloInit( hInstance ))
        return FALSE;
} else {
    /* Copy data from previous instance */
    GetInstanceData( hPrevInstance, (PSTR)szAppName, 10 );
    GetInstanceData( hPrevInstance, (PSTR)szAbout, 10 );
    GetInstanceData( hPrevInstance, (PSTR)szMessage, 15 );
    GetInstanceData( hPrevInstance, (PSTR)&MessageLength, sizeof(int) );
}

hPrevInstance has been a vestigial organ in modern Windows for decades. It's set to NULL on program start, and has no purpose in Win32. Of course, that doesn't mean it was always meaningless. Applications on 16-bit Windows existed in a general soup of shared address space. Furthermore, Windows didn't immediately reclaim memory that was marked unused. Applications thus could have pieces of themselves remain resident beyond the lifespan of the application.

hPrevInstance pointed to these previous instances. If an application still happened to have its resources registered with the Windows Resource Manager, it could reclaim them instead of having to load them fresh from disk. hPrevInstance was set to NULL if no previous instance was loaded, thereby instructing the application to reload everything it needs. Resources are registered with a global key, so trying to register the same resource twice would lead to an initialization failure.

I've also gotten the impression that resources could be shared across applications although I haven't explicitly confirmed this.
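
The registration half of this dance is visible in HELLO.C's initialization routine, which only runs when hPrevInstance is NULL. A condensed version (trimmed from the sample, comments mine):

/* Condensed from HELLO.C's HelloInit(): register the window class
   once; a second RegisterClass() with the same name would fail. */
BOOL HelloInit( hInstance )
HANDLE hInstance;
{
    PWNDCLASS pHelloClass;

    pHelloClass = (PWNDCLASS)LocalAlloc( LPTR, sizeof(WNDCLASS) );

    pHelloClass->lpszClassName = (LPSTR)szAppName;
    pHelloClass->hInstance     = hInstance;
    pHelloClass->lpfnWndProc   = HelloWndProc;
    pHelloClass->style         = CS_HREDRAW | CS_VREDRAW;

    if (!RegisterClass( (LPWNDCLASS)pHelloClass ))
        return FALSE;   /* already registered, or out of memory */

    LocalFree( (HANDLE)pHelloClass );
    return TRUE;
}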

Local/Global Memory Allocations

NOTE: Mostly cribbed from Raymond Chen's blog, a great read for why Windows works the way it does.

pHelloClass = (PWNDCLASS)LocalAlloc( LPTR, sizeof(WNDCLASS) );
LocalFree( (HANDLE)pHelloClass );

Another concept that's essentially gone is that memory allocations were classified as either local to an application or global. Due to the segmented architecture, applications have multiple heaps: a local heap that is initialized with the program and lives in the local data segment, and a global heap, which requires a far pointer to access.

Every executable and DLL got its own local heap, but global heaps could be shared across process boundaries and, as best I can tell, weren't automatically deallocated when a process ended. HEAPWALK could be used to see who allocated what and to find leaks in the address space. It could also be combined with SHAKER, which rearranged blocks of memory in an attempt to shake loose bugs. This is similar to more modern tools like valgrind on Linux, or Microsoft's application testing tools.
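
Side by side, the two heaps look like this (illustrative usage of the Win16 calls; this snippet is not from HELLO.C):

/* Local heap: lives in the application's own data segment, so a
   near pointer is enough. This is what HELLO.C itself uses. */
PWNDCLASS p = (PWNDCLASS)LocalAlloc( LPTR, sizeof(WNDCLASS) );
LocalFree( (HANDLE)p );

/* Global heap: a handle must be locked to get a far pointer, and
   the memory is addressable across process boundaries. */
HANDLE h = GlobalAlloc( GMEM_MOVEABLE, 1024L );
char FAR *fp = (char FAR *)GlobalLock( h );
/* ... use fp ... */
GlobalUnlock( h );
GlobalFree( h );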

HEAPWALK and SHAKER side by side

MakeProcInstance

lpprocAbout = MakeProcInstance( (FARPROC)About, hInstance );

Oh boy, this is a real stinker, and an entirely unnecessary one at that. MakeProcInstance didn't even make it to Windows 3.1, and its entire existence is because Microsoft forgot details of their own operating environment. To explain, we're going to need to dig a bit deeper into segmented mode programming.

MakeProcInstance's purpose was to register a function as suitable for use as a callback. Only functions that have been passed through MakeProcInstance or declared as an EXPORT in the module definition file can be safely called across process boundaries. The reason is that Windows needs to register the Code Segment and Data Segment in a global store to make function calls safely. Remember, each application had its own local heap, which lived in its own selector in DS.
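
In HELLO.C, the thunk returned by MakeProcInstance is what actually gets handed to Windows. The full pattern looks like this (condensed from the sample; FreeProcInstance is the matching cleanup call, though HELLO.C itself keeps its thunk for the life of the program):

FARPROC lpprocAbout;

/* Wrap the About() dialog procedure in an instance thunk so Windows
   can find this instance's data segment when it calls back. */
lpprocAbout = MakeProcInstance( (FARPROC)About, hInstance );
DialogBox( hInstance, MAKEINTRESOURCE(ABOUTBOX), hWnd, lpprocAbout );
FreeProcInstance( lpprocAbout );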

In real mode, doing a CALL FAR to jump to a far pointer automatically pushed and popped the code segment as needed, but the data segment was left unchanged. As such, a mechanism was required to store the additional information needed to find the local heap. So far, this sounds relatively reasonable.

The problem is that 16-bit Windows has this as an invariant: DS = SS ...

If you're a real mode programmer, that might make it clear where I'm going with this. The Stack Segment selector is used to denote where in memory the stack is living. SS also got pushed to the stack during a function call across process boundaries along with the previous SP. You might begin to see why MakeProcInstance becomes entirely unnecessary.

Instead of needing a global registration system for function calls, an application could just look at the stack base pointer (BP) and retrieve the previous SS from there. Since SS = DS, the previous data segment was in fact already saved, and no registration is required, just a change to how Windows handles function prologues and epilogues. This was actually discovered by a third party, and a tool called FixDS was released by Michael Geary that rewrote function prologues to do what I just described. Microsoft eventually incorporated his fix directly into Windows, and MakeProcInstance disappeared as a necessity.

Other Oddities

From Raymond Chen's blog and other sources, one interesting aspect of 16-bit Windows is that it was actually designed with the possibility that applications would have their own address spaces, and there was talk that Windows would be ported to run on top of XENIX, Microsoft's UNIX-based operating system. It's unclear if OS/2's Presentation Manager shared code with 16-bit Windows, although several design aspects and API names were closely linked.

From the design of 16-bit Windows and playing with it, what's clear is that this was actually future-proofing for Protected Mode on the 80286, sometimes known as segmented protection mode. In the 286's Protected Mode, while the physical address space grew to 24 bits, memory was still segmented into 64-kilobyte windows. The primary difference was that segment selectors became logical instead of physical addresses.

Had the 80286 actually succeeded, a protected-mode Windows would have been essentially identical to 16-bit Windows due to how the processor worked. In truth, separate address spaces would have to wait for the 80386 and Windows NT to see the light of day, and this potential ability was never used. The 80386 both removed the 64-kilobyte limit and introduced a flat address space through paging, which brought the x86 processor more in line with other architectures.

Backwards Compatibility on Windows 3.1

While Microsoft's backward compatibility is a thing of legend, in truth it didn't really begin until Windows 3.1. Since Windows 1.0 and 2.0 applications ran in real mode, they could directly manipulate the hardware and perform operations that would crash under Protected Mode.

Microsoft originally released Windows/286 and Windows/386 to add support for the 80286 and 80386, functionality that would be merged together in Windows 3.0 as Standard Mode and 386 Enhanced Mode, along with legacy "Real Mode" support. Due to running parts of the operating system in Protected Mode, many of the tricks applications could perform would cause a General Protection Fault and simply fail. This wasn't seen as a problem, as early versions of Windows were not popular, and Microsoft actually dropped support for 1.x and 2.x applications in Windows 95.

Windows for Workgroups was installed in a fresh virtual machine, and HELLO.EXE, plus two more example applications, CARDFILE and FONTTEST, were copied over with it. Upon loading, Windows did not disappoint, throwing up a compatibility warning right at the get-go.

Windows 3.1 Compatibility Warning

Accepting the warning showed that all three applications ran fine, albeit with broken window sizes due to 0,0 being passed into CreateWindow().

HELLO on Windows 3.1

However, there's a bit more to explore here. The Windows 3.1 SDK included a utility known as MARK. MARK was used, as the name suggests, to mark legacy applications as being OK to run under Protected Mode. It could also enable the use of TrueType fonts, a feature introduced in Windows 3.1.

MARKING

The effect is clear: HELLO.EXE now renders in a TrueType font. The reason TrueType fonts are not immediately enabled can be seen in FONTTEST, where the system typeface now overruns several dialog fields.

TrueType HELLO

The question now was, can we go further?

35 Years Later ...

As previously noted, Windows 95 dropped support for 1.x and 2.x binaries. The same, however, was not true of Windows NT, which modern versions of Windows are based upon. Running 16-bit applications is complicated, though, by the fact that NTVDM is not available on 64-bit installations. As such, a fresh copy of 32-bit Windows 10 was installed.

Some pain was suffered convincing Windows that I didn't want to use a Microsoft account to sign in. After inserting the same floppy disk as used in the previous test, I double-clicked HELLO, and the Feature Installer popped up asking to install NTVDM. After letting NTVDM install, a second attempt showed that, yes, it is possible to run Windows 1.x applications on Windows 10.

HELLO on Windows 10

FONTTEST also worked without issue, although the TrueType fonts from Windows 3.1 had disappeared. CARDFILE loaded but immediately died with an initialization error. I did try debugging the issue and found that WinDbg at least has partial support for working with these ancient binaries, although the story of why CARDFILE dies will have to wait for another day.

windbg

In Closing ...

I do hope you enjoyed this look at ancient Windows and HELLO.C. I'm happy to answer questions, and the next topic I'm likely going to cover is a more in-depth look at the differences between Windows 3.1 and Windows for Workgroups combined with demonstrating how networking worked in those versions.

Any feedback on either the article or the video is welcome, and will help me improve my content in the future.

Until next time,

73 de NCommander

 
  • (Score: 0) by Anonymous Coward on Monday May 18 2020, @02:18PM (1 child)

    by Anonymous Coward on Monday May 18 2020, @02:18PM (#995783)

    it's a feature.

That was when I reported a bug in the BitBlt function that didn't work as per the documentation. They knew it was a bug but couldn't fix it without breaking existing programs. That was 25 years ago. Why do I still remember this?

    • (Score: 0) by Anonymous Coward on Monday May 18 2020, @03:07PM

      by Anonymous Coward on Monday May 18 2020, @03:07PM (#995823)

      Because that bug hasn't been patched yet. It should be fixed when KB8653367997653213568990087532e67 arrives.

  • (Score: 3, Informative) by DannyB on Monday May 18 2020, @02:23PM (3 children)

    by DannyB (5839) Subscriber Badge on Monday May 18 2020, @02:23PM (#995786) Journal

    PASCAL Calling Convention

    Windows API functions are all declared as PASCAL calling convention . . . . left cleanup to be performed by the called function and usually only needed a single opcode to clean the stack at function end. It was likely due to execution and disk space concerns that Windows standardized on this convention (and in fact still uses it on 32-bit Windows.

    This was also the case on the Macintosh (1983). Pascal was Apple's development tool. A few years later as C compilers came to the Macintosh, both the Pascal and C compilers had 'keywords' you could add to function declarations to indicate which calling convention they obeyed.

The Pascal compiler already had Forward declarations (e.g., declare a function, but actually define it later). These Forward declarations had an extended syntax allowing you to specify a 4-digit hex value for which 'system trap' this function should call, which was yet another calling convention. (Essentially, an invalid 68000 machine instruction, followed by that 4-digit hex code. The OS caught the invalid instruction, and if it was this certain one, it looked at the hex code to see which OS 'syscall' to invoke, followed by the appropriate parameters.) The C compiler was given a similar extended syntax to trivially declare functions that were OS 'syscalls' (although 'syscall' wasn't the term).

It was easy to have C and Pascal functions call back and forth. C users had to use functions which obeyed the Pascal string convention (a length byte followed by the character bytes).

    Just going from memory here.

    --
    People today are educated enough to repeat what they are taught but not to question what they are taught.
    • (Score: 2) by NCommander on Monday May 18 2020, @02:53PM (2 children)

      by NCommander (2) Subscriber Badge <michael@casadevall.pro> on Monday May 18 2020, @02:53PM (#995813) Homepage Journal

That made sense on Mac OS, but Windows was written in C and assembly. Classic Mac OS had large parts written in Pascal over time; while it might have bits *written* in Pascal, Windows was a C product through and through. The main reason to use stdcall vs cdecl is the space saving. Remember, Windows 1.0 had to cram into 256 kilobytes in a minimum configuration.

      --
      Still always moving
      • (Score: 2) by DannyB on Monday May 18 2020, @06:25PM

        by DannyB (5839) Subscriber Badge on Monday May 18 2020, @06:25PM (#995952) Journal

        It's hard to think back to the days of having only 256 KB of memory. Not 256 GB, but 256 KB.

        Those were fun days though.

        --
        People today are educated enough to repeat what they are taught but not to question what they are taught.
      • (Score: 3, Interesting) by TheRaven on Tuesday May 19 2020, @11:40AM

        by TheRaven (270) on Tuesday May 19 2020, @11:40AM (#996283) Journal
It made sense on Windows too. The PASCAL calling convention was more efficient, but it couldn't support variadic functions. The vast majority of functions are not variadic, but most C ABIs want to use the same calling convention for both variadic and non-variadic functions. I think this was implicitly a requirement of K&R C. C99, at least (possibly C89), makes it undefined behaviour to call a function with the wrong signature, but some big programs (e.g. Perl and Python) depend on being able to call a non-variadic function via a variadic function pointer, rather than casting it to the correct type first. As a result, every subsequent C ABI has had to do the sub-optimal thing or break compatibility.
        --
        sudo mod me up
  • (Score: 0) by Anonymous Coward on Monday May 18 2020, @03:09PM (8 children)

    by Anonymous Coward on Monday May 18 2020, @03:09PM (#995826)

    If you want to reconfigure, you need to re-install.

    And here we see the genesis of one of the worst parts of MS operating systems:

    You must now reboot to make that configuration change. Reboot now?

This design flaw continues to this day with W10 (although many of the silly reasons for a reboot have slowly been removed over the years).

    But it seems that the MS code monkeys set the world on the path to believing that:

    1. You must reboot to make any configuration change; and
    2. Crashing, and having to reboot, 5+ times per day is totally normal (note, it is not 'normal').

    right from the start.

    • (Score: 3, Informative) by NCommander on Monday May 18 2020, @03:17PM (1 child)

      by NCommander (2) Subscriber Badge <michael@casadevall.pro> on Monday May 18 2020, @03:17PM (#995834) Homepage Journal

The technical reason is actually performance. Windows is relinked (à la old UNIX kernels) with your mouse/keyboard/display drivers. The SDK has an entire document detailing this, and how to change it to load drivers dynamically. A lot of it is that Windows didn't support dynamic driver unloading until the Windows Driver Model, and even then it's not exactly a common operation.

      --
      Still always moving
      • (Score: 2) by Reziac on Tuesday May 19 2020, @03:43AM

        by Reziac (2489) on Tuesday May 19 2020, @03:43AM (#996161) Homepage

        Interesting. And thanks for the detailed article and video -- this was all way more fun than I'd have expected!

        --
        And there is no Alkibiades to come back and save us from ourselves.
    • (Score: 1, Interesting) by Anonymous Coward on Monday May 18 2020, @03:54PM (2 children)

      by Anonymous Coward on Monday May 18 2020, @03:54PM (#995881)

      It's been a long time since you'd have to reboot Windows that often. Even 20 years ago, 5+ times/day was abnormal. I gained some respect for where they were headed around '98 or '99. We had a Windows machine that kept crashing, and the first impulse was to blame MS. When we dug into it and searched on MS's web site, we found them blaming the RAM. We were like, "yeah, right", but decided to change the RAM anyway. The problem went away. Faulty RAM.

      This was always the thing about Windows vs. Macs. Apple had tight control over hardware and drivers. MS had to run on all kinds of junk hardware, with all kinds of junk drivers which are basically kernel modules.

This is not to say that MS was always blameless, but with good hardware and drivers, full-day uptime or maybe one reboot per day was fairly standard 20 years ago. A lot of people would jump on them for that--but they were coming from a sysop/server mentality. Multi-day uptime wasn't as high a priority for desktops. Snappy UIs, and the ability to at least have a go on a wide variety of hardware, were what mattered.

Those one-boot-per-day Windows machines were a staple in our office, while the Linux guys were "I hope it installs this time, on this hardware", and they were forever configuring their desktops rather than just using them. Then at some point, Windows got multi-day uptimes and Linux desktops became relatively stable, and able to run on more hardware without the dire "misconfiguring X-windows may fry your machine" warnings.

It was all a matter of what developers prioritized.

      • (Score: 0) by Anonymous Coward on Monday May 18 2020, @05:38PM

        by Anonymous Coward on Monday May 18 2020, @05:38PM (#995938)

        blame is what u get, when the masses are kept uneducated...
        they should have had a ramtest option at boottime, maybe with the bitfade test as option, like linux now...
        they should have drivers usermode in the early days, instead of waiting to, what windows 7?

        all in all, i think, if they made the "choice" to be there with everything sold as pc, they should have taken the necessary precautions, and they did not, until rather late...
        they would have better reputation today, and not take so much undeserved hate...

        -zug

      • (Score: 2) by Reziac on Tuesday May 19 2020, @03:40AM

        by Reziac (2489) on Tuesday May 19 2020, @03:40AM (#996160) Homepage

One reboot per day, hell. My WfW and 9x systems routinely ran for weeks between restarts, and my first XP system routinely had uptimes in excess of two *years*. So I think this is normal. :)

        But I'd also noted that crashes usually derived from bad hardware or shit drivers, and became fussy about my components.

        This was also why for a long long time, I was sorely disappointed with linux... I was expecting stability, and got an Adventure. Now it's different; my everyday linux box is more stable than Win7/10 on identical hardware. (Tho still has a ways to go to catch up with my XP64 fileserver; it *never* goes down unless the power is out too long. It hasn't been deliberately restarted in 3 or 4 years.)

        --
        And there is no Alkibiades to come back and save us from ourselves.
    • (Score: 2) by Reziac on Tuesday May 19 2020, @03:27AM (2 children)

      by Reziac (2489) on Tuesday May 19 2020, @03:27AM (#996157) Homepage

      I don't know that I'd blame Windows for this. It was pretty common with DOS programs (dBase and Wordstar leap to mind) to have to use a separate config utility (which was really a specialized hex editor) then restart the program for changes to take effect.

      Wordstar's config editor was, even worse, linear. If you made a mistake you had to go through all the options again (about 20 screens worth) to get to the one you missed or messed up. This contributed to my becoming a WordPerfect bigot (it sensibly used an external config file).

      --
      And there is no Alkibiades to come back and save us from ourselves.
      • (Score: 3, Interesting) by NCommander on Thursday May 21 2020, @07:34PM (1 child)

        by NCommander (2) Subscriber Badge <michael@casadevall.pro> on Thursday May 21 2020, @07:34PM (#997529) Homepage Journal

WordPerfect's real killer feature was its printer support, and the fact that it shipped with a printer driver development kit right in the box. Most printers of the era listed all the commands they recognized, so with a given printer and its manual, it was entirely possible to add a driver to WordPerfect with a minimum amount of effort. I also think it could handle generic PostScript and PCL printers. I know it was able to work (as of WP 5.1) with a LaserJet 3 and 4 out of the box, and we didn't have a PostScript card installed in the printer.

        --
        Still always moving
        • (Score: 2) by Reziac on Thursday May 21 2020, @07:48PM

          by Reziac (2489) on Thursday May 21 2020, @07:48PM (#997537) Homepage

          Yep, WP's printer drivers (and their expandability) were a super big deal back in the day, and even if you couldn't roll your own, it shipped with good-enough drivers that most any printer could use. This is how I got the habit of routinely setting up any laser printer as an HPLJ 2 or 3Si, or one of a couple other semi-generic drivers for inkjets and pin-impact. I was right annoyed when printers progressed to where they couldn't emulate one of those old standards anymore and I had to actually keep track of driver disks. :D

          --
          And there is no Alkibiades to come back and save us from ourselves.
  • (Score: 5, Interesting) by canopic jug on Monday May 18 2020, @03:14PM (14 children)

    by canopic jug (3949) Subscriber Badge on Monday May 18 2020, @03:14PM (#995831) Journal

CP/M [ieee.org] was not so bad for that era, but I don't hold the least bit of nostalgia for any version of Windows, especially the early ones up to and including 3.11. They were bloated, complicated, unsteady, unreliable, slow, and crashy. They also were subject to "bit rot", needing frequent reinstallation from a tall stack of floppy disks. Then the file system would need frequent defragmentation, also a major time-waster. Even running it on DR-DOS [theregister.co.uk] did not do much for performance.

I had a brief bit of interest when it was asserted that it could run two or more desktop programs at the same time, similarly to the Macintosh. However, when I found that doing so just made it crash more often, I lost interest. The pre-NT versions were just GUIs on DOS anyway. So if DOS is what you are nostalgic for, there are both FreeDOS [freedos.org] and DOSBox [dosbox.com] to choose from. Have at it. Either will run the old retro CGA/EGA/VGA games.

A more worthy target of retro nostalgia would be the Apple ][ series. Those came with schematic diagrams, instruction manuals [apple2online.com], and in some cases, the source code. They were expensive as hell but quite extensible [charlieharrington.com], even the later models. If you had the right skills and equipment, or could get in contact with someone with both, then you could even burn your own ROMs or make your own peripheral cards. Or if you had an overly large lump of cash burning a hole in your pocket, you could buy various peripherals.

    Or GEM [toastytech.com] or Desqview [toastytech.com] or, on better hardware, the Amiga [amigaos.net].

But Windows, especially back in the early days, was simply a rip-off of the old Macintosh interface [lowendmac.com]. Interestingly, IBM did not learn from Apple's mistakes and got screwed by Bill Gates over OS/2. OS/2 was not around long enough to develop much of a fan base, but it was an excellent choice, except for the lack of applications...

    Or another target for nostalgia would be BeOS [birdhouse.org]. BeOS is gone, but it has inspired HaikuOS [haiku-os.org].

    --
    Money is not free speech. Elections should not be auctions.
    • (Score: 3, Interesting) by NCommander on Monday May 18 2020, @03:22PM (10 children)

      by NCommander (2) Subscriber Badge <michael@casadevall.pro> on Monday May 18 2020, @03:22PM (#995844) Homepage Journal

I'd like to cover the three original operating systems for the PC 5150: DOS 1.0, CP/M-86, and UCSD Pascal, the last of which is almost entirely forgotten. I'm not sure what to say about CP/M aside from a lead-in to talking more about Digital Research and DR-DOS though.

      --
      Still always moving
      • (Score: 3, Informative) by canopic jug on Monday May 18 2020, @03:39PM (5 children)

        by canopic jug (3949) Subscriber Badge on Monday May 18 2020, @03:39PM (#995865) Journal

        The first version of MS-DOS was not much other than an exact rip-off of CP/M. It didn't do much more than allow you to boot and then run your programs.

CP/M was quite common in most computer-using offices prior to Visicalc and then Lotus 1-2-3 [pingdom.com]. There were several dialects, though; from what I remember, that had to do with the number of sectors on the floppies and whether they were double-sided or not. The killer apps, the reason people bought the microcomputers, were WordStar and dBase II. So a common use for CP/M was to run WordStar and many in-house dBase II [blogspot.com] scripts. Though it was also possible to write a lot of custom applications and run them on CP/M. The makers of dBase, Ashton-Tate, fired the footgun with a lawsuit [edesber.com], however, and that was that. From there it seemed that Lotus 1-2-3 became the killer app.

        FoxPro was another good database. dBase / Ashton-Tate did itself in but M$ killed FoxPro.

        --
        Money is not free speech. Elections should not be auctions.
        • (Score: 2) by NCommander on Tuesday May 19 2020, @10:49AM (4 children)

          by NCommander (2) Subscriber Badge <michael@casadevall.pro> on Tuesday May 19 2020, @10:49AM (#996264) Homepage Journal

The thing is, while CP/M was big on 8-bit micros, the 8086 version kinda flopped, and flopped hard. I know there were some productivity apps, but I don't even have anything beyond an assembler to show for a development environment, and there isn't even that much to talk about at a technology level. CP/M wasn't exactly cutting edge when it shipped, and it morphed into DR-DOS. MP/M and Concurrent CP/M have trouble running in emulation (PCem MIGHT be able to run them) due to using the x86's TSS features, which virtually nothing else used. Even if I could get them to run, I don't have any good applications to show; multiuser CP/M required software written for it specifically or was limited to text mode stuff.

GEM and TaskVIEW were all DR-DOS-era stuff. There's enough about DR-DOS to go into detail, and if I did it as a two-part series, I might be able to go more into CP/M, but I'm not confident I could do a freestanding video/article on them.

          --
          Still always moving
          • (Score: 2) by canopic jug on Tuesday May 19 2020, @01:15PM

            by canopic jug (3949) Subscriber Badge on Tuesday May 19 2020, @01:15PM (#996318) Journal

CP/M didn't so much flop on the x86 as it was outmaneuvered by chance and Bill's mom. Remember that Bill inherited IBM's software monopoly through PC-DOS. That's the market where the x86 sales would have taken place. Had his mom not set him up with the right meetings, IBM probably would have gone through with purchasing licenses for CP/M, or paid for an improved version of CP/M from DRI and maybe from Kildall himself.

Although it is rather tedious to browse through the electronic editions, as opposed to real paper copies, there are old BYTE issues [vintageapple.org] online; specifically, the issues from September 1975 through July 1998 are covered. The early years will have a lot about CP/M and all the other operating systems and packages, without pro-M$ revisionism. Kildall's abridged memoirs [ieee.org] are available too, but the missing parts are probably critical of Bill. Kildall was going to write a book, but that never quite got to publishing before he met his untimely demise [nytimes.com] under confusing circumstances. Last I heard, which was ages ago, his family still had the full manuscript but were unwilling to publish it.

            --
            Money is not free speech. Elections should not be auctions.
          • (Score: 2) by Reziac on Thursday May 21 2020, @07:56PM (2 children)

            by Reziac (2489) on Thursday May 21 2020, @07:56PM (#997543) Homepage

I never used CP/M, but had friends who did, and it may be that it flopped hard on its own lack of merit. They were always telling me about how much better it was, then bitching about stuff it couldn't do that my 286 with DOS did with no trouble (by now I don't recall what). Anyway, I think a video comparing CP/M and the various DOS incarnations would be interesting.

            BTW there also existed Concurrent DOS; I'm sure you're aware of it. I have a retail copy somewhere in my stash, tho never used it.

            --
            And there is no Alkibiades to come back and save us from ourselves.
            • (Score: 2) by NCommander on Thursday May 21 2020, @08:44PM (1 child)

              by NCommander (2) Subscriber Badge <michael@casadevall.pro> on Thursday May 21 2020, @08:44PM (#997578) Homepage Journal

I do in fact know of Concurrent DOS. I might see if I can find it on eBay, or some versions of CP/M, as I do prefer to have the physical media if at all possible.

              --
              Still always moving
              • (Score: 2) by Reziac on Thursday May 21 2020, @10:06PM

                by Reziac (2489) on Thursday May 21 2020, @10:06PM (#997610) Homepage

                Yeah, I understand the physical media fetish... my preference too. Tho if the only copy to be had comes from Low Places... well, I see they've got a whole bunch of versions.

                --
                And there is no Alkibiades to come back and save us from ourselves.
      • (Score: 2) by canopic jug on Monday May 18 2020, @06:25PM (3 children)

        by canopic jug (3949) Subscriber Badge on Monday May 18 2020, @06:25PM (#995951) Journal

and UCSD Pascal ...

        Something about UCSD Pascal would be, for me, very interesting. I used it in several contexts. I guess it is available in a non-commercial form over at FreePascal. Does any of it live on in either FreePascal or Lazarus?

        --
        Money is not free speech. Elections should not be auctions.
        • (Score: 2) by NCommander on Monday May 18 2020, @08:56PM (2 children)

          by NCommander (2) Subscriber Badge <michael@casadevall.pro> on Monday May 18 2020, @08:56PM (#996013) Homepage Journal

It's on my TODO list, but it's a few places down. UCSD Pascal shipped as a freestanding OS for the Apple II, IBM PCs, and a bunch of others. It's really quite a bizarre thing, because the whole thing is in p-code, and it's pretty clear using it that what they're running on the 8-bit Apple II is basically the same as on IBM PCs. I also want to touch on the Apple Lisa at some point, and Object Pascal.

          --
          Still always moving
          • (Score: 0) by Anonymous Coward on Tuesday May 19 2020, @12:20AM

            by Anonymous Coward on Tuesday May 19 2020, @12:20AM (#996098)

My brother (gone now) wrote engineering code in UCSD Pascal for the Apple ][ -- he would have preferred C, but the compiler we found didn't include floating point. UCSD Pascal compiled to their p-code, included acceptable floating point, and was, IIRC, considerably faster than interpreted BASIC.

          • (Score: 2) by dry on Tuesday May 19 2020, @03:11AM

            by dry (223) on Tuesday May 19 2020, @03:11AM (#996153) Journal

IIRC, in theory programs were portable: write something on the Apple II and run the binary on a PC. In practice, it evolved and wasn't so compatible between the different versions, and the later releases made some bad design decisions and put them in silicon. I quite liked Apple Pascal on my souped-up II+: 16-bit p-code that used both 64 KB banks on my TransWarp (looked like a IIe), as well as using the leftover RAM as a RAM disk after I wrote the RAM disk driver. Also, writing assembly for it introduced me to relocatable code, not easy on the 6502, as you never knew what address it would be loaded at.
Played a lot of Wizardry too; that was written in Apple Pascal.
            Played a lot of Wizardry too, that was written in Apple Pascal.

    • (Score: 2, Interesting) by ncc74656 on Monday May 18 2020, @10:44PM (2 children)

      by ncc74656 (4917) on Monday May 18 2020, @10:44PM (#996058) Homepage

      A more worthy target of retro nostaligia would be the Apple ][ series. Those came with schematic diagrams, instruction manuals [apple2online.com], and in some cases, the source code. They were expensive as hell

Not for what you got for the price. My first IIe system was $2100, but for that price you got 128K, two floppy drives, a monitor capable of displaying 80-column text, and a printer... a complete system ready to do real work. Most of the 8-bit competition were basically game consoles with keyboards. Floppy drives for them were expensive and slow, and while they offered a wider variety of color-graphics modes that were useful for games, the lack of a proper 80-column mode made doing any sort of real work (word processing, spreadsheets, software development, etc.) somewhat painful.

      Desqview

      A look at this would be interesting. I had a BBS running on my IIe for a few months in the early '90s. I wasn't able to use it for other stuff while the BBS was online, though, so I pieced together another computer to run the BBS: a 286 with a couple megs of RAM, a large hard drive, a fast modem, and a monochrome monitor. It ran DR DOS 6.0 and DESQview (think that's the casing they used). Not only was the IIe freed up, but with DESQview on the BBS box, the BBS could keep running while I ran lightweight maintenance tasks on it at the same time. I had this running for a couple or three years until I swapped out the 286 motherboard for a 386SX, which led to trying out early versions of a UN*X workalike that was calling itself Linux. (SLS Linux in particular, installed from a stack of 5.25" floppies. :-) )

      • (Score: 0) by Anonymous Coward on Tuesday May 19 2020, @12:34AM (1 child)

        by Anonymous Coward on Tuesday May 19 2020, @12:34AM (#996102)

A few years before the ][e was available, I think the price I paid was about the same for a ][+ system. It only had 40-column text and 64K of memory. I did get two floppies, a green-screen monitor (which squealed terribly), and a dot-matrix printer.

Fairly soon we added a Microsoft CP/M card https://en.wikipedia.org/wiki/Z-80_SoftCard [wikipedia.org], which sat on the Apple ][+ bus and had its own Z-80 and another 64K of memory. That gave me 80-column by 25-line text and real word processing with Mince/Scribble (based on CMU Scribe). Years later I briefly met https://en.wikipedia.org/wiki/Neil_Konzen [wikipedia.org], who worked on the CP/M product (someone should edit his wiki page?)

        • (Score: 2) by NCommander on Tuesday May 19 2020, @10:53AM

          by NCommander (2) Subscriber Badge <michael@casadevall.pro> on Tuesday May 19 2020, @10:53AM (#996268) Homepage Journal

One big advantage, though, was that the Apple II architecture was incredibly upgradable. Add an 80-column card, some more memory, and away you go. It wasn't until the Apple IIgs that you got to the point where an original Apple II couldn't be upgraded. The largest thing that was a pain to upgrade was the soldered-on ROM, which couldn't autostart, but you could always PR#x the Disk II controller, and I think Apple officially offered ROM replacements (it wasn't a socketed chip however).

The big killer app for the Apple II was that 80-column card, which made Apple really competitive in business, since 80 columns was the line most people used to divide systems between "home" and "business". Tandy machines mostly ran in 40-column mode, although I know it's possible to convince at least some models to run in 80.

          --
          Still always moving
  • (Score: 4, Insightful) by hemocyanin on Monday May 18 2020, @03:16PM (11 children)

    by hemocyanin (186) on Monday May 18 2020, @03:16PM (#995833) Journal

    I think these are the first pictures I've ever seen on SN. Not sure I like it.

    • (Score: 4, Informative) by NCommander on Monday May 18 2020, @03:18PM (6 children)

      by NCommander (2) Subscriber Badge <michael@casadevall.pro> on Monday May 18 2020, @03:18PM (#995837) Homepage Journal

We've always had the technical capability to post images, and we've done it for other original content in the past, as well as the rare site statistics post. That being said, demonstrating much of this article without images wouldn't be anywhere near as clear.

      --
      Still always moving
      • (Score: 3, Insightful) by hemocyanin on Monday May 18 2020, @08:54PM (4 children)

        by hemocyanin (186) on Monday May 18 2020, @08:54PM (#996010) Journal

        I get that the pictures help make it understandable, but I would hate to see more of that. It feels wrong here for one, but more to the point, this place is a respite from other forums with avatars and embedded video links, memes, graphical emoticons and other crap. There's also something to be said for making TFS very condensed with appropriate links to all the greater detail whether textual, pictorial, audio, or video -- it takes effort to do it well. Anyway, SN is my Clean Well Lit Place and I just want it to stay that way.

        • (Score: 2) by NCommander on Monday May 18 2020, @09:02PM (3 children)

          by NCommander (2) Subscriber Badge <michael@casadevall.pro> on Monday May 18 2020, @09:02PM (#996021) Homepage Journal

It should be noted that it's *not* an embedded video. It's a screenshot that is a link, because I didn't want to iframe in YouTube, and we're hosting all assets here on SN; no third parties. We do respect our users' privacy. If there are really a lot of people who dislike this (no offense, you're the only one to raise it), we can move my articles to a separate nexus which you can disable.

          --
          Still always moving
          • (Score: 2, Informative) by hemocyanin on Monday May 18 2020, @09:38PM (1 child)

            by hemocyanin (186) on Monday May 18 2020, @09:38PM (#996041) Journal

            I was speaking more generally -- pictures being the camel's nose and such. I don't want to sound like a whining asshole -- it is obvious much work went into TFS and I trust SN more than anywhere else to make good decisions about privacy and such.

            It was just sort of shocking to see pictures here -- I would compare the shock level I experienced to walking in on one's parents while they're fucking. Definitely not something you want to see every day (or ever).

          • (Score: 2) by dry on Tuesday May 19 2020, @03:16AM

            by dry (223) on Tuesday May 19 2020, @03:16AM (#996155) Journal

Personally, I hope you keep writing these types of articles, and I don't mind the odd picture when needed, though likewise I wouldn't want to see them regularly when unneeded, and it seems they're only needed for original content.
            Thanks for the interesting article.

      • (Score: 2) by Reziac on Thursday May 21 2020, @08:01PM

        by Reziac (2489) on Thursday May 21 2020, @08:01PM (#997544) Homepage

        I was very surprised by the images, not to mention the article's length and detail, but was too interested to complain. :D

        I suppose if enough folks don't like it, in the rare instance of another such article, one could do a text summary in the usual way, with a link to the full version in a journal entry, where we already expect everyone to do as they damn well please.

        --
        And there is no Alkibiades to come back and save us from ourselves.
    • (Score: 0) by Anonymous Coward on Monday May 18 2020, @11:51PM

      by Anonymous Coward on Monday May 18 2020, @11:51PM (#996087)

      I clicked on it by accident, and the youtube app opened up. I don't want to give Google more data to mine on me, so that was an annoyance.

    • (Score: 2) by The Mighty Buzzard on Tuesday May 19 2020, @02:02AM (2 children)

      Fear not, original content isn't something we do every day or even every year right now. Mind you, we're as open to OC from a random AC as we are from the guy sporting a UID of 2, it's just not something we get a lot of so far.

      --
      My rights don't end where your fear begins.
      • (Score: -1, Flamebait) by Anonymous Coward on Tuesday May 19 2020, @05:57AM (1 child)

        by Anonymous Coward on Tuesday May 19 2020, @05:57AM (#996193)

        So, when did Ncommander get out of jail?

  • (Score: 5, Insightful) by Anonymous Coward on Monday May 18 2020, @03:18PM (5 children)

    by Anonymous Coward on Monday May 18 2020, @03:18PM (#995836)

    Of all the shitty things Microsoft has done, and as terrible as they (and Windows 10) are for the computing industry and everything associated with it, one thing they did that puts almost every Linux distro to absolute shame is backwards binary compatibility. Linus insists on it, but the distros shit on his work there with a library nightmare. And I say this as a Linux fan going back to the 90s. The soft walled garden approach of the distros doesn't get you very far if you have anything non-standard, or anything not compiled by the distro maintainers. And it seems like a huge number of platforms hold backwards compatibility in somewhat similar contempt (especially since Apple goes out of its way to kill unwanted platforms). The viewpoint these days seems to say that software that hasn't had a mandatory update shoved down your throat in a few months is "legacy" and to be treated with disdain if not disgust.

    But not Windows. And that's one reason Windows rules the desktop to this very day (and one reason they took it over in the first place), even if they're trying to kill it with Windows 10 S.

    • (Score: 4, Insightful) by NCommander on Monday May 18 2020, @03:21PM (4 children)

      by NCommander (2) Subscriber Badge <michael@casadevall.pro> on Monday May 18 2020, @03:21PM (#995842) Homepage Journal

      Honestly, IBM mainframes utterly destroy Windows in regards to backward compatibility; 50-60 years is entirely possible, although due to licensing and my lack of hands-on experience, I haven't dived into the topic. That being said, I may do something on mainframes in the future.

      The Linux kernel ABI is (relatively) stable. a.out binaries from very early Linux are iffy, but the modern kernel should be fully compatible with any ELF-based distro. As you note, a lot of the breakage comes from distros and libraries themselves not maintaining their ABIs. Even GCC has broken its ABI (specifically the C++ one).

      --
      Still always moving
      • (Score: 1, Interesting) by Anonymous Coward on Monday May 18 2020, @11:56PM

        by Anonymous Coward on Monday May 18 2020, @11:56PM (#996088)

        Twenty years ago my coworkers on the mainframe side said there were VM instances up that were first started 30 years ago. That would make them 50 now.

      • (Score: 1, Interesting) by Anonymous Coward on Tuesday May 19 2020, @03:19AM (1 child)

        by Anonymous Coward on Tuesday May 19 2020, @03:19AM (#996156)

        Original poster of parent here (re: the one praising Windows backwards compatibility... I should get an actual account sometime, but, anyway!).

        I don't know if that's really a fair comparison. One of the main reasons mainframes even exist is backwards compatibility. They are also exorbitant to purchase and maintain, especially since (AFAIK) just about anyone running any mainframe has an expensive contract with the manufacturer for the entire lifetime of the mainframe, to the extent that in some cases the manufacturer will switch a reserve mainframe to act as a backup to your mainframe on short notice in the unlikely event that it goes down. This may no longer be true, but I think it is, especially since that's the sort of customer who is interested in a mainframe at all nowadays. The real plus of a mainframe over a gaggle of servers in the modern world is "short of nuclear war or continent-wide natural disaster, YOUR PROGRAM WILL FUNCTION, PERIOD."

        While it is true that a mainframe will run programs written in the 1960s just as well today as the day they were first punched onto cards, there's no real analog in the personal-computer world without resorting to virtual machines or emulators. Even then it's less than perfect, and it often requires some level of technical skill to really make it work well. So while the mainframe does beat Windows on backwards compatibility, no two ways about it, on a small-scale platform used by a typical end user (personal, or even commercial), you just aren't going to get much better at it than Windows on a binary level. And believe me, I hate saying that, and am baffled that Microsoft's Windows 10 S efforts are effectively trying to get rid of their strongest point even now, but credit is due here. The only thing I can think of off the top of my head that realistically compares is the original BIOS interface specification, but I don't think that's practical to use anymore, and even if it is, whatever is left of it is slated to go away in the next few years.

        And as a final aside, I absolutely cringe at what the distros have done to Linus' ABI. It takes a lot of work to do something like that, and they basically just throw it in the trash. I'm wondering if it's going to get even worse whenever Wayland takes over - as far as I know the network-oriented nature of X11 means static-compiled binaries have at least a realistic chance of working with a wide variety of versions, but given how the library situation has been, the future does not look good there... and then, after all this mess, a lot of them have the gall to wonder why containers are taking over all over the place on Linux.

        • (Score: 2) by NCommander on Tuesday May 19 2020, @10:58AM

          by NCommander (2) Subscriber Badge <michael@casadevall.pro> on Tuesday May 19 2020, @10:58AM (#996271) Homepage Journal

          X's "network compatibility" has been something of a joke for years. I'm annoyed to see it go, but X has some serious problems that Wayland MIGHT fix. That being said, I'm not convinced Wayland actually fixes these problems in a meaningful sense.

          In truth, if it works at all, it only works from Xorg to Xorg, or from older X forward. The largest problem is that anything using a GPU can't interact with X directly because of that network layer. XRandr basically hacks around the problem by blitting GPU graphics into an X framebuffer, but this causes X to *lag*. It's not as noticeable on localhost, but you can see it across a network. Applications using the Xlib framework basically send BASIC-like drawing commands, which is how they managed such good performance over networks.

          The downside is that this made Xlib somewhat difficult to use, and Motif even worse. Programmers did what they could to patch around X and basically treat it as a dumb drawing surface (Gtk and Qt both do this). The practical upshot is that networked X works more like a really shit VNC than a networked windowing system.
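
          To make "BASIC-like drawing commands" concrete, here's a minimal, hypothetical Xlib sketch (an editorial illustration, not NCommander's code): every call in the Expose handler below becomes a tiny protocol request that the server renders, which is why classic X clients were cheap over the wire, while pushing raw pixels with XPutImage() is the expensive path described above.

          /* minimal Xlib sketch: server-side drawing requests */
          #include <X11/Xlib.h>
          #include <stdio.h>

          int main(void)
          {
              Display *dpy = XOpenDisplay(NULL);   /* may be a TCP connection */
              if (!dpy) { fprintf(stderr, "cannot open display\n"); return 1; }

              int scr = DefaultScreen(dpy);
              Window win = XCreateSimpleWindow(dpy, RootWindow(dpy, scr),
                                               10, 10, 200, 100, 1,
                                               BlackPixel(dpy, scr),
                                               WhitePixel(dpy, scr));
              GC gc = XCreateGC(dpy, win, 0, NULL);
              XSelectInput(dpy, win, ExposureMask);
              XMapWindow(dpy, win);

              for (;;) {
                  XEvent ev;
                  XNextEvent(dpy, &ev);
                  if (ev.type == Expose) {
                      /* each call is a few bytes on the wire, drawn server-side */
                      XDrawLine(dpy, win, gc, 0, 0, 200, 100);
                      XDrawString(dpy, win, gc, 20, 50, "hello", 5);
                  }
              }
          }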

          --
          Still always moving
      • (Score: 0) by Anonymous Coward on Tuesday May 19 2020, @12:50PM

        by Anonymous Coward on Tuesday May 19 2020, @12:50PM (#996303)

      Bringing up GCC ABI changes (which have only been for C++) is hardly fair in a Microsoft context: Microsoft's compiler used to change its C++ ABI with EVERY SINGLE VERSION!
      Many Linux libraries have really good and stable APIs. But people want to use C++, and ABI compatibility with C++ interfaces is such an absurd pain that basically Qt and KDE are the only ones even trying (LLVM is an utter horror and seems to have ticked every single box on "how not to write an interface").
      I suspect a lot of Linux programs from 20-30 years ago will work fine if you install them in a chroot with matching libraries from back then. The difference from Windows is largely that those libraries were not shipped along with the program, because that's bad design and causes loads of compatibility and security issues. But I guess that's just a rehash of the snap/FlatPak/... discussion.

  • (Score: 1, Insightful) by Anonymous Coward on Monday May 18 2020, @03:21PM

    by Anonymous Coward on Monday May 18 2020, @03:21PM (#995843)

    how to run Windows in "slow mode"

    I think this has always been the default

    If you want to reconfigure, you need to re-install.

    Yep, also always how it's been

  • (Score: 3, Interesting) by maxwell demon on Monday May 18 2020, @04:08PM (5 children)

    by maxwell demon (1608) on Monday May 18 2020, @04:08PM (#995892) Journal

    Internally, the 8086 was a 16-bit processor, and thus could directly address 2^16 bits of memory at a time

    That's not the reason. The 8080 was internally (and externally) an 8-bit processor, yet it was able to address more than 2^8 bytes at a time; indeed, it was able to address 2^16 bytes. (Oh, and I just noticed you wrote "bits" instead of "bytes".) And while I don't know much about Intel's early 4-bit processor, the 4004, I'd be very surprised if it could only address 2^4 bytes (or 2^4 nibbles, given that a byte already has 8 bits).

    Being an n-bit processor just means that the width of the data registers is n bits. The 8080 had 8-bit registers, and thus was an 8-bit processor. Addresses on the 8080 were 16 bits, however, and if you wanted to access a byte at a computed address, you'd use two registers (more specifically, the registers h and l) to hold that address. The same would certainly have been possible for 16-bit processors.

    The true reason why the 8086 had that segmented memory model is that Intel wanted to enable automatic translation of machine code from 8080 to 8086.
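
    To make the segmented scheme concrete, here's a small illustrative C sketch (not from the thread): the 8086 forms a 20-bit physical address by shifting the 16-bit segment left four bits and adding the 16-bit offset, which keeps the registers 16 bits wide while growing the address space, and also means distinct segment:offset pairs can alias the same byte.

    #include <stdio.h>

    /* real-mode 8086: physical = 16 * segment + offset */
    static unsigned long phys(unsigned seg, unsigned off)
    {
        return ((unsigned long)seg << 4) + (off & 0xFFFFu);
    }

    int main(void)
    {
        /* two different segment:offset pairs naming the same byte */
        printf("%05lX\n", phys(0x1000, 0x0010));  /* prints 10010 */
        printf("%05lX\n", phys(0x1001, 0x0000));  /* prints 10010 */
        return 0;
    }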

    --
    The Tao of math: The numbers you can count are not the real numbers.
    • (Score: 0) by Anonymous Coward on Monday May 18 2020, @05:14PM (2 children)

      by Anonymous Coward on Monday May 18 2020, @05:14PM (#995926)

      The true reason why the 8086 had that segmented memory model is that Intel wanted to enable automatic translation of machine code from 8080 to 8086.

      Well, yes, the design did facilitate that, but that was not likely the actual reason behind the design of the 8086.

      The real reason is that the 8086 chip was a rushed, emergency, oh shit we've got to do something product that came about from the panic of realizing their next big thing at the time was heading directly towards failure.

      What was that next big thing? Why, the iAPX 432 [wikipedia.org] project, of course. The 432 was supposed to be Intel's big breakthrough, and the 8086 was designed as a stopgap when Intel realized they needed something to hold the market over until they could get the 432 out the door. The 8086 was that "keep us in the market until we can take over with the 432" chip. However, the 432 ended in total failure, and the 8086, by accident, became the savior of the company when IBM picked it (in its 8088 variant) for the new "PC" in 1981.

      The 8086 is almost exactly what one would obtain if the design parameters were: start with the 8080 design; make the minimum changes to turn it into a 16-bit chip with a somewhat larger address space; and, oh, by the way, if we can translate existing 8080 code automatically into 8086 code, that's a bonus. And you've got one weekend to create the architectural design, so get working.

      • (Score: 2) by maxwell demon on Monday May 18 2020, @05:31PM (1 child)

        by maxwell demon (1608) on Monday May 18 2020, @05:31PM (#995932) Journal

        From the Wikipedia page you linked:

        The iAPX 432 enlarged address space over the 8080 was also limited by the fact that linear addressing of data could still only use 16-bit offsets, somewhat akin to Intel's first 8086-based designs

        So at least the segmented addressing cannot be explained by the processor being rushed.

        Anyway, a rushed processor design is a perfect fit for the rushed computer design that was the IBM PC. :-)

        --
        The Tao of math: The numbers you can count are not the real numbers.
        • (Score: 1, Informative) by Anonymous Coward on Monday May 18 2020, @05:48PM

          by Anonymous Coward on Monday May 18 2020, @05:48PM (#995942)

          No, segmentation was not directly due to the 'rush'. The reality is that back in the days when the 8086 was designed (the mid-1970s or thereabouts), segmented memory architectures were quite common, as were paged memory architectures, and no clear winner had been shaken out yet.

          The one difference was that the mainframe/mini CPUs with segmented memory models had variable-sized segments, with maximum segment sizes much larger than 64k. So while the OS managed its physical memory in terms of segments assigned to processes, the processes themselves did not have to concern themselves with the segmentation; they just saw a linear address space within their assigned segments.

          It was only with more time that the CPU world coalesced around the paged memory model as the clear winner.

    • (Score: 2) by NickM on Monday May 18 2020, @06:12PM

      by NickM (2867) on Monday May 18 2020, @06:12PM (#995949) Journal

      Somewhere in between there was the 8088, a 16-bit processor with an 8-bit data bus. Since it was cheaper to buy and easier to integrate than the 8086 (most of the support chips from that era had only 8 bits for data), IBM selected that chip for the original IBM PC.

      One of the first projects I had in college was to build a small system¹ based on that chip. That board and the accompanying sound card on a breadboard were an exceptional learning experience. I learned more in those 3 years of technical college than I did in 4 years of CS at the university.

      1- It consisted of an 8259 (the interrupt controller), an 8254 (a programmable timer), and an 8250 (the UART) with its MAX232, plus one small SRAM chip (8155?) and one ROM socket on a 2" x 5" PCB. The power and the signals of the clock, chip selects, IO/M and ice were on the PCB, and the data and address buses were wire-wrapped.

      --
      I a master of typographic, grammatical and miscellaneous errors !
    • (Score: 3, Informative) by canopic jug on Tuesday May 19 2020, @03:38PM

      by canopic jug (3949) Subscriber Badge on Tuesday May 19 2020, @03:38PM (#996382) Journal

      The 8088 and the 8086 weren't any good with floating-point calculations, and work could really be sped up using a numeric coprocessor. For some workloads, the presence of a coprocessor was like night and day. Ken Shirriff has looked at the 8087 numeric coprocessor [righto.com] under the microscope to discern what he can about what it does and how it does it.

      --
      Money is not free speech. Elections should not be auctions.
  • (Score: 4, Insightful) by fadrian on Monday May 18 2020, @04:23PM (2 children)

    by fadrian (3194) on Monday May 18 2020, @04:23PM (#995903) Homepage

    This was not a pleasant stroll down memory lane - it was more like the worst hike you've ever been on, times eleven (ten's not high enough). I came from the UNIX world, where a flat, 32-bit memory model and a great shell and toolchain were pretty much taken for granted - an elegant, coherent system. I went into the Windows world with its segment pointers and thunks, lousy shell, and ugly and expensive tools. It was truly a nightmare.

    Of course, I could go into Microsoft's many issues along the way, but to be fair, its ugliness and stupidity were not entirely its own, given that it was actually the hellspawn of IBM and Intel. The IBM PC was ugly, archaic, and stupid, even for the standards of that day. Much of this ugliness stems from the weedy and baroque nature of the x86's instruction set, with its special-purpose registers and amazingly stupid address segmentation.

    And as usual, the VHS/Betamax decision delivered by the magic finger of the market pointed squarely upward, away from UNIX and the 32-bit processors that ran it, and somehow simultaneously pointed its thumb downward in a semiotic gesture known by all who enter the gladiatorial battle of man vs. machine. So there you have it: after 40+ years of bad decisions being rendered by almost every player under a near-constant Wintel hegemony, we're left with a "choice" between Linux, Windows, and MacOS. Wow! Two fifty-year-old OSes and one that's only forty years old. All hail the progress brought by the magic finger of the market! Now with SystemD!!!

    --
    That is all.
    • (Score: 2) by NCommander on Thursday May 21 2020, @07:26PM (1 child)

      by NCommander (2) Subscriber Badge <michael@casadevall.pro> on Thursday May 21 2020, @07:26PM (#997522) Homepage Journal

      XENIX-86 and -286 were both things (Xenix 386 has been featured here) which brought segmentation to UNIX. I honestly have to give props to MS for engineering talent, because UNIX as a whole is basically coded on the assumption of a flat memory model. The 386 removed segment size limits (although you're still limited to 8192 selectors), but since it also brought paging, I know of diddly squat that actually uses segmentation, with the exception of early Xen, which could run paravirtualized hosts in ring 1. OS/2 could run drivers in rings 1 and 2, but that was for IOPB reasons, and I believe it still used 16-bit segments, as those were devised for the 80286.

      Intel processor rings are directly dependent on segments, as the CS selector is tied to the ring, and ring changes are handled through CALL FAR (the call gate mechanism). amd64 basically removed processor rings as a side effect of removing segmentation.
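
      A small editorial sketch of that tie between selectors and rings: the ring is carried in the low two bits of a selector (the RPL; for CS this is the current privilege level), bit 2 selects GDT vs LDT, and the remaining 13 bits index the descriptor table - hence the 8192-selector limit mentioned above.

      #include <stdio.h>

      /* decode an x86 segment selector: index | TI | RPL */
      int main(void)
      {
          unsigned sel = 0x001B;             /* e.g. a ring-3 code selector */
          printf("index=%u ti=%u rpl=%u\n",
                 sel >> 3,                   /* 13-bit descriptor table index */
                 (sel >> 2) & 1,             /* 0 = GDT, 1 = LDT */
                 sel & 3);                   /* privilege level (the "ring") */
          return 0;
      }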

      --
      Still always moving
      • (Score: 0) by Anonymous Coward on Tuesday May 26 2020, @03:37AM

        by Anonymous Coward on Tuesday May 26 2020, @03:37AM (#999083)

        Check out the RMX operating system, especially the DOSRMX variant. It's like Windows 3.x had a baby with VxWorks. RMX was Intel's feature demo OS. It used every x86 feature.

        DOSRMX is particularly insane. You can start it from DOS, just as you might start Windows 3.x from DOS. RMX becomes a TSR, enters protected mode, and then sort of virtualizes the DOS instance. There is a hotkey, Alt-SysRq, that can switch between the DOS console and the RMX console. You get both! DOS runs as the lowest-priority task under RMX.

        Filesystems can be native to either OS, with RMX native programs able to access filesystem drivers in DOS (like maybe a Novell Netware share or a FAT12 floppy) and with DOS native programs being able to access filesystem drivers in RMX. Lower-level disk access also goes both ways, with filesystem drivers in both OSes able to access the disk drivers in both OSes. So a DOS program can open a file that is handled by an RMX filesystem driver that runs the BIOS for disk IO, and an RMX program can open a file that is handled by the DOS kernel but with RMX-native drivers handling the disk.

        If you boot RMX natively, without DOS, you can later start up the DOS task. You can kill the DOS task. If you boot RMX via DOS, you can still kill the DOS task, and then you can start a new DOS task. Just don't do that if your RMX filesystem is routed through the DOS/BIOS task for disk IO, since that won't go so well.

        Most RMX executables use lots of little segments with non-zero bases, even when 32-bit code is in use. You can run 16-bit code and you can run flat-mode paged 32-bit code, but the norm is to run lots and lots of tiny little 32-bit segments. It's like 16-bit Windows programs, except that the code is actually 32-bit.

        The C library is simply awe-inspiring. Functions in the C library are reached via call gates or task gates. The pointers you pass are normally far pointers with 32-bit offsets, so 48 bits get pushed on the stack for each pointer. That is needed because of all the little segments, because the C library is in a different x86 hardware task, and because paging is normally disabled.

        Every stupid little OS structure gets a GDT entry with an appropriate limit. You simply can't overflow out of an array, because the hardware will stop you.

        BTW, this OS is still sold today. It's used to run the real-time train control for the London Tube. It's in a lot of safety-critical places.

  • (Score: 4, Funny) by Snotnose on Monday May 18 2020, @04:32PM

    by Snotnose (1623) on Monday May 18 2020, @04:32PM (#995911)

    Hungarian Notation applied to English:
    artThe adjHungarian nGovernment vHas vOrdered pnIts adjCivil nServants infToSpend artThe adjSame nAmount prepOf nCash prepOn adjOpenSource nProjects advAs pnThey vSpend prepOn adjProprietary nSoftware

    Makes things so much easier to read.
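
    For anyone who hasn't suffered the original, the convention being parodied looks like this in real Windows C (illustrative declarations only; the Win16 typedefs are sketched in so the fragment stands alone):

    typedef unsigned short WORD;   /* w    prefix: 16-bit word               */
    typedef char *LPSTR;           /* lpsz prefix: long pointer to a
                                      zero-terminated string (far in Win16)  */
    typedef void *HWND;            /* hwnd prefix: window handle             */
    typedef int BOOL;              /* f    prefix: flag                      */

    WORD  wCount;
    LPSTR lpszCmdLine;
    HWND  hwndMain;
    BOOL  fDone;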

    --
    Why shouldn't we judge a book by it's cover? It's got the author, title, and a summary of what the book's about.
  • (Score: 2) by srobert on Monday May 18 2020, @05:12PM (1 child)

    by srobert (4803) on Monday May 18 2020, @05:12PM (#995925)

    Isn't there someplace where I can just download the binary Hello?

  • (Score: 3, Interesting) by throckmorten on Monday May 18 2020, @05:37PM (2 children)

    by throckmorten (3380) on Monday May 18 2020, @05:37PM (#995936) Homepage

    You should have tried doing this in Modula-2 :)
    My college thesis was a Win 3.1 app written in JPI M2 that required real-mode access to the NIC. Reminds me of more than a few sleepless nights.

    PROCEDURE WinMain(hInstance: Windows.HANDLE; hPrevInstance: Windows.HANDLE;
                      lpszCmdLine: Windows.LPSTR; cmdShow: INTEGER): Windows.BOOL;
    VAR msg   : Windows.MSG;
        hWnd  : Windows.HWND;
        hMenu : Windows.HMENU;
    BEGIN
        IF hPrevInstance = Windows.HANDLE(0) THEN
            IF NOT Init(hInstance) THEN
                RETURN INTEGER(FALSE);
            END;
        END;

  • (Score: 3, Interesting) by shortscreen on Monday May 18 2020, @07:16PM (6 children)

    by shortscreen (2252) on Monday May 18 2020, @07:16PM (#995967) Journal

    You say there was a Windows 1 SDK that contained a special purpose linker... I wonder if that linker is the one that creates the "mtswslnkmcjklsd" padding that can be seen repeatedly in old exe files.

    Another thing to mention about hPrevInstance on 16-bit Windows is that you can start 10 instances of Sound Recorder (or whatever) and have 10 Sound Recorder processes with only one copy of the code segment in memory. So they all run the same code, but with separate state and different instance numbers. They also share a common address space, unlike modern equivalents (i.e. separate virtual address spaces with copy-on-write pages).

    • (Score: 2) by NCommander on Monday May 18 2020, @08:59PM

      by NCommander (2) Subscriber Badge <michael@casadevall.pro> on Monday May 18 2020, @08:59PM (#996016) Homepage Journal

      The actual behavior is a bit different: if hPrevInstance is non-NULL, you can just signal the already-running process to open a new window, since its class is already available, or, in the case of CARDFILE, just bring it to the front. There were loads of apps that used to have issues with this, which got fixed by Windows 95 removing the concept entirely.

      Under protected segmented mode, you could have everything in the same space, albeit with a different segment, and Windows 286 in fact did use this under the hood, although writing actual 32-bit programs wasn't officially supported. Watcom figured out how to do it as a hack with Wat386, and Microsoft did it in the Windows 3.1 kernel and with Win32s.
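
      The classic Win16 pattern being described looks roughly like this (a sketch: FindWindow and BringWindowToTop are real Win16 calls, but the class name and the exact flow are made up for illustration):

      #include <windows.h>

      int PASCAL WinMain(HANDLE hInstance, HANDLE hPrevInstance,
                         LPSTR lpszCmdLine, int nCmdShow)
      {
          if (hPrevInstance != NULL) {
              /* a previous instance already registered our window class;
               * find its window and bring it to the front instead */
              HWND hwndPrev = FindWindow("MyAppClass", NULL);
              if (hwndPrev != NULL) {
                  BringWindowToTop(hwndPrev);
                  return 0;
              }
          }
          /* first instance: register the class, create the window,
           * and run the message loop as usual ... */
          return 0;
      }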

      --
      Still always moving
    • (Score: 2) by NCommander on Monday May 18 2020, @09:00PM (4 children)

      by NCommander (2) Subscriber Badge <michael@casadevall.pro> on Monday May 18 2020, @09:00PM (#996019) Homepage Journal

      Forgot to address the second part. I actually remember reading about that garbage as padding bytes. I posted the binaries below, but that sort of artifact could also come from the compiler's OMF files, and Borland C and Watcom C could both generate those. Technically, Watcom C can generate Windows 1.x/2.x binaries, but it has issues with the windows.h header, and its C initialization code fails. That being said, it would be fixable.

      --
      Still always moving
      • (Score: 2) by dry on Tuesday May 19 2020, @05:02AM (2 children)

        by dry (223) on Tuesday May 19 2020, @05:02AM (#996177) Journal

        I ran lxlite (which also handles NE) on a copy of the binary,

        The file HELLO1.EXE contains 23 bytes in non-resident names table
                                          HELLO1.EXE initial: 5184 final: 4535 gain: 12.6%
        Total gain: 649 bytes

        So some padding there.
        Also notice that file considers it an OS/2 executable; double-clicking opens an OS/2 full-screen session with a SYS1804: The system can't find the file USER. It seems to be confused about whether USER.DLL is an OS/2 DLL.
        Opening hello.exe's properties and going to the session tab gives the options of using either OS/2 or DOS full-screen or windowed sessions, with Win-OS/2 greyed out.
        Firing up File Manager (full screen, as seamless Win 3.1 doesn't work on this i5) to run the programs, you get the warning with the unmarked executables, and both open in the upper left with only the title bar showing until you drag the window larger.
        It'd be interesting to try installing Windows 1 in a VDM, though without a floppy it might be tricky.

        • (Score: 2) by NCommander on Tuesday May 19 2020, @09:37AM (1 child)

          by NCommander (2) Subscriber Badge <michael@casadevall.pro> on Tuesday May 19 2020, @09:37AM (#996252) Homepage Journal

          NE was also used for 16-bit Windows and Presentation Manager shares a lot of design with Windows 1.x/2.x. It also was used for "family mode" binaries which have a DOS version, and an OS/2 version similar to how PE files have a DOS stub.

          It's never been clear how much code, if any, is shared, but the two are close enough that you could actually build for both with some careful coding and preprocessor macros. I need to see if OS/2's SDK for 1.1 (the first version with Presentation Manager) survived; we may take a closer look at it.

          --
          Still always moving
          • (Score: 2) by dry on Tuesday May 19 2020, @04:13PM

            by dry (223) on Tuesday May 19 2020, @04:13PM (#996392) Journal

            Yeah, I think there are some header flags missing in such an old NE. On OS/2 there are bits that say whether a program is WINDOWAPI, WINDOWCOMPAT and such, which control things like whether it runs full screen (where a program can access the graphics directly), whether it's a PM program, and perhaps whether it's OS/2 or Windows. Without those flags, the default is full screen.
            I wouldn't be surprised if there is shared code. MS mostly wrote the 16-bit Presentation Manager, took what they learned, and applied it to Win 3.x; I assume the single-threaded library that still exists for VACPP was to help with family-mode programs, or with running Win 3.x programs as WLO (Windows Libraries for OS/2), which allowed running Win 3.0 programs directly as OS/2 executables.
            Not sure about the SDK for 1.1, but the IBM SDK for 1.3 is at https://winworldpc.com/product/ibm-developers-toolkit/130 [winworldpc.com] and I doubt much besides bug fixes (the MS SDK was very buggy) changed for Presentation Manager.
            Even today all the 16-bit code, or at least the exports, lives on, though a lot of it is just entry points with thunking to the 32-bit API.

      • (Score: 2) by Reziac on Thursday May 21 2020, @08:12PM

        by Reziac (2489) on Thursday May 21 2020, @08:12PM (#997553) Homepage

        You know about this guy's site? It goes on forever... here's one chunk:

        https://www.geoffchappell.com/new/17/05.htm [geoffchappell.com]

        --
        And there is no Alkibiades to come back and save us from ourselves.
  • (Score: 2) by zoward on Monday May 18 2020, @10:25PM

    by zoward (4734) on Monday May 18 2020, @10:25PM (#996051)

    Brings back memories of learning to program Windows, and OS/2, from a pair of books by Charles Petzold. It wasn't pretty, but you learned the real nuts and bolts of the system with him.

  • (Score: 1, Interesting) by Anonymous Coward on Tuesday May 19 2020, @04:23AM (1 child)

    by Anonymous Coward on Tuesday May 19 2020, @04:23AM (#996172)

    Is your username NCommander a callout to the classic DOS utility Norton Commander, later cloned as Midnight Commander? I loved that utility. Didn't love those days though. I remember buying a Sound Blaster 16 and having to plan out my IRQs, move my cards around, disable onboard peripherals to reclaim the limited I/O ports and IRQs, set up extended memory managers, use RAM compressors, configure TSRs... My 386 and 486 felt like a series of kludges that would fall over if you changed anything in the system.

    • (Score: 2) by NCommander on Tuesday May 19 2020, @09:43AM

      by NCommander (2) Subscriber Badge <michael@casadevall.pro> on Tuesday May 19 2020, @09:43AM (#996253) Homepage Journal

      The N isn't for Norton, at least not on a conscious level. I had Norton Commander, and a lot of the old pre-Symantec Norton stuff back in the day. Having had to re-live IRQ hell with the PCI card issues, I'll agree with much of the pain of those days too. I like playing with it, but not as a day-to-day thing.

      --
      Still always moving
  • (Score: 0) by Anonymous Coward on Tuesday May 19 2020, @11:48AM (1 child)

    by Anonymous Coward on Tuesday May 19 2020, @11:48AM (#996287)

    Another oddity is that this version of Windows doesn't actually have "windows" per se. Instead, applications are tiled, with only dialogue boxes appearing as free-floating windows. Overlapping windows would appear in 2.0, but it's clear from the API that they were at least planned for at some point. Most notably, the CreateWindow() function call has arguments for x and y coordinates. My best guess is Microsoft wished to avoid the wrath of Apple, who had gone on a legal warpath against any company that too-closely copied the UI of the then-new Apple Macintosh.

    The timing doesn't really work out. Yes, Apple already had a reputation for being quite aggressive about protecting their work, but at the point in time you're looking at, there was a formal cooperation agreement between Apple and Microsoft that gave MS a lot of leeway. Bluntly, the most likely explanation is that Microsoft knew they wanted to support arbitrary placement of windows but, like Xerox, hadn't actually worked out how to do it effectively. It's kind of odd to contemplate, because Xerox at the time had a patent on using a backing store to preserve and restore screen contents, but they hadn't actually applied it to document windows.
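
    For context, the call in question: CreateWindow() has always taken explicit position and size arguments, even while the 1.x tiler ignored them. A sketch of the Win16-era usage (the values and class name are illustrative; WS_OVERLAPPEDWINDOW is the later spelling of what 1.x headers reportedly called WS_TILEDWINDOW):

    /* somewhere inside WinMain, after RegisterClass: */
    HWND hWnd = CreateWindow(
        "MyAppClass",          /* registered class name (made up here)    */
        "Hello",               /* window caption                          */
        WS_OVERLAPPEDWINDOW,   /* style                                   */
        100, 100,              /* x, y -- the coordinates in question     */
        300, 200,              /* width, height                           */
        NULL, NULL,            /* parent window, menu                     */
        hInstance,             /* owning instance                         */
        NULL);                 /* creation parameter                      */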

    • (Score: 2) by NCommander on Tuesday May 19 2020, @04:48PM

      by NCommander (2) Subscriber Badge <michael@casadevall.pro> on Tuesday May 19 2020, @04:48PM (#996403) Homepage Journal

      Dialog boxes could overlap in this version of Windows, though, and the contents update properly beneath them; WM_PAINT is sent to mark that a window needs to redraw because a window above it overwrote part of its framebuffer. Those properly use X/Y coordinates. The base system was clearly designed with the concept in mind.

      The Digital Research/GEM lawsuit with Apple was already under way at this point as well.

      --
      Still always moving
  • (Score: 1) by cybernoid1975 on Wednesday May 20 2020, @10:09PM (1 child)

    by cybernoid1975 (10761) on Wednesday May 20 2020, @10:09PM (#997118)

    Hi, thanks for the great article. It is very detailed and has depth; I read it in one breath :) I finally understood the difference between FAR and NEAR pointers in Windows programming. The only point in the article I did not understand is MakeProcInstance; I got completely lost on this line:
    "The problem is that 16-bit Windows has this as an invariant: DS = SS ..."
    I have no clue what is meant by that :/ Maybe because I never programmed in real mode on Windows :/ ...

    • (Score: 2) by NCommander on Thursday May 21 2020, @07:30PM

      by NCommander (2) Subscriber Badge <michael@casadevall.pro> on Thursday May 21 2020, @07:30PM (#997524) Homepage Journal

      So, basically, when you CALL FAR, the processor changes the code segment register for you automatically. However, under Windows, every application also has a local heap, stored in memory pointed to by the DS register.

      For cross-process function calls to work, DS has to be loaded with the correct value before you CALL FAR, or the whole thing goes up in smoke. MakeProcInstance registers the data segment and the procedure with the global Windows resource manager to make this work. The catch is that you also need to save the stack segment (SS register) when doing so; this is done as part of the code generated for far calls. Because DS = SS, saving the old DS value is unnecessary: you can simply retrieve the old SS by walking the stack and load it into DS directly. That's what FixDS does.
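
      The canonical idiom, for anyone following along (a sketch: MakeProcInstance, FreeProcInstance, and DialogBox are the real Win16 calls; the dialog procedure and resource name are invented for illustration):

      BOOL FAR PASCAL AboutDlgProc(HWND, unsigned, WORD, LONG);

      void ShowAbout(HANDLE hInstance, HWND hwndOwner)
      {
          /* build a thunk that loads this instance's DS and then jumps
           * to AboutDlgProc -- without it, DS would be wrong on entry */
          FARPROC lpProc = MakeProcInstance((FARPROC)AboutDlgProc, hInstance);
          DialogBox(hInstance, "ABOUTBOX", hwndOwner, lpProc);
          FreeProcInstance(lpProc);
      }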

      --
      Still always moving