Meta
posted by NCommander on Tuesday August 30 2016, @12:14PM   Printer-friendly
from the int-21h-is-how-cool-kids-did-it dept.

I've made no secret that I'd like to bring original content to SoylentNews, and recently polled the community on their feelings about crowdfunding articles. The overall response was somewhat lukewarm, mostly over how the money would be divided and how authors would be paid. Taking that into account, I decided to write a series of articles for SN in an attempt to drive more subscriptions and readers to the site, and to scratch a personal itch by doing a retro-computing project. The question then became: what to write?

As part of a conversation on IRC, part of me wondered what a modern-day keylogger would have looked like running on DOS. In the world of 2016, it's no secret that various three-letter agencies engage in mass surveillance and cyberwarfare, and a keylogger would be part of any basic set of attack tools. The question is what a potential attack tool would have looked like if it had been written during the 1980s. Back then, the world was a very different place from both a networking and a programming perspective.

For example, in 1988 (the year I was born), the IBM PC/XT and AT would have been relatively common fixtures, and the PS/2 only recently released. Most of the personal computing market ran some version of DOS, and networking (which was rare) frequently took the form of Token Ring or ARCNet equipment. Further up the stack, TCP/IP competed with IPX, NetBIOS, and several other protocols for dominance. From the programming side, coding for DOS is very different from coding for any modern platform, as you had to deal with Intel's segmented architecture and interact directly with both the BIOS and the hardware. As such, it's an interesting look at how technology has evolved since.

Now obviously, I don't want to release a ready-made attack tool to be abused by the masses, especially since DOS is still frequently used in embedded and industrial roles. As such, I'm going to target a non-IP-based protocol for logging, both to explore these technologies and to simultaneously make the result as useless as possible. To the extent possible, I will try to keep everything accessible to non-programmers, but this isn't intended as a tutorial for real mode programming. As such, I'm not going to go super in-depth in places, but I will try to link to relevant information. If anyone is confused, post a comment, and I'll answer questions or edit these articles as they go live.

More past the break ...

Looking At Our Target

Back in 1984, IBM released the Personal Computer/AT, which can be seen as the common ancestor of all modern PCs. Clone manufacturers copied the basic hardware and software interfaces that made up the AT, creating the concept of PC-compatible software. Due to the sheer proliferation of both the AT and its clones, these interfaces became a de facto standard which continues to this very day. As such, well-written software for the AT can generally be run on modern PCs with a minimum of hassle, and it is completely possible to run ancient versions of DOS and OS/2 on modern hardware thanks to backwards compatibility.

A typical business PC of the era likely looked something like this:

  • An Intel 8086 or 80286 processor running at 4-6 MHz
  • 256 kilobytes to 1 megabyte of RAM
  • 5-20 MiB HDD + 5.25" floppy disk drive
  • Operating System: DOS 3.x or OS/2 1.x
  • Network: Token Ring connected to a NetWare server, or OS/2 LAN Manager
  • Cost: ~$6000 USD in 1987

To put that in perspective, many of today's microcontrollers have specifications on par with or better than the original PC/AT. From a programming perspective, even taking resource limitations into account, coding for the PC/AT is drastically different from coding for many modern systems due to the segmented memory model used by the 8086 and 80286. Before we dive into the nitty-gritty of a basic 'Hello World' program, we need to take a closer look at the programming model and memory architecture used by the 8086, which was a 16-bit processor.

Real Mode Programming

If the AT is the common ancestor of all PC-compatibles, then the Intel 8086 is its processor equivalent. The 8086 was a 16-bit processor that operated at a top clock speed of 10 MHz, had a 20-bit address bus that supported up to 1 megabyte of RAM, and provided fourteen registers. Registers are essentially very fast storage locations physically located within the processor that are used to perform various operations. Four registers (AX, BX, CX, and DX) are general purpose, meaning they can be used for any operation. Eight (described below) are dedicated to working with segments and offsets, and the final two are the processor's current instruction pointer (IP) and state (FLAGS).

An important point in understanding the differences between modern programming environments and those used by early PCs is the difference between 16-bit and 32/64-bit programming. At the most fundamental level, the number of bits a processor has refers to the size of the numbers (or integers) it works with internally. As such, the largest possible unsigned number a 16-bit processor can directly work with is 2 to the power of 16, minus 1, or 65,535. As the name suggests, 32-bit processors work with larger numbers, with the maximum being 4,294,967,295. Thus, a 16-bit processor can only reference up to 64 KiB of memory at a given time, while a 32-bit processor can reference up to 4 GiB, and a 64-bit processor can reference up to 16 exbibytes of memory directly.
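To make the 16-bit limit concrete, here is a minimal real-mode fragment in NASM syntax (a sketch added for illustration, not code from the original article) showing the wrap-around at 65,535:

; A 16-bit register holds values from 0 to 65,535 (2^16 - 1)
 mov ax, 0xFFFF   ; AX = 65,535, the largest unsigned 16-bit value
 add ax, 1        ; AX wraps around to 0; the lost 17th bit lands in the carry flag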

At this point, you may be asking yourselves, "if a 16-bit processor could only work with 64 KiB of RAM directly, how did the 8086 support up to 1 megabyte?" The answer comes from the segmented memory model. Instead of directly referencing a location in RAM, addresses were divided into two 16-bit parts: the segment and the offset. Segments are 64-kilobyte sections of RAM. They can generally be considered the computing equivalent of a postal code, telling the processor roughly where to look for data. The offset then tells the processor where exactly within that segment the data it wants is located. On the 8086, the segment value was shifted left by four bits (multiplied by 16) and the offset was added to it, creating 20 bits (or 1 megabyte) of addressable memory. Segments and offsets are referenced by the processor in special registers; in short, you had the following:

  • Segments
    • CS: Code segment - Application code
    • DS: Data segment - Application data
    • SS: Stack segment - Stack (or working space) location
    • ES: Extra segment - Programmer defined 'spare' segment
  • Offsets
    • SI - Source Index
    • DI - Destination Index
    • BP - Base pointer
    • SP - Stack pointer

As such, memory addresses on the 8086 were written in the form segment:offset. For example, the memory address 0x000FFFFF could be written as F000:FFFF. As a consequence, multiple segment:offset pairs could refer to the same location: the addresses F555:AAAF, F000:FFFF, and F800:7FFF all resolve to the same physical byte.
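As a concrete illustration of that aliasing, the following real-mode NASM fragment (a sketch written for this article's explanation, not part of its original code) reads the same physical byte, 0xFFFFF, through two different segment:offset pairs:

; Aliasing sketch: physical address = (segment << 4) + offset
 mov ax, 0xF000
 mov es, ax
 mov al, [es:0xFFFF]   ; F000:FFFF -> 0xF0000 + 0xFFFF = 0xFFFFF

 mov ax, 0xF800
 mov es, ax
 mov ah, [es:0x7FFF]   ; F800:7FFF -> 0xF8000 + 0x7FFF = 0xFFFFF
                       ; AL and AH now hold the same byte, read via two aliases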

The segmentation model also had important performance and operational characteristics to consider. The most important was that, since data could live either in the current segment or in a different one, you had two different types of pointers to work with it. Near pointers (which are just the 16-bit offset) deal with data within the same segment and are very fast, as no state information has to be changed to reference them. Far pointers point to data in a different segment and required multiple operations to work with: not only did you have to load and store the two 16-bit components, you also had to change a segment register to the correct value. In practice, that meant far pointers were extremely costly in terms of execution time. The performance hit was bad enough that it eventually led to one of the greatest (or worst) backward-compatibility hacks of all time: the A20 gate, something I could write a whole article on.
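To make the near/far distinction concrete, here is a small NASM sketch (the labels near_var and far_ptr are made up for illustration) contrasting a near access with a far access that has to juggle a segment register:

section .data
near_var: dw 0x1234           ; lives in the current data segment
far_ptr:  dw 0xFFFF           ; far pointer: 16-bit offset ...
          dw 0xF000           ; ... followed by a 16-bit segment

section .text
 mov ax, [near_var]           ; near access: one instruction, DS untouched

 push ds                      ; far access: preserve DS,
 lds si, [far_ptr]            ;   load DS:SI from the far pointer in memory,
 mov ax, [si]                 ;   dereference it,
 pop ds                       ;   and restore DS afterwards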

The segmented memory model also meant that high-level programming languages had to incorporate lower-level details into themselves. For example, while C compilers were available for the 8086 (in the form of Microsoft C), the C programming language had to be modified to work with the memory model. Instead of just having the standard C pointer types, you had to deal with near and far pointers, and with the layout of data and code within segments, to make the whole thing work. The upshot was that coding for pre-80386 processors required code written specifically for the 8086 and the 80286.

Furthermore, most of the functionality provided by the BIOS and DOS was only available in the form of interrupts. Interrupts are special signals telling the processor that something needs immediate attention; for example, typing a key on the keyboard generates an IRQ 1 interrupt to let DOS and applications know something happened. Interrupts can be generated in software (via the 'int' instruction) or in hardware. As interrupt handling can generally only be done in raw assembly, many DOS apps of the era were written (in whole or in part) in Intel assembly. This brings us to our next topic: the DOS programming model.
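As a quick preview of what a raw interrupt call looks like, here is a sketch (using the standard, well-documented BIOS and DOS services, not code from this article) that waits for a keystroke via the BIOS keyboard service and echoes it back through DOS:

; INT 0x16, AH=00h (BIOS): wait for a keypress; returns scan code in AH, ASCII in AL
 mov ah, 0
 int 0x16

; INT 0x21, AH=02h (DOS): write the character in DL to the console
 mov dl, al
 mov ah, 2
 int 0x21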

Disassembling 'Hello World'

Before digging more into the subject, let's look at the traditional 'Hello World' program written for DOS. All code posted here is assembled with NASM.

; Hello.asm - Hello World
; Assemble to a flat .COM binary with, e.g.: nasm -f bin hello.asm -o hello.com

section .text
org 0x100              ; DOS loads .COM programs at offset 0x100 of their segment

_entry:
 mov ah, 9             ; DOS function 09h: write a $-terminated string
 mov dx, str_hello     ; DS:DX -> the string to print
 int 0x21              ; call DOS
 ret                   ; jump to the INT 20h stub at the start of the PSP, terminating

section .data
str_hello: db "Hello World",'$'

Pretty, right? Even those familiar with 32-bit x86 assembly programming may not be able to tell at first glance what this does. To keep this from getting too long, I'm going to gloss over the specifics of how DOS loads programs and simply explain what this does. For non-programmers this may be confusing, but I'll try to explain it below.

The first part of the file has the code segment (marked 'section .text' in NASM) and our program's entry point. With COM files such as this, execution begins at the top of the file; as such, _entry is where we enter the program. We immediately execute two 'mov' instructions to load a value into the top half of AX (AH), and a near pointer to our string into DX. Ignore the 9 for now; we'll get to it in a moment. Afterwards, we trip an interrupt, with the number in hex (0x21) after it being the interrupt we want to trip. DOS's functions are exposed as interrupts 0x20 to 0x2F; 0x21 is roughly equivalent to stdio in C. 0x21 uses the value in AH to determine which subfunction we want, in this case 9, write string to console. DOS expects DS:DX to point to a string terminated with '$'; it does not use null-terminated strings like you may expect. After we return from the interrupt, we simply exit the program by executing ret.

Under DOS, there is no standard library with nicely named functions to help you out of the box (though many compilers, such as Watcom C, did ship with one). Instead, you have to load values into registers and call the correct interrupt to make anything happen. Fortunately, lists of known interrupts are available to make the process less painful. Furthermore, DOS itself only provides filesystem and network operations; for anything else, you need to talk to the BIOS or the hardware directly. The best way to think of DOS from a programming perspective is as an extension of the basic input/output functionality that IBM provided in ROM, rather than as a full operating system.
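For instance, here are two more INT 0x21 services, taken from the standard DOS interrupt list, following the same load-registers-then-interrupt pattern (a sketch for illustration, not code from this article):

; AH=01h: read one character from standard input (with echo); result returned in AL
 mov ah, 1
 int 0x21

; AH=4Ch: terminate the program, with the exit code in AL
 mov ax, 0x4C00        ; AH = 4Ch, AL = 00h (exit code 0)
 int 0x21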

We'll dig more into the specifics in future articles, but the takeaway here is that if you want to do anything in DOS, interrupts and reference tables are the only way to do so.

Conclusion

As an introductory article, this looked at the basics of how 16-bit real mode programming works and at the DOS programming model. While something of a dry read, it's a necessary foundation for understanding the basic building blocks of what is to come. In the next article, we'll look more at the DOS API and terminate-and-stay-resident programs, as well as hooking interrupts.

Related Stories

RFC: Crowdfunding Articles 44 comments

So, during the last site update article, a discussion came up talking about how those who work and write for this site should get paid for said work. I've always wanted to get us to the point where we could cut a check to the contributors of SoylentNews, but as it stands, subscriptions more or less let us keep the lights on and that's about it.

As I was writing and responding to one specific thread, part of me started to wonder if there would be enough interest to try and crowdfund articles on specific topics. In general, meta articles in which we talk about deploying HSTS or our use of Hesiod tend to generate a lot of interest. So, I wanted to see if there was an opportunity to both generate interesting content and help get some funds back to those who donate their time to keep the lights on.

One idea that immediately comes to mind is an article on deploying DNSSEC in the real world, with an active example of how it can help mitigate hijack attacks against misconfigured domains. Alternatively, on a retro-computing angle, I could cook up something in 16-bit real mode assembly that can load an article from soylentnews.org. I could also do a series on doing (mostly) bare-metal work; i.e., loading an article from PXE boot or UEFI.

However, before I get in too deep into building this idea, I want to see how the community feels about it. My initial thought is that the funds raised for a given article would dictate how long it would be, and the revenue would be split between the author and the staff, with the staff portion being divided at the end of the year as evenly as possible. The program would be open to any SN contributor. If the community is both interested and willing, I'll organize a staff meeting and we'll do a trial run to see if the idea is viable. If it flies, then we'll build out the system to be a semi-regular feature of the site.

As always, leave your comments below, and we'll all be reading ...

~ NCommander

Retro-Malware: DOS TSRs, Interrupt Handlers, and Far Calls, Part 2 30 comments

The Retro-Malware series is an experiment in original content for SoylentNews, written in the hope of motivating people to subscribe to the site and help grow our resources. The previous article talked a bit about the programming environment imposed by DOS and 16-bit Intel segmented programming; it should be read before this one.

Before we get into this installment, I do want to apologize for the delay into getting this article up. A semi-unexpected cross-country drive combined with a distinct lack of surviving programming documentation has made getting this article written up take far longer than expected. Picking up from where we were before, today we're going to look into Terminate-and-Stay Resident programming, interrupt chaining, and get our first taste of how DOS handles conventional memory. Full annotated code and binaries are available here in the retromalware git repo.

In This Article

  • What Are TSRs
  • Interrupt Handlers And Chaining
  • Calling Conventions
  • Walking through an example TSR
  • Help Wanted

As usual, check past the break for more. In addition, if you are a licensed ham operator or have ham radio equipment, I could use your help; check the details at the end of this article.

[Continues...]

FreeDOS Turns 25 Years Old 15 comments

Last week, FreeDOS turned 25 years old. FreeDOS is a complete, Free Software Disk Operating System (DOS) and a drop-in replacement for MS-DOS, which disappeared long ago. It is still used in certain niche cases such as playing legacy games, running legacy software, or certain embedded systems. Back in the day, it was also quite useful for updating BIOSes.

Of those that will be, are, or have been using it, what tasks has it been good for?

Also, at:
The Linux Journal : FreeDOS's Linux Roots
OpenSource.com : FreeDOS turns 25 years old: An origin story
OS News : FreeDOS’s Linux roots
Lilliputing : FreeDOS turns 25 (open source, DOS-compatible operating system)

Earlier on SN:
Jim Hall on FreeDOS and the Upcoming 1.2 Release (2016)
Retro-Malware: DOS TSRs, Interrupt Handlers, and Far Calls, Part 2 (2016)
Retro-Malware: Writing A Keylogger for DOS, Part 1 (2016)


Original Submission

Retrotech: The Novell NetWare Experience 92 comments

In what is becoming a running theme here on SoylentNews, we're reliving the early 90s. Picking up right where I left off with Windows for Workgroups, it was time to look at the 800-pound gorilla: Novell NetWare.

Unlike early Mac, UNIX, and Windows, I didn't actually have any personal experience with NetWare back in the day. Instead, my hands were first guided on a stream of my weekly show, HACK-ALT-NCOMMANDER, hosted as part of DEFCON 201, combined with a binge reading marathon of some very hefty manuals. In that vein, this is more my impression of what NetWare use and administration is like, especially compared to the tools of the day.

Ultimately, I found NetWare a very strange experience, and there were a lot of pluses and minuses to cover, so as usual, here's the tl;dr video summary, followed by a more in-depth write-up.

Novell NetWare video

If you haven't ABENDed your copy of server.exe, click below the fold to learn what all the hubbub was about!

  • (Score: 2, Funny) by Anonymous Coward on Tuesday August 30 2016, @12:20PM

    by Anonymous Coward on Tuesday August 30 2016, @12:20PM (#395225)

    And he's writing Malware?!!

    Call Homeland Security immediately.

    • (Score: 2) by The Mighty Buzzard on Tuesday August 30 2016, @12:38PM

      by The Mighty Buzzard (18) Subscriber Badge <themightybuzzard@proton.me> on Tuesday August 30 2016, @12:38PM (#395232) Homepage Journal

      Millennial?! He's a late 80s child. That's firmly in Gen-Y territory.

      --
      My rights don't end where your fear begins.
    • (Score: 2) by NCommander on Tuesday August 30 2016, @12:42PM

      by NCommander (2) Subscriber Badge <michael@casadevall.pro> on Tuesday August 30 2016, @12:42PM (#395235) Homepage Journal

      You know, I thought my age was relatively common knowledge on this site :P

      --
      Still always moving
      • (Score: 3, Funny) by MostCynical on Tuesday August 30 2016, @01:53PM

        by MostCynical (2589) on Tuesday August 30 2016, @01:53PM (#395269) Journal

        Those of us the wrong side of forty just like to pretend "young" means "about 35". Always a shock to find anyone under that knows *anything* :-)

        --
        "I guess once you start doubting, there's no end to it." -Batou, Ghost in the Shell: Stand Alone Complex
        • (Score: 1, Insightful) by Anonymous Coward on Tuesday August 30 2016, @06:35PM

          by Anonymous Coward on Tuesday August 30 2016, @06:35PM (#395391)

          It's weird. Somewhere in my mid 20s, my attitude shifted away from "don't trust anyone over 30" to "oh god, please don't fucking associate me with these jackass smug fucking college students."

          Now, at the age of 32, it's solidified on "don't trust anyone under 35." Seems to be the magic number for whether I'm going to think someone is full of shit or not. We'll see if that age goes up again when I hit 35 myself.

  • (Score: 1, Interesting) by Anonymous Coward on Tuesday August 30 2016, @12:48PM

    by Anonymous Coward on Tuesday August 30 2016, @12:48PM (#395238)

    It was primitive and there was zero protection, but everything was right there in front of you. Great for tinkerers.

    Game devs in particular loved DOS. There was a series of video modes which IBM came up with, which was later extended by the VESA manufacturers consortium, I think. EGA, VGA, XGA, and "SVGA" (which was really a bunch of modes) were the ones business programmers worked with, but there were also several low-resolution modes which were very convenient for game development.

    I remember reading the Microsoft Press book on DirectX. The author, who worked at Microsoft, admitted that before DirectX was introduced, company evangelists pitching Windows as a game platform were met by chants of "DOS! DOS!" at conferences.

    • (Score: 4, Informative) by NCommander on Tuesday August 30 2016, @01:01PM

      by NCommander (2) Subscriber Badge <michael@casadevall.pro> on Tuesday August 30 2016, @01:01PM (#395242) Homepage Journal

      I didn't bring it up in this post, but video operations on DOS are an interesting beast since you have direct access to video memory. Depending on your hardware, you had the monochrome, CGA, EGA, and then VGA address space above conventional memory.

      Assuming you wanted more than what ANSI.SYS could provide, you would have to directly poke that memory to make the magic happen. Sound and other similar stuff would require accessing the proper TSRs and making that magic happen. I don't miss DOS per se, but I do miss the flexibility it provided in a lot of ways. My biggest regret is that 80286 protected mode flopped in the market though; the segmented memory model actually provides natural protection similar to the NX bit (but better). A stack smash could only destroy the stack segment, and not the program as a whole, which drastically kills an entire range of attacks.

      While C would still need a flat memory model ((E)CS=(E)DS at a minimum) to act like it should, it would have allowed other programming languages to afford a hell of a lot more security and prevent all sorts of various stupidity.

      --
      Still always moving
      • (Score: 2) by FatPhil on Tuesday August 30 2016, @01:52PM

        by FatPhil (863) <pc-soylentNO@SPAMasdf.fi> on Tuesday August 30 2016, @01:52PM (#395268) Homepage
        > C would still need a flat memory model ((E)CS=(E)DS at a minimium)

        Are you sure? I think all it really needs is sizeof(void*) = sizeof(void(*)()), so that a void* cast can be reversibly performed. There's no requirement to ever be able to call data or dereference code, so who cares if the segments are different?
        --
        Great minds discuss ideas; average minds discuss events; small minds discuss people; the smallest discuss themselves
        • (Score: 0) by Anonymous Coward on Tuesday August 30 2016, @02:15PM

          by Anonymous Coward on Tuesday August 30 2016, @02:15PM (#395280)

          You're probably right, at least most of the time, provided that the compiler handled the segments properly. A more significant issue was passing a stack address as an argument to a function, and then dereferencing it, which would fail miserably if SS != DS. And they usually weren't equal for 16-bit DOS programming.

        • (Score: 2) by NCommander on Tuesday August 30 2016, @02:47PM

          by NCommander (2) Subscriber Badge <michael@casadevall.pro> on Tuesday August 30 2016, @02:47PM (#395294) Homepage Journal

          Segmented model breaks pointer arithmetic, and makes a lot of things much harder in C. The compiler has to do some epic magic to make it work.

          The canonical example is dealing with pointer comparison. Because multiple segment:offset pairs can point to the same logical place, you can't compare two pointers and know they're in the same place without fully evaluating them out to flat memory notation. For example, a pointer to F000:FFFF and F555:AAAF point to the same place, but if you compare them to each other, you would get not equal. The solution is using the third type of pointer, known as huge pointers which normalize pointers to the highest possible segment (which breaks segment aliasing). If a pointer is modified in any way, the pointer has to be recalculated to the huge model to make those comparisons work.

          This also causes a lot of pain when dealing with nested arrays if they're in two different segments, because you have to have huge pointers to know if they point to the same location. Once again, you have to fix this at runtime because you don't know for sure where your data structures will land in memory. (In DOS, this wasn't a big deal since the compiler could assume you had all of conventional memory to play with, since conventional memory is always a 1:1 mapping. 80286 protected mode threw that out the window since you now had an MMU and hardware-based task switching.)

          --
          Still always moving
          • (Score: 2) by maxwell demon on Tuesday August 30 2016, @05:51PM

            by maxwell demon (1608) on Tuesday August 30 2016, @05:51PM (#395370) Journal

            Segmented model breaks pointer arithmetic

            No, it doesn't, it just makes it more complicated to implement. And actually the big problem of the 80286 wasn't really the segmentation, but the fact that segments were only 64 KByte at a time where larger data structures were already reasonable. Also, in protected mode, it was perfectly possible (and reasonable) to make different segments not overlap (thus making a simple segment/offset comparison sufficient).

            Note that the only operation C guarantees to work for pointers to unrelated objects is equality comparison. If you limit the maximal object/array size to the maximal segment size, for all other cases you just need to do arithmetic on the offset part. As I already wrote, the problem was that this was 64KB, which was no longer a reasonable limit at that time.

            Note that in real mode, equality comparison could be done by just calculating the linear address on the fly. There's no need to normalize all pointers. Nowhere does C require that equal pointers have equal bit patterns.

            --
            The Tao of math: The numbers you can count are not the real numbers.
        • (Score: 2) by maxwell demon on Tuesday August 30 2016, @05:27PM

          by maxwell demon (1608) on Tuesday August 30 2016, @05:27PM (#395359) Journal

          Actually the C standard doesn't guarantee casting between data pointers and function pointers. And DOS and the various memory models (SMALL, LARGE, HUGE) are probably the reason.

          --
          The Tao of math: The numbers you can count are not the real numbers.
          • (Score: 2) by FatPhil on Tuesday August 30 2016, @10:50PM

            by FatPhil (863) <pc-soylentNO@SPAMasdf.fi> on Tuesday August 30 2016, @10:50PM (#395485) Homepage
            Perhaps I shouldn't have snipped "to act like it should", which had various implicit "I need certain things to work" assumptions that seemed to include some kind of mixing of code pointers and data pointers. C doesn't guarantee any compatibility between the two, but it doesn't guarantee anything working if you cast to an incorrect type of function pointer either - so void* is no worse an error than a void(*)(). In order for the mixing of data and function pointers (presumably casting a fn ptr to a void* and then back again) to work, all you need is a programmer-friendly-do-the-sane-thing compiler and the condition I specified; you don't need CS = DS because no sane programmer would attempt to use a code pointer as a data pointer or vice versa. (Unless you are part of the OS/loader/linker/toolchain, in which case, all bets are off.)
            --
            Great minds discuss ideas; average minds discuss events; small minds discuss people; the smallest discuss themselves
            • (Score: 2) by maxwell demon on Wednesday August 31 2016, @10:33AM

              by maxwell demon (1608) on Wednesday August 31 2016, @10:33AM (#395638) Journal

              But some of the DOS memory models had different sizes for code and data pointers. So in casting between the two you might lose the segment part.

              --
              The Tao of math: The numbers you can count are not the real numbers.
              • (Score: 2) by FatPhil on Wednesday August 31 2016, @11:02AM

                by FatPhil (863) <pc-soylentNO@SPAMasdf.fi> on Wednesday August 31 2016, @11:02AM (#395643) Homepage
                Doctor, it hurts when I do >this<. Well, don't do that then. CS = DS ("tiny") is not required for the lack of pain, "small" model works fine.
                --
                Great minds discuss ideas; average minds discuss events; small minds discuss people; the smallest discuss themselves
                • (Score: 2) by maxwell demon on Wednesday August 31 2016, @10:40PM

                  by maxwell demon (1608) on Wednesday August 31 2016, @10:40PM (#395904) Journal

                  Well, maybe you were only writing SMALL programs. Others were writing MEDIUM or COMPACT programs.

                  And frankly, apart from some dynamic linking interfaces (which of course don't exist on DOS), I never saw any need to cast between a function pointer and a data pointer.

                  --
                  The Tao of math: The numbers you can count are not the real numbers.
                  • (Score: 2) by FatPhil on Thursday September 01 2016, @07:45AM

                    by FatPhil (863) <pc-soylentNO@SPAMasdf.fi> on Thursday September 01 2016, @07:45AM (#396079) Homepage
                    I was writing code for all different memory models - doing image processing on 100 megapixel images was tricky whilst staying within 64KB. And Borland C in DOS did have dynamic linking (probably from version 2, definitely by version 3).
                    --
                    Great minds discuss ideas; average minds discuss events; small minds discuss people; the smallest discuss themselves
      • (Score: 2) by Fnord666 on Tuesday August 30 2016, @11:58PM

        by Fnord666 (652) on Tuesday August 30 2016, @11:58PM (#395506) Homepage

        Assuming you wanted more than what ANSI.SYS could provide, you would have to directly poke that memory to make the magic happen. Sound and other similar stuff would require accessing the proper TSRs and making that magic happen. I don't miss DOS per se, but I do miss the flexibility it provided in a lot of ways.

        Ah the good old days of double buffering and page flipping to reduce flicker. Wait, are there people on here who know what a command line compiler looks like?

        • (Score: 0) by Anonymous Coward on Wednesday August 31 2016, @12:07PM

          by Anonymous Coward on Wednesday August 31 2016, @12:07PM (#395651)

          are there people on here who know what a command line compiler looks like?

          i cut my teeth on turbo pascal in high school

          i'm only a handful of years older than NC

        • (Score: 2) by NCommander on Thursday September 01 2016, @05:14AM

          by NCommander (2) Subscriber Badge <michael@casadevall.pro> on Thursday September 01 2016, @05:14AM (#396038) Homepage Journal

          Command line compiler as in MASM, NASM, or early watcom?

          I still sometimes call cl from the command line when I'm testing it, and I suspect most linux developers have invoked GCC by hand too :)

          --
          Still always moving
    • (Score: 3, Informative) by NCommander on Tuesday August 30 2016, @01:12PM

      by NCommander (2) Subscriber Badge <michael@casadevall.pro> on Tuesday August 30 2016, @01:12PM (#395247) Homepage Journal

      Second followup: Microsoft really tried hard to kill DOS for gaming. DirectX was finally what made Windows 95 a viable platform but it wasn't the first attempt by a longshot.

      Before DirectX, you had WinG in Windows 3.1 which gave you roughly the equivalent of what DirectDraw does today. The big problem was that you didn't control the hardware under Windows and were dependent on a full set of working VxDs to get functionality. In addition, you were constrained by the fact that you were still operating in real mode (for the most part). The design of Windows meant that far calls were basically unavoidable since you had to context switch to the kernel thread to make anything happen. As such any operations would take the performance hit that far calls would endure, and you had much less memory available. While Windows wasn't super bloated (it would fit in 2 MiB of RAM), games frequently wanted everything it had.

      As an additional consequence of this, 32-bit pointers are slower than 16-bit near pointers. Most people don't realize this, but you can access the e*x registers even in real mode if they exist, essentially providing you with a 16-bit upper word for additional storage if you were coding in assembly. In addition, a quirk in the 80386 actually let you put the processor in protected mode, set up a flat memory model, and then return to real mode with the selectors remaining 32-bit. That meant you could get the performance of near calls (and 16-bit code) while having a 32-bit flat memory layout, known as unreal mode. That let you do really fun stuff like mov dx, ds:[eax].

      --
      Still always moving
      • (Score: 2) by Post-Nihilist on Tuesday August 30 2016, @09:32PM

        by Post-Nihilist (5672) on Tuesday August 30 2016, @09:32PM (#395449)

        Someone who remember about the unreal mode, I am quite pleasantly surprised. It was a nice undocumented feature that made external DPMI like DOS/4G mostly useless. However, outside the demo scene, I never saw a distributed application that ran in unreal mode...

        --
        Be like us, be different, be a nihilist!!!
        • (Score: 0) by Anonymous Coward on Tuesday August 30 2016, @10:45PM

          by Anonymous Coward on Tuesday August 30 2016, @10:45PM (#395484)

          What was the last processor that supported unreal mode?

          Does it work in dosbox?

          Does it still work on x86_64 chips running in x86/16 bit emulation mode?

        • (Score: 2) by NCommander on Thursday September 01 2016, @05:12AM

          by NCommander (2) Subscriber Badge <michael@casadevall.pro> on Thursday September 01 2016, @05:12AM (#396037) Homepage Journal

          Unreal mode actually was a bug/implementation quirk in the processor. Internally, everything on the 80386 was based on 32-bit segments/selectors, with real mode simply faking being 16-bit by masking off the top half of the word. Unreal mode allowed an implementation detail to leak. It wasn't a problem on the 80286 because the only way to leave protected mode was a processor reset. As far as I know, according to Raymond Chen's "The Old New Thing", quite a few games used unreal mode, which prevented them from working under Windows entirely. HIMEM.SYS also used unreal mode vs. protected mode according to Wikipedia (likely for performance reasons).

          --
          Still always moving
  • (Score: 2, Informative) by crb3 on Tuesday August 30 2016, @01:02PM

    by crb3 (5919) on Tuesday August 30 2016, @01:02PM (#395243)

    Look into https://en.wikipedia.org/wiki/Borland_Turbo_C. [wikipedia.org] It came out about then and gave MS C a real run for its money. The libraries were good and the documentation was actually helpful. I bought in at V1.5; V2.0 Professional added Turbo Assembler and Turbo Debugger; I learned a *lot* about how C works, just from stepping code I'd written and compiled in TDB's CPU pane and watching the stack-frame mechanism in action. The debugger in the IDE didn't have that assembly-level detail but it made chasing bugs fun. Versions up to V3.1 have an IDE that'll run in a real-mode system, after that it's Windows-dependent.

    Another toolbox was Spontaneous Assembly, SpontAsm. My copy of the V2 manual is dated 1989,1990, so it came out at the late end of your window. It consisted of C-like libraries in assembly, for assembly, again with good documentation; working with it made X86 assembly comprehensible for me. Warning: the textmode windowing code in it can be slo-o-ow, especially on a 12MHz '286, which is what I had then (but I was probably abusing it with constant refreshes in the modem program I wrote).

    • (Score: 3, Interesting) by NCommander on Tuesday August 30 2016, @01:21PM

      by NCommander (2) Subscriber Badge <michael@casadevall.pro> on Tuesday August 30 2016, @01:21PM (#395251) Homepage Journal

      I actually had tried to use OpenWatcom to get a period-appropriate assembler for writing this, but I think the x86_16 support bitrotted out of it: when I did mov dx, offset str_hello, I got a garbage pointer (the assembler set dx to the top of the CS). Combined with the fact that WASM16 is essentially undocumented, I gave up in frustration and switched to NASM, since I at least know what I'm doing with that.

      I knew Borland C had been released as freeware, but I didn't realize the older 16-bit versions were as well, and it appears Turbo Assembler was released as well. I may have to install it on FreeDOS and go really period specific. I need an actual linker to build an EXE so I don't have to deal with relocating things in conventional RAM; writing COMs as a TSR is generally a "bad idea" because they load to 0x100 and unless you relocate it yourself, you can get clobbered really really easily. If I load high into UMA or at least the top of conventional RAM, it makes it that much easier to survive shitty software. Course I might allow myself to pretend I'm on a 80286, with >1 MiB of RAM, and do 80286 protected mode magic, and just leave a thunk in conventional memory to kick me to and from. In that case I need to patch the memory map calls from the BIOS and hide myself in EMS (by marking that region of memory as unavailable).

      The tricky bit is going from protected mode on the 80286 back into real mode requires a triple fault and catching the reset vector to go back into real mode. It wasn't until the 80386 until a quick way to go protected->real mode existed.

      --
      Still always moving
      • (Score: 3, Interesting) by crb3 on Tuesday August 30 2016, @02:01PM

        by crb3 (5919) on Tuesday August 30 2016, @02:01PM (#395274)

        > writing COMs as a TSR is generally a "bad idea" because they load to 0x100 and unless you relocate it yourself, you can get clobbered really really easily.

        Umm, it's been awhile, but IIRC you load it as a normal *.com (64k code, 64k data max), it gets loaded into normal memory just like any other program, hooking into the interrupt table as required, and the TSR process fiddles the pointers so that subsequent programs load above it. TSRs are how various drivers get added, and it's best if they're loaded first thing (from config.sys and autoexec.bat) because they effectively lock up the memory below them by fiddling with the pointers, including the memory footprint of any program running at the time the TSR is loaded.

        I currently run BC3.1's textmode IDE in DOSbox (haven't really touched the Borland C in a long while, but it works, and I run old-DOS-OrCAD v3 a lot for schematics so I know that environment is robust); TC2 should run there just fine so you've got normal tooling available for development.

        TASM also has a remote capability, in case that proves helpful for your purposes: install its hooks on the target machine, then step/run/examine (across a serial link) using a more comfy console. IIRC it's not quite as flexible as the gdb/gvd combo I use in Linux C, but it can step code down at the assembly level as well as C source level.

        • (Score: 1) by crb3 on Tuesday August 30 2016, @02:10PM

          by crb3 (5919) on Tuesday August 30 2016, @02:10PM (#395278)

          > TASM also has a remote capability

          Oops, no, that's TDB.

        • (Score: 2) by NCommander on Tuesday August 30 2016, @02:16PM

          by NCommander (2) Subscriber Badge <michael@casadevall.pro> on Tuesday August 30 2016, @02:16PM (#395281) Homepage Journal

          Does it?

          I'm working from very old Microsoft documentation describing how TSRs work in practice, but the damn shit is hard to follow without a copy of the Intel System Reference at hand to explain the 16-bit magic. My understanding is when you call TSR and give it the paragraphs you want to save, it just leaves them where they are in conventional memory. No rebase is implied. I was going to use this weekend with a debugger and NASM working out the exact behavior of TSRs on DOS. I've done x86_16 programming before in firmware, but not much in DOS before. I've found plenty of code examples on how to write a TSR, but most of them lack linker invocation. Given the lack of .org 0x100 in most of the example source I've seen, I assumed they were being linked to EXE and then rebased by the LE loader.

          --
          Still always moving
          • (Score: 2) by sjames on Tuesday August 30 2016, @08:08PM

            by sjames (2882) on Tuesday August 30 2016, @08:08PM (#395411) Journal

            A key to TSRs is to write PIC code. That is, code that uses only relative accesses so that it runs OK at any arbitrary address. Then you can just grab a bit of RAM and copy it in.

            If you haven't seen it, get Ralf Brown's Interrupt List [cmu.edu]. It has a lot more than a list of interrupts.

          • (Score: 0) by Anonymous Coward on Tuesday August 30 2016, @11:04PM

            by Anonymous Coward on Tuesday August 30 2016, @11:04PM (#395487)

            But NCommander, have you looked at bitsavers.org?

            Look in the file listing file about a page down. (Recent.txt)

            It has a treasure trove of documentation in it, going from the Pentium Pro era back to Univac (1940s!!!!)

            They are adding stuff on a weekly if not daily basis and could use all the help they can get in case any other soylentils have vintage software or hardware documentation, or firmware/oses dating back 20 years or more.

            Hope that helps!

      • (Score: 2) by bzipitidoo on Tuesday August 30 2016, @02:10PM

        by bzipitidoo (4388) on Tuesday August 30 2016, @02:10PM (#395277) Journal

        Watch out, Borland C has some serious bugs. Stay under 64k of data memory, and you should be safe. Go over that, and you're in trouble.

        A minor bug in version 2.0 was that x>>=1 was translated into assembler incorrectly. The compilation would abort and the IDE gave the programmer a dump of assembler code. Was simple enough to workaround by writing x = x >> 1 instead, but still, doesn't give one a feeling of confidence in the compiler.

        The killer bug was the inability to handle segmented memory correctly. On programs that reserved more than 64K for data, the compiler would generate code that reused the same 64k segment instead of separate segments. Two different variables would end up trying to use the same address. While the x>>=1 problem was fixed sometime between version 2.0 and 4.5, the mismanagement of segmented memory was still present in 4.5 and I think 5.1. I learned of this problem when I was trying to figure out why a program I'd written just was not working. In the debugger, I put watches on everything and saw an element of an array change at the same time as a loop counter was incremented. I switched to gcc in Linux and the problem went away.

        • (Score: 2) by NCommander on Tuesday August 30 2016, @02:22PM

          by NCommander (2) Subscriber Badge <michael@casadevall.pro> on Tuesday August 30 2016, @02:22PM (#395285) Homepage Journal

          I was planning on doing this pure assembly, no C. I'd like to think Turbo Assembler is relatively bug free; writing a x86_16 assembler isn't exactly rocket science.

          TSRs need to be written in assembly anyway because you're operating as ISRs; it's a bad idea to try and write those in C unless you're very sure of what your compiler is doing and the code it's emitting, because you have to push state, disable interrupts, do X, pop state, then iret back into DOS. If I actually want to do anything besides load and store stuff, I'm going to have to override the return vector before I iret. My plan was to put a mini-binary in UMA with the ISR, override the return vector on the stack, and iret into code I control.

          That saves me a lot of headache, as DOS interrupts are not reentrant; I can't safely call one within the ISR unless I'm sure I'm not in DOS (INDOS=0). That way I'm not fighting DOS to do things like operate the network controller. Or in other words, I'm going to make DOS multitask :)

          --
          Still always moving
      • (Score: 0) by Anonymous Coward on Tuesday August 30 2016, @04:37PM

        by Anonymous Coward on Tuesday August 30 2016, @04:37PM (#395337)

        I gave up in frustration and switched to NASM
        Having used both you are better off with NASM (which I think has a MASM compat mode). I think I have the docs for the watcom one kicking around here somewhere in book form (I bought it ages ago). If I remember right the 32bit asm compiler pretty much spits out 16 bit codes. You have to go out of your way to get it to spit out 32bit codes and its compat with MASM is crappy. Just stay away from the other registers and it should encode correctly. It will fault out pretty quick if you get something wrong :) MASM would be the one to get if you want to stick to period style coding. It had much better support and everyone used it. NASM is at least semi supported still these days so you can get some help easily. If I were doing any ASM work with DOS NASM is probably the one I would pick.

        I wrote a few keyboard hooks myself. I usually used turboc and watcomc. They made it pretty dead easy to do. Basically just write the hook. Call the right dos functions with your function pointer and make sure you call the old hook at the end and you were usually good.

        I recommend this book https://www.amazon.com/dp/0201403994 [amazon.com]
        One of the better ones if you want to understand the PC/DOS architecture. There is supposedly a 4th ed. So you probably could get that.

        I personally am trying to do some win3.1 reverse engineering. Have not found a 'good' bit on how the Win16NE format works, Win32PE is very well documented. Very few of the free disassemblers work with it. IDAPro supposedly does but the free ones do not. The win16 platform is interesting as it depends heavily on DOS. But in many cases basically sucks the brains out of it. Win9x was even more brain sucking but at the bottom still depended on DOS for a few things.

        • (Score: 2) by NCommander on Tuesday August 30 2016, @05:12PM

          by NCommander (2) Subscriber Badge <michael@casadevall.pro> on Tuesday August 30 2016, @05:12PM (#395348) Homepage Journal

          Well, I wanted to use free tools if at all possible so others can take the code and play with it. 16-bit MASM hasn't been part of Visual C++ in decades, and I believe it only persists in the Windows DDK. I'm using FreeDOS to test my stuff in VirtualBox with FTPing the files across (annoying but workable).

          Part of me is tempted to see if I could get one of the older LANman implementations to talk to it; I think Samba still has support for LANman, though I don't have a Linux install on this laptop due to technical reasons (my old laptop committed suicide on me last week, and my current contract work is all Windows-specific).

          --
          Still always moving
          • (Score: 0) by Anonymous Coward on Tuesday August 30 2016, @08:36PM

            by Anonymous Coward on Tuesday August 30 2016, @08:36PM (#395424)

            https://www.microsoft.com/en-us/download/details.aspx?id=12654 [microsoft.com]

            Oh I agree with you. Although, MASM can be handy to have around for those old asm scripts that still crawling around on the net. Not sure it was ever included with the VC kit (maybe the DDK like you said). I always had to get it stand alone. It was not a cheap package for someone on a college budget. By the time they made it 'free' I had little use for it. But yeah NASM is the better choice these days. Not too sure how good that download is without the docs. Which was a large part of the package. 3 VERY thick very well written books that described how to use it. A couple of months ago I threw out some copies as we were shutting down an office that had been around since 1992 and there was a lot of *old* useless software laying around.

            Also I would look into what some of the IDEs can help you with. I would bet there is a plugin for eclipse and I know there is probably one for notepad++. If you want to stick strictly 'DOS' the watcom vi editor is one of the best there is for that sort of thing. But it is probably more tied into the watcom build stack. But you may be able to bend it to use NASM as it is very configurable thru scripts.

            Pretty sure LANMAN should still work on just about any windows box NT4 and up. If you are using something like win10 you may have to fiddle a couple of settings to get it to join the workgroup correctly. Think it is just a couple of checkboxes in the network settings and setting the workgroup name under the machinename. But it should be basically baked in.

            DR-DOS is what I used for years. As that is what came with my computer and I didnt have 80 bucks to buy a copy of MS-DOS. At least for something that basic and it mostly worked anyway... Only had 1 or 2 programs that did not work. So it was 'good enough'.

            I used to use a program called HELPPC. http://stanislavs.org/helppc/ [stanislavs.org] I personally like the format better than rbrown stuff. The rbrown stuff seems to be a bit more extensive and more up to date though. I have been using his stuff for some of my win3x spelunking. Should get ahold of my old boss. He was a wizz at that win31 stuff and he could point out what I need.

            If you are using a VM for DOS programs you may want to look into some of the TSRs that help with the CPU usage (dosidle, winwait, etc). FreeDOS may already have built that in but it is worth checking into.

            • (Score: 2) by Post-Nihilist on Tuesday August 30 2016, @09:53PM

              by Post-Nihilist (5672) on Tuesday August 30 2016, @09:53PM (#395464)

              ... I would bet there is a plugin for eclipse...

              Using Eclipse to write pure ASM programs (unless it is Java bytecode asm) is quite blasphemous. If I remember correctly, winasm would be a reasonable choice if you really wanted to use an IDE for i686 ASM.

              --
              Be like us, be different, be a nihilist!!!
              • (Score: 2) by NCommander on Thursday September 01 2016, @05:07AM

                by NCommander (2) Subscriber Badge <michael@casadevall.pro> on Thursday September 01 2016, @05:07AM (#396035) Homepage Journal

                I'm using NASM and Notepad++ since at the moment I'm running primarily in Windows due to unrelated work stuff. I initially test code in DOSbox for sanity before popping it over via FTP to FreeDOS as I have yet to get lanman client setup.

                --
                Still always moving
            • (Score: 2) by dry on Wednesday August 31 2016, @05:05AM

              by dry (223) on Wednesday August 31 2016, @05:05AM (#395583) Journal

              Getting LANMAN to work on XP wasn't too hard. Getting it to work on Win7 was very hard, and getting it to work on Win 10 is next to impossible. The problem is the authentication is just too insecure. Even OS/2 ships with a Samba server/client now so it can communicate with Windows.

  • (Score: 2) by Thexalon on Tuesday August 30 2016, @01:49PM

    by Thexalon (636) on Tuesday August 30 2016, @01:49PM (#395264)

    8086? Luxury! Try the 8088, the original chip on the IBM PC, where they were bragging about being able to address 1 whole megabyte of memory using a weird system based on "segments" where the actual physical address was determined by both a "segment" and an "address" register, e.g. DS << 4 + DX. One handy reference from that era is Undocumented DOS: A Programmer's Guide to Reserved MS-DOS Functions and Data Structures, which can show you all about how to do things like figure out how all the memory blocks are allocated, how FAT is laid out on disk, and how the APIs all work at a very low level. Knowing this stuff had some real benefits: For example, my dad and I were able to recover a bricked Windows 3.1 system by calculating out the location of the FAT filesystem structures on disk, and then discovering that somebody had managed to delete Config.sys (which still mattered, a lot, on Windows 3.1).

    --
    The only thing that stops a bad guy with a compiler is a good guy with a compiler.
    • (Score: 2) by Thexalon on Tuesday August 30 2016, @01:51PM

      by Thexalon (636) on Tuesday August 30 2016, @01:51PM (#395267)

      Dang it, formatting!

      When talking about addressing, I was referring to DS << 4 + DX, which apparently we need HTML special chars to make work on Soylent.

      --
      The only thing that stops a bad guy with a compiler is a good guy with a compiler.
    • (Score: 2) by NCommander on Tuesday August 30 2016, @02:11PM

      by NCommander (2) Subscriber Badge <michael@casadevall.pro> on Tuesday August 30 2016, @02:11PM (#395279) Homepage Journal

      That's essentially how UNDELETE worked. DOS used a special character in the FAT to mark a file as deleted. Thus if you walk the FAT, and things haven't been overwritten, undeletion is possible. Format basically worked the same way, which is why UNFORMAT was a thing.

      As for using the 8086, well, the 8088 in the XT is not fully forward compatible. DOS-compatible software for the XT will work on AT-compatible systems, but anything that talked to the BIOS (aka almost everything) would generally break on the XT->AT jump. I haven't actually decided if I want to try and make this run on a real AT (via emulation), but it might be a nifty challenge, and then do a follow up showing it running on bare metal on some i7 running DOS 6.22 or something. I think I have a i7 with a NE2000 compatible NIC which should at least in theory work. Funny enough, the i7 was the first processor that simply said "eh, fuck it", and locks A20 to on, which means DOS 3.3 won't run in it due to lacking the wrap around. Later DOS versions should be fine though.

      --
      Still always moving
      • (Score: 2) by sjames on Tuesday August 30 2016, @08:42PM

        by sjames (2882) on Tuesday August 30 2016, @08:42PM (#395427) Journal

        The AT had an 80286. It also had a few odd workarounds to increase compatibility with the 8088 like an AND gate on the A20 address line so a few programs that depended on addresses wrapping at the 1MB mark would work. Extended memory worked by enabling the A20 line and using dirty segment tricks to access beyond 1MB while still in real mode.

        • (Score: 2) by NCommander on Tuesday August 30 2016, @09:14PM

          by NCommander (2) Subscriber Badge <michael@casadevall.pro> on Tuesday August 30 2016, @09:14PM (#395438) Homepage Journal

          I must have been dead last night when I wrote story and comments. For some reason I thought the AT had a max of 1 MiB, but wikipedia says it had a max of 16 MiB. I think somewhere between my notes and backslash (admin console) AT and XT got crossed. It's hard to tell based on Google what a typical late 80s AT would have looked like though having 1-2 MiB of RAM probably is in the realm of reasonable.

          Actually, looking at the Wikipedia page, an AT could probably have run Windows 3.1 Standard mode and DOS 5 if you put enough RAM into it. Maybe I'll show off 80286 protected mode if I can think of a reason to enter it; you can bounce back to real mode via the triple-fault check.

          --
          Still always moving
          • (Score: 2) by sjames on Tuesday August 30 2016, @09:43PM

            by sjames (2882) on Tuesday August 30 2016, @09:43PM (#395454) Journal

            IIRC Win 3.1 was a problem on the AT due to the crippled (compared to 386) protected mode. It was also dog slow in protected mode (and so the real mode segment trick to access extended memory).

            In practice, protected mode was avoided as much as possible until the '386 got it right, including the ability to return to real mode without a reset or triple fault.

          • (Score: 2) by dry on Wednesday August 31 2016, @05:51AM

            by dry (223) on Wednesday August 31 2016, @05:51AM (#395589) Journal

            I think that the AT (286) could actually address a GB of virtual memory in protected mode. 32 bit OS/2 limited itself to a GB of address space (512MBs user) so the 16 bit API could address all memory.

            • (Score: 2) by NCommander on Thursday September 01 2016, @01:57AM

              by NCommander (2) Subscriber Badge <michael@casadevall.pro> on Thursday September 01 2016, @01:57AM (#395985) Homepage Journal

              The 80286 is not a 32-bit processor; it could address at most 24 bits of memory, for 16 MiB total. Intel jumped to 32-bit with the 80386.

              --
              Still always moving
              • (Score: 2) by NCommander on Thursday September 01 2016, @03:02AM

                by NCommander (2) Subscriber Badge <michael@casadevall.pro> on Thursday September 01 2016, @03:02AM (#396002) Homepage Journal

                Sorry, I should say it's not a 32-bit-clean processor, as it didn't have a 32-bit address bus.

                --
                Still always moving
              • (Score: 2) by dry on Thursday September 01 2016, @03:31AM

                by dry (223) on Thursday September 01 2016, @03:31AM (#396011) Journal

                Address 24 bits of physical memory. In protected mode, the segment selector was more versatile, allowing 1 GB of virtual memory with each task seeing 16 MB max. Play with the GDT and it is possible for one process to access the full GB, though that's not very practical on the 286.
                See e.g. http://nptel.ac.in/courses/Webcourse-contents/IIT-KANPUR/microcontrollers/micro/ui/Course_home4_32.htm [nptel.ac.in]
                To quote the relevant section under Memory Addressing in the 80286:

                Protected Virtual Addressing Mode (PVAM) - In this we have 1 GByte of virtual memory and 16 Mbyte of physical memory. The address is 24 bit. To enter PVAM mode, Processor Status Word (PSW) is loaded by the instruction LPSW.

                • (Score: 2) by NCommander on Thursday September 01 2016, @04:16AM

                  by NCommander (2) Subscriber Badge <michael@casadevall.pro> on Thursday September 01 2016, @04:16AM (#396028) Homepage Journal

                  Looks like you're technically correct. The best kind of correct.

                  The (un)fortunate truth is that, outside of the Intel programming manuals (a last resort due to how blasted dry they are), segmented protected mode is basically undocumented. It's close to unheard of that a period-specific 80286 would have even hit the 16 MiB RAM limit, and not a single online resource I've seen talking about LGDT actually talks about setting up true segments. They basically set a ring 0/3 segment and call it good. The LDT gets a footnote at best.

                  Since the MMU is enabled in protected mode, and W^X is also a thing, you can't call into real-mode code and assume it will work in standard protected mode; DOS expected that any address could be RWX, even if you limited the GDT to a ring 0 segment where DOS would expect it. I'd love a chance to play with segmented protected mode in this article, but I can't think of a real-world way it could work: on very low memory systems you don't have anything beyond conventional memory, so the issue is moot. Newer systems might have RAM above 1 MiB, *but* entering protected mode would break standard DOS applications (and EMM386) unless I did some complicated magic to shunt back down to real mode, plus making sure it played nice with anything using a DOS extender.
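
                  For what it's worth, "true" segments are just more descriptor entries in the GDT. As data they look roughly like this (NASM syntax, 286-style 8-byte descriptors with the final word zeroed; my own sketch with made-up bases, not anything from a period reference). The usual tutorials stop at the first two:

                  gdt_start:
                          dq  0                        ; descriptor 0: mandatory null descriptor

                  gdt_code:                            ; ring 0 code segment, base 010000h, limit 0FFFFh
                          dw  0FFFFh                   ; limit 15:0
                          dw  0000h                    ; base  15:0
                          db  01h                      ; base  23:16
                          db  10011010b                ; present, DPL=0, code, readable
                          dw  0                        ; reserved on the 80286 (must be zero)

                  gdt_data:                            ; ring 0 data segment, base 020000h, limit 0FFFFh
                          dw  0FFFFh
                          dw  0000h
                          db  02h
                          db  10010010b                ; present, DPL=0, data, writable
                          dw  0

                  gdt_user:                            ; a ring 3 data segment with a small 4 KiB limit
                          dw  0FFFh                    ; limit 15:0 = 0FFFh -> 4 KiB
                          dw  0000h
                          db  03h                      ; base 030000h
                          db  11110010b                ; present, DPL=3, data, writable
                          dw  0
                  gdt_end:

                  gdt_ptr:
                          dw  gdt_end - gdt_start - 1  ; GDT limit
                          dd  0                        ; GDT linear base: patch in at runtime before LGDT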

                  --
                  Still always moving
    • (Score: 2) by LoRdTAW on Tuesday August 30 2016, @02:21PM

      by LoRdTAW (3755) on Tuesday August 30 2016, @02:21PM (#395284) Journal

      They were both i86 and could run the same code. The big difference was that the 8088 had only an 8-bit data bus, multiplexed on the address bus. Since it could only transfer one byte per bus cycle, it was half as fast as the 8086 with its 16-bit memory bus. This made the part cheaper to produce, but it was a big memory bottleneck. It was also I/O-signal compatible with the older 8-bit 8085.

    • (Score: 2, Informative) by jimtheowl on Tuesday August 30 2016, @02:27PM

      by jimtheowl (5929) on Tuesday August 30 2016, @02:27PM (#395290)

      The 8088 is just a cheaper model of the 8086 (8 bit external data bus instead of 16 bit), not a different design.

      The 8086 precedes it.

      They did the same thing with later chips, such as the 80386/80386SX.

  • (Score: 3, Interesting) by LoRdTAW on Tuesday August 30 2016, @01:59PM

    by LoRdTAW (3755) on Tuesday August 30 2016, @01:59PM (#395273) Journal

    I am going to assume it would work like this:
    - Hijack interrupt 09, the keyboard interrupt.
    - Dump a copy of the keystrokes into a buffer.
    - When the buffer is full, call some network code to transmit the buffer.
    - Most likely the network adapter is an NE2000. The protocol is what, IPX? Or raw Ethernet packets? (Rough sketch of the hook below.)
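
    Roughly, I'd picture the hook itself looking something like this (NASM syntax; just a guess at the shape of it, not the article's actual code, and the labels are made up):

    BUF_SIZE  equ 512

    new_int9:                          ; replacement INT 9 handler: log, then chain
            push ax
            push bx
            push ds
            push cs
            pop  ds                    ; data lives in our code segment (.COM-style TSR)

            in   al, 60h               ; grab the scancode from the keyboard controller
            mov  bx, [buf_pos]
            cmp  bx, BUF_SIZE
            jae  .full                 ; buffer full: a real logger would flag an upload here
            mov  [buffer+bx], al
            inc  word [buf_pos]
    .full:
            pop  ds
            pop  bx
            pop  ax
            jmp  far [cs:old_int9]     ; chain to the original handler, which still does
                                       ; the BIOS translation and the EOI

    old_int9:  dd 0                    ; saved vector, filled in by the install code
    buf_pos:   dw 0
    buffer:    times BUF_SIZE db 0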

    Time to nerd out a bit...
    i86 assembler in DOS, along with building an 8088 board in college, were probably the most useful courses I ever took. The board they gave us was awful though. The original was hamstrung with only a single set of 8 DIP switches and 8 LEDs for I/O. The only memory outside of the four CPU registers was a 2k EPROM. That's right, EPROM. Messed up your program because you forgot to put the fucking code at 0xFFFF0 (the reset vector, the first address the CPU looks to for instructions after a reset or cold boot)? Well then! Put that little bastard in the UV eraser. For 20 minutes. I used to smoke back then, so I'd go out for a smoke and a short walk while cursing a little. The code was assembled in DOS's debug in a VMware DOS VM. Then the binary machine code was hand-entered into a Windows-based USB EPROM burner. Fucking painful only begins to describe programming this thing. I'd say half of the class's time was wasted trying to get the device programmed.

    I both loved the board and hated it. About half way through the course, after the board was built and I started programming, I began to redesign the board in KiCad. I kept the 5x6 inch footprint and greatly improved on the peripherals, including providing a standard 16-pin header for a 20x2 LCD and an I/O expansion port. I kept the DIP switches and 8-bit bar graph LEDs, positioned the LCD right above them, and added a 40-pin header to break out the address bus, data bus, interrupt line, I/O control lines, and clock signals. I also expanded the chip select ability for the I/O ports using a demux. The best part was I added a ZIF socket and upgraded to flash memory. I can't remember, but I think a 28-pin flash chip has the same pin-out whether it's an 8, 16 or 32 kB part, so I added a jumper to select the ROM size if you wanted to go beyond the 8 kB default. This was my first PCB design attempt. Ever. Using just two sides for traces (I cheated routing by using the then-available free router). Awesome learning experience that I started on my own in parallel with the course. I even kept the BOM cost within the course's original $50 student material fee. I had a prototype board fabricated by Futurlec (slow, hard to communicate with, okay quality but CHEAP!). Turns out I made a footprint assignment mistake which halted the build, and I stopped at building the clock generator circuit. By then it was half way through the summer, after class had already ended :(

    My next idea was to build an expansion board with dual SRAM and flash. There were two SRAM sockets, each hard-wired for 8 kB SRAM chips, for 16 kB total. One of the SRAM sockets had a battery backup controller from Maxim which could be omitted. There were two jumper-switched flash sockets with a write circuit I rigged up to send 12V to the write enable pin, which put the chip into write mode. I forgot how I mapped those flash chips into the address space, as I think I was trying to keep everything in one 64 kB segment. This board I think made it to PCB layout, but I'm not sure if I finalized it.

    The I/O boards were stackable, and my next board was an I/O board that could sit atop the memory board. It contained an 8-bit R-2R DAC with a 0-5V op-amp buffered output, a 0-5/10V successive approximation ADC using the system clock, an 8-bit PWM output, and 8 opto-isolated inputs and outputs. I also thought of mixing relays with the outputs as well. I designed everything, and it was to be built using DIP chips. A more basic version of this board, with only non-isolated digital I/O and the ADC/PWM circuit, made it to PCB layout. The final I/O-packed version never made it past the paper sketches.

    My original plans were to get the department to switch to my board design, and use a new assembler tool chain along with a basic C compiler (BCC, Bruce's C compiler) with a bare-bones non-standard C library to provide basic functionality without RAM, such as on-board I/O reading/writing and an LCD printf. I had a function which could, for example, take an 8-bit number and display its binary representation on the LED bar graph. Same with reading the DIP switches. I also began writing the LCD printf, called printlcd(const char *format); not variadic of course. That was in the plan though. I even began work on the tool chain by simulating the hardware in a program called emu8086. I wrote some of the C library and simulated it with success. Had a basic read-the-dip-switches-and-write-to-the-LEDs loop and a very basic LCD print function working.

    Beyond those basic building blocks, I also planned on writing a command interpreter and adding a COM port to hook to a terminal emulator. The idea was to let students pop in a pre-flashed firmware chip, have it boot and display diagnostics on the LCD, and, if you connected it to a terminal emulator, get a basic command prompt with the ability to upload a program into RAM and execute it. The idea was to use base64 encoding, and you could load that code into the battery-backed SRAM and use the volatile SRAM for variables, buffers and such. Since we got to keep the boards, the idea was to make it more interactive and even useful to the student after class. So making it more Arduino/PLC/early-8-bit-PC like was certainly a much better idea. The course didn't have to go beyond the basic programming stuff, and the LCD along with pre-built I/O boards could be kept in the lab and given to students on an as-needed basis. The board could also have been used in the crappy embedded C++ course I took that sucked. Half of the semester was spent in front of Visual Studio writing bad C++ code (some half-assed C-with-classes approach) and the second half was spent in front of a goofy 8085 board that was programmed in C and assembler. I could have had the students sitting in front of the SAME 8088 board from their CPU class, on a Linux machine or VM, and immediately building code and seeing results. They could use prebuilt boards or bring in their own board depending on how they scheduled that course.

    Unfortunately ADD/anxiety got the better of me after the board design flaw; I stopped going to college after that semester and moved shortly after. The damn thing is still in a box somewhere. I keep telling myself I'm going to fucking finish that thing one day. The sad part is I showed the final circuit design, as well as the 3D-rendered output of the PCB, to my professor during the last week of class and he was floored. He loved the idea and how I even went as far as keeping the BOM cost just below the student fee. Said I should take it right to the head of the tech department. I never did. Story of my life.

    • (Score: 3, Informative) by NCommander on Tuesday August 30 2016, @02:31PM

      by NCommander (2) Subscriber Badge <michael@casadevall.pro> on Tuesday August 30 2016, @02:31PM (#395292) Homepage Journal

      Well, I haven't decided where I want to hook just yet. The keyboard itself is IRQ 1, which the BIOS maps to INT 9 in the IVT. If I hook INT 9, I can catch absolutely everything, but I have to deal with raw scancodes and then feed them back to the BIOS; that makes it harder to detect, but much harder to code. Hooking the BIOS keyboard services (INT 16h) instead puts me between BIOS and DOS and is saner, since I don't have to do scancode -> ASCII mapping. I also can't remember offhand if a scancode can be read twice (i.e., if I read it in the INT 9 ISR, can it still be read further down the line).

      My rough plan of attack is to allocate a small static buffer which the ISR pushes the value into and then chains, and then to hook either network I/O operations or the DOS idle interrupt. To avoid the DOS re-entrancy problem, I was going to clobber the IRET value and kick out of the ISR. The upload code would then operate as a normal DOS app: send the static buffer, call back into the TSR to say it's done, and restore the original interrupt vector. My concern is that if I try to do network I/O in an ISR, I'm going to break something.
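
      Roughly, the idle-hook half of that looks something like this in my head (NASM syntax; a sketch of the pattern with made-up labels, not the code I'll actually ship):

      ; Deferred-work hook on INT 28h, the DOS idle interrupt. The keyboard ISR only
      ; sets buf_full; the slow upload happens here, outside the keyboard ISR.

      new_int28:
              pushf
              call far [cs:old_int28]   ; run the original idle chain first
              push ax
              push ds
              push cs
              pop  ds

              cmp  byte [busy], 0       ; don't re-enter ourselves
              jne  .out
              cmp  byte [buf_full], 0   ; anything to push out?
              je   .out
              mov  byte [busy], 1
              call send_buffer
              mov  byte [buf_full], 0
              mov  byte [busy], 0
      .out:
              pop  ds
              pop  ax
              iret

      send_buffer:                      ; stub: the real thing would do the network I/O
              ret

      old_int28: dd 0                   ; saved INT 28h vector, filled in at install time
      busy:      db 0
      buf_full:  db 0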

      --
      Still always moving
      • (Score: 1) by tekk on Tuesday August 30 2016, @03:41PM

        by tekk (5704) Subscriber Badge on Tuesday August 30 2016, @03:41PM (#395306)

        My gut feeling is that you probably can't read it twice, but what do you want to bet there's some way of pushing it back on? I'm not an expert in how this works, but given that there's no memory protection, couldn't you just proxy the DOS routine too? Change that code pointer to point to you and act like the DOS routine to the outside world while doing your stuff.

        • (Score: 0) by Anonymous Coward on Tuesday August 30 2016, @10:26PM

          by Anonymous Coward on Tuesday August 30 2016, @10:26PM (#395476)

          couldn't you just proxy the DOS routine too? Change that code pointer to point to you and act like the DOS routine to the outside world while doing your stuff.

          That is the polite way to handle an interrupt when you program for DOS, but you must call the old routine only if the interrupt flask is unset on that interrupt at the time the handler is installed.

          • (Score: 2) by Post-Nihilist on Tuesday August 30 2016, @11:13PM

            by Post-Nihilist (5672) on Tuesday August 30 2016, @11:13PM (#395488)

            interrupt flask

            sure sign that I need a drink.... I meant mask

            --
            Be like us, be different, be a nihilist!!!
      • (Score: 2) by Post-Nihilist on Tuesday August 30 2016, @10:11PM

        by Post-Nihilist (5672) on Tuesday August 30 2016, @10:11PM (#395473)

        On an AT PC, the keyboard is on IRQ 1 (counting from 0), which is mapped to INT 9.
        IRQ 0 is the timer tick, fired at a steady 55 ms interval; it is mapped to INT 8.
        IRQ 2 is chained to the second PIC controller.
        INT 1 is the single-step interrupt: when the trap flag is set, it is called after each instruction executed.

        --
        Be like us, be different, be a nihilist!!!
    • (Score: 2) by tibman on Tuesday August 30 2016, @02:34PM

      by tibman (134) Subscriber Badge on Tuesday August 30 2016, @02:34PM (#395293)

      You might enjoy taking another look at it. Prices for old DIP stuff are pretty reasonable. You can buy parts in single quantities and casually work on breadboarding it out. Since you mentioned Arduino, you're probably still doing electronics? As a hobby or professionally?
      Ram example: http://www.digikey.com/product-detail/en/alliance-memory-inc/AS6C6264-55PCN/1450-1036-ND/4234595 [digikey.com]

      I'm breadboarding an MC68000 based computer. Terrible at electronics but love programming : )

      --
      SN won't survive on lurkers alone. Write comments.
      • (Score: 2) by LoRdTAW on Tuesday August 30 2016, @04:59PM

        by LoRdTAW (3755) on Tuesday August 30 2016, @04:59PM (#395344) Journal

        I'm working with this type of stuff at work. I have really taken a liking to industrial automation as I have been working with a lot of CNC stuff and PLC/PAC's. In fact, I'm rebuilding a whole CNC Laser system we got from a customer who was going to toss the whole system into the dumpster. Seriously, a whole Aerotech A3200 three axis CNC system with the XYZ stage and a 500W JK701 NdYAG laser. I am working with our machinist to make it a dedicated workstation for a customer who is sending in a massive order to weld fuel pumps. I have both learned and taught myself a whole lot working here. And in addition to lasers and CNC systems, I have also learned a lot about electron beam welders, their high voltage systems, electron gun design and even high vacuum systems.

        One of my biggest influences in my CPU board design and the multi-I/O card was a sort of amalgamation of this newfound love of from-scratch computer building, industrial automation, and the simplicity of the Arduino/AVR and other microcontrollers, as well as this guy: http://www.users.qwest.net/~kmaxon/index.html [qwest.net] (unfortunately a lot of the picture links are broken). The dude builds everything himself. He even built a small plastic injection molding machine, complete with a from-scratch M68k controller. I'll say this: as much as I love the idea of feature-packed micros, there is something absolutely satisfying about building a computer which then controls something you made. That's why my I/O board idea was based on PLC I/O and the API was to be Arduino-like. You could take that board home and make it control your furnace or blinds or whatever else you wanted. Hell, I even went as far as thinking the serial port could also do Modbus.

        I'm breadboarding an MC68000 based computer. Terrible at electronics but love programming : )

        I'm the opposite. Good with electronics, mediocre at programming. But that is simply because I don't get enough time to do any actual programming. The M68k is a CPU I never worked with but have really wanted to work with for a while.

    • (Score: 0) by Anonymous Coward on Tuesday September 20 2016, @06:41PM

      by Anonymous Coward on Tuesday September 20 2016, @06:41PM (#404402)

      God, I hate it when idiots call assembly language, "assembler." It's like calling C, "compiler."

    • (Score: 1, Funny) by Anonymous Coward on Wednesday September 21 2016, @01:38AM

      by Anonymous Coward on Wednesday September 21 2016, @01:38AM (#404614)

      "8088 in college" "redo in kicad". We had 8085's, Z80's, toggle switches, pencil/paper and were damned happy about it!

      Now, get off MY lawn.

  • (Score: 2) by fishybell on Tuesday August 30 2016, @04:24PM

    by fishybell (3156) on Tuesday August 30 2016, @04:24PM (#395325)

    Everything about the A20 line is hilarious:

    The 80286 had a bug where it failed to force the A20 line to zero in real mode. Due to this bug, the combination F800:8000 would no longer point to the physical address 0x00000000 but the correct address 0x00100000. As a result, some DOS programs would no longer work. In order to remain compatible with these programs, IBM decided to fix the problem on the motherboard.

    • (Score: 3, Interesting) by NCommander on Tuesday August 30 2016, @05:02PM

      by NCommander (2) Subscriber Badge <michael@casadevall.pro> on Tuesday August 30 2016, @05:02PM (#395345) Homepage Journal

      A20 is just a pile of fail in so many ways. If A20 had a standardized way of managing it (the KB controller method got replaced), it would be annoying but not horrid. Instead, different PCs have different ways of setting A20: some implement a BIOS call, others have a magic port I/O, and others provide the AT-compatible KB controller interface. A20 issues still manage to crop up every now and then on current hardware. I can't find the link right now, but one x86 tablet basically implemented two methods of setting A20; one *lied* and said it succeeded, causing the Linux kernel to go belly up in very interesting ways.

      The funniest/cringeworthiest of them is that A20 allowed people to defeat the original Xbox's security system. By forcing the A20 pin low, they managed to get the processor to come out of reset in unprotected flash memory, completely bypassing the secure BIOS system on the original Xbox.
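
      For reference, the three usual recipes look roughly like this (NASM syntax; these are the sequences as I remember them, so treat it as a sketch; real code tries one, verifies A20 actually changed, then falls back to the next):

      ; Three common ways to turn A20 on.

      enable_a20_bios:                 ; 1. INT 15h AX=2401h (not present on older BIOSes)
              mov  ax, 2401h
              int  15h
              ret                      ; CF set on failure

      enable_a20_fast:                 ; 2. "Fast A20" via System Control Port A (port 92h)
              in   al, 92h
              test al, 2
              jnz  .done               ; already on
              or   al, 2               ; set the A20 bit...
              and  al, 0FEh            ; ...and make very sure we don't hit the reset bit
              out  92h, al
      .done:  ret

      enable_a20_kbc:                  ; 3. the original AT method, via the 8042 keyboard controller
              call .wait_in
              mov  al, 0D1h            ; command: write output port
              out  64h, al
              call .wait_in
              mov  al, 0DFh            ; output port value with the A20 gate bit set
              out  60h, al
              call .wait_in
              ret
      .wait_in:                        ; wait until the 8042 input buffer is empty
              in   al, 64h
              test al, 2
              jnz  .wait_in
              ret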

      --
      Still always moving
  • (Score: 1) by jimtheowl on Tuesday August 30 2016, @04:30PM

    by jimtheowl (5929) on Tuesday August 30 2016, @04:30PM (#395329)

    " I decided to write a series of articles for SN in an attempt to drive more subscriptions and readers to the site"

    This article did it for me; Looking forward to part 2.

    • (Score: 2) by NCommander on Tuesday August 30 2016, @05:03PM

      by NCommander (2) Subscriber Badge <michael@casadevall.pro> on Tuesday August 30 2016, @05:03PM (#395346) Homepage Journal

      Thanks! I'll give you a shoutout when Part II (which hopefully will be this weekend) goes up.

      The next part is going to go into the basics of terminate and stay resident programs, and how to implement an interrupt hook. After that, we'll hook the keyboard, and store it in a buffer until we send it on its way.
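
      If anyone wants to read ahead, the install half of a TSR generally boils down to something like this (NASM syntax; a generic sketch of the pattern, not the actual Part II code):

      org 100h                         ; DOS .COM program; the PSP occupies offsets 0-FFh

      start:  jmp  install             ; jump over the resident part

      new_int9:                        ; placeholder resident handler: just chain for now
              jmp  far [cs:old_int9]
      old_int9:  dd 0                  ; saved original vector

      resident_end:                    ; everything below here is discarded after install

      install:
              mov  ax, 3509h           ; DOS get interrupt vector, INT 09h
              int  21h
              mov  [old_int9], bx      ; returned in ES:BX
              mov  [old_int9+2], es

              mov  dx, new_int9        ; DS:DX = our handler (DS == CS in a .COM)
              mov  ax, 2509h           ; DOS set interrupt vector, INT 09h
              int  21h

              mov  dx, resident_end    ; bytes to keep, counted from the segment base
              add  dx, 15              ; (org 100h means the PSP is already included)
              mov  cl, 4
              shr  dx, cl              ; round up to paragraphs
              mov  ax, 3100h           ; DOS terminate and stay resident, exit code 0
              int  21h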

      --
      Still always moving
  • (Score: 1) by Eristone on Tuesday August 30 2016, @05:13PM

    by Eristone (4775) on Tuesday August 30 2016, @05:13PM (#395349)

    The biggest problem you are going to run into is that it is hard to hide malware in the environment you are running in. You'd actually have to hijack something like COMMAND.COM or one of the TSRs to avoid detection from simple things like "Hmmmm... why is available memory 1K low?" (In the old QEMM support days, we'd spot viruses by that tell-tale 639k top of memory and Manifest listing stuff that shouldn't show up with a clean boot.)

    • (Score: 2) by NCommander on Tuesday August 30 2016, @05:44PM

      by NCommander (2) Subscriber Badge <michael@casadevall.pro> on Tuesday August 30 2016, @05:44PM (#395364) Homepage Journal

      Well, with real malware I'd simply include it as functionality of an existing product: here's a nifty tool that lets you copy and paste across DOS apps, so the memory drop would be expected. Assuming something more modern, circa 1992, one good candidate to hide in is the DOS monochrome video buffer at B000-B7FF, as not a lot of software actually used it. There are a few other gaps that could likely work, but I risk EMM386 clobbering me if I use them. EGA/VGA was more or less guaranteed to be there on a 386 or later, and EMM386 marks that area reserved, so the logger can hide there at the cost of breaking very old DOS software.
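
      The relocation trick itself is simple enough (NASM syntax; purely illustrative, with made-up labels, and it leans on the dangerous assumption that nothing else, EMM386 and monochrome adapters included, is touching B000:0000):

      RELOC_SEG equ 0B000h               ; monochrome text buffer, assumed unused

      install_hidden:
              push cs
              pop  ds                    ; data references below assume DS = CS

              mov  ax, 3509h             ; fetch the current INT 09h vector...
              int  21h
              mov  [old_int9], bx        ; ...and stash it inside the image we're about to copy
              mov  [old_int9+2], es

              mov  si, resident_start    ; copy the resident image to B000:0000
              mov  ax, RELOC_SEG
              mov  es, ax
              xor  di, di
              mov  cx, resident_end - resident_start
              cld
              rep  movsb

              cli                        ; patch the IVT by hand so INT 09h lands in the copy
              xor  ax, ax
              mov  ds, ax
              mov  word [9*4], new_int9 - resident_start
              mov  word [9*4+2], RELOC_SEG
              sti

              push cs
              pop  ds                    ; leave DS the way we found it
              ret

      resident_start:
      old_int9:  dd 0                    ; sits at B000:0000 once copied
      new_int9:
              ; a real handler would log the scancode here, addressing its data
              ; relative to the start of the relocated image, then fall through:
              jmp  far [cs:old_int9 - resident_start]   ; chain to the saved handler
      resident_end: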

      I'm probably not going to try to run it on the PC/AT in real life. Right now, I can't even find a network stack that could have potentially run on it. I know LAN Manager existed for the PC/AT, as it worked with OS/2 server editions, but it's completely fallen off the face of the internet. I may write something that uses the packet driver interface directly, as I can't find anything beyond TCP/IP stacks. Apparently it's possible to initialize Windows' Winsock even in real mode and get access to it; if that can actually be done, then Winsock has a NetBEUI interface I can use.

      --
      Still always moving
      • (Score: 0) by Anonymous Coward on Wednesday August 31 2016, @02:15AM

        by Anonymous Coward on Wednesday August 31 2016, @02:15AM (#395554)

        There should be tons of space to 'hide' above 640k. There are usually two sections that are reserved out: the BIOS and then video. Some cards would have their own reserved areas in there. DOS 5.0 introduced the idea of 'highmem', where you could load a TSR into the area above 640k, usually controlled through the loadhigh/devicehigh commands. If you just scanned through, found all 0s, and started from where the BIOS is, you could probably hide there. Another trick programs liked was to 'attach' themselves to other existing programs: basically find the program in memory, put yourself in the bit after the exe, then modify the memory usage tables.

  • (Score: 3, Funny) by goodie on Tuesday August 30 2016, @05:27PM

    by goodie (1877) on Tuesday August 30 2016, @05:27PM (#395358) Journal

    For example, in 1988 (the year I was born)

    Thanks for making me feel even older ;-). Fun read though!

  • (Score: 2) by iWantToKeepAnon on Tuesday August 30 2016, @05:47PM

    by iWantToKeepAnon (686) on Tuesday August 30 2016, @05:47PM (#395366) Homepage Journal

    I wrote several; they all followed the TesSeRact "API" (https://web.archive.org/web/20060903084827/http://hdebruijn.soo.dto.tudelft.nl/newpage/interupt/out-4800.htm#4712). You could discover what was running, request information (interrupts used and such), and even request unlink/unload. Very "mature" and robust it was.

    My company spent about a week running reports, queries, print outs, etc... to close each month. I wrote a TSR that captured their initial work flow and another one ran monthly that would read the screen and replay the correct key strokes. Kind of a DOS screen based expect tool; come to think of it, it was written right about the time expect was being written. Anyway with the new system the users could start the process in the afternoon and come in the next morning and it was done, spreadsheet tallies and all. And I archived electronic copies of all the dead tree reports and saved bunches of dollars and paper at the same time. Good times:)

    --
    "Happy families are all alike; every unhappy family is unhappy in its own way." -- Anna Karenina by Leo Tolstoy
    • (Score: 2) by iWantToKeepAnon on Tuesday August 30 2016, @05:50PM

      by iWantToKeepAnon (686) on Tuesday August 30 2016, @05:50PM (#395368) Homepage Journal

      Oh, I forgot: one of the things you could query was the hotkey being used, so you knew whether you could use it without interfering with other TSRs.

      Here's a note from the archived page:

      Notes: Borland's THELP.COM popup help system for Turbo Pascal and Turbo C (versions 1.x and 2.x only) fully supports the TesSeRact API, as do the SWAP?? programs by Innovative Data Concepts.. AVATAR.SYS supports functions 00h and 01h (only the first three fields of the user parameter block) using the name "AVATAR "

      The good old days ...

      --
      "Happy families are all alike; every unhappy family is unhappy in its own way." -- Anna Karenina by Leo Tolstoy
    • (Score: 2) by NCommander on Tuesday August 30 2016, @06:07PM

      by NCommander (2) Subscriber Badge <michael@casadevall.pro> on Tuesday August 30 2016, @06:07PM (#395377) Homepage Journal

      TSRs let you do ungodly things if you knew what you were doing. My dad used to use Reference Manager for DOS, which would hook into WordPerfect: hit SysRq and it would pop up a UI to put references right in the application. Refman for Windows never worked anywhere near as well.

      WordPerfect for DOS was probably one of the best editors I've ever used. Still haven't found anything I liked as much.

      --
      Still always moving
      • (Score: 2) by iWantToKeepAnon on Thursday September 01 2016, @05:09PM

        by iWantToKeepAnon (686) on Thursday September 01 2016, @05:09PM (#396265) Homepage Journal
        Since you seem to be in the mood for nostalgia, have you read this: http://www.wordplace.com/ap/ [wordplace.com]
        --
        "Happy families are all alike; every unhappy family is unhappy in its own way." -- Anna Karenina by Leo Tolstoy
        • (Score: 2) by Reziac on Friday September 16 2016, @04:47AM

          by Reziac (2489) on Friday September 16 2016, @04:47AM (#402622) Homepage

          Thanks for the link. I'd forgotten it was online. I have a hardcopy in one of these library boxes -- I sorta collect WordPerfect stuff.

          --
          And there is no Alkibiades to come back and save us from ourselves.
      • (Score: 2) by Reziac on Friday September 16 2016, @05:12AM

        by Reziac (2489) on Friday September 16 2016, @05:12AM (#402628) Homepage

        You're the same age as my trusty and beloved 286. Which I still have. :)

        I was still using WPDOS 5.1+ for everyday work up until about 5 years ago. RoughDraft finally weaned me off it in WinXP (better multifile search function, better HTML export, tho with the usual limitations of RTF), but I still use WPDOS for some work, and collect old versions and books and stuff (at least when it's cheap enough). There's nothing else quite like it for competent, compact, efficient, and as close to bug-free as complex software gets. You know it was written in assembly? Unfortunately I've been told by a Corel rep that the source code has been lost. :(

        --
        And there is no Alkibiades to come back and save us from ourselves.
        • (Score: 2) by NCommander on Friday September 16 2016, @05:39AM

          by NCommander (2) Subscriber Badge <michael@casadevall.pro> on Friday September 16 2016, @05:39AM (#402632) Homepage Journal

          My very first computer was a hand-me-down AST Premium Exec 386 laptop with 4 MiB of RAM, DOS 6.0, and Windows 3.1. The apps I primarily used on that thing were Windows 3.1, ProComm Plus, and WordPerfect.

          I primarily used WordPerfect 6.x at the time, as it had a WYSIWYG mode and better printer compatibility. The damn thing weighed 12 pounds but worked really, really well for editing and such. I don't think I could use WP 5.1 or similar these days due to the low resolution; until I switched to using Linux primarily, I used WordPerfect for Windows, which, while not "great", at least had the all-important and magical reveal codes feature.

          I just finished coding up the last section of the example code and have most of the article written (scheduled to go out on Monday; I try not to do major things on Friday or the weekend). I ran into a compatibility quirk on FreeDOS that I had to patch around: DOSBox sets a default handler for all interrupts, FreeDOS doesn't, so trying to chain blindly causes a GPF. (I don't have a copy of MS-DOS or DR-DOS to confirm their behavior.) If you want to grab the current TSR code and run it to see what actual period-specific DOS does, I'll throw a credit to you in the next article.
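
          The workaround itself is tiny, something like this (NASM syntax; this shows the quirk rather than being the exact patch):

          ; Before chaining, make sure the saved vector actually points somewhere.
          ; FreeDOS leaves unused IVT entries as 0000:0000, so chaining blindly
          ; means jumping into the start of the interrupt vector table itself.

          check_and_chain:
                  cmp  word [cs:old_vec+2], 0   ; segment part zero?
                  jne  .chain
                  cmp  word [cs:old_vec], 0     ; offset part zero too?
                  jne  .chain
                  iret                          ; nobody downstream: just return
          .chain:
                  jmp  far [cs:old_vec]         ; hand off to the previous handler

          old_vec:  dd 0                        ; filled in at install time (INT 21h AH=35h)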

          --
          Still always moving
          • (Score: 2) by Reziac on Friday September 16 2016, @07:01PM

            by Reziac (2489) on Friday September 16 2016, @07:01PM (#402903) Homepage

            I don't offhand see the link to the code (I assume it's compiled; I don't have any tools handy and hardly any knowledge of how to use 'em anyway), and I'd have to find the 286 an ISA vidcard before it could be used (assuming there's even two bytes of RAM not already in use; I had it really packed solid with TSRs). It has MSDOS 6.00, but come to think of it, the 386 laptop is here somewhere (I swear I've seen it since I moved) and it has MSDOS 5 or 6. My everyday DOS machine is MSDOS 7, tho the only real difference is that it groks FAT32. I've never seen it behave any different otherwise, and all the external utils are interchangeable.

            People like to bitch about DOS, but that 286 had multiple uptimes of ~2 years in heavy use, and only ever got restarted (once I'd ID'd and locked out the bad memory chip) when that old MFM HD needed a fresh low-level format. Tho after I added a fan blowing directly on the HD, that problem went away, which oughta tell us something.

            --
            And there is no Alkibiades to come back and save us from ourselves.
            • (Score: 2) by NCommander on Saturday September 17 2016, @05:29AM

              by NCommander (2) Subscriber Badge <michael@casadevall.pro> on Saturday September 17 2016, @05:29AM (#403034) Homepage Journal

              The code and binaries are linked from my journal for the next article (which is set to go live on Monday: https://soylentnews.org/comments.pl?sid=15551). [soylentnews.org]

              Testing on real DOS would be rather nifty. I also wouldn't mind seeing if it works on DR-DOS or such. As written, the code should work on anything DOS 2.x or above as of right now.

              --
              Still always moving
              • (Score: 2) by Reziac on Saturday September 17 2016, @06:30AM

                by Reziac (2489) on Saturday September 17 2016, @06:30AM (#403045) Homepage

                DRDOS can be persnickety, especially its protected-mode memory manager. Had to beat it with sticks to get DOOM to run, back in the day. On my old Win31 and 9x boxen I used a sort of MSDOS/DRDOS hybrid with parts of both. Eventually gave up on DRDOS as it's both slower (by about 10%, IIRC) and compared to MSDOS, rather buggy. I'd have to root through the pile of boot disks to see if I've got DRDOS in there. Old versions of Partition Magic came with a DRDOS boot disk.

                I probably have an MSDOS 3.2 boot disk somewhere, gods know which box.

                I preferred MSDOS 6.00 (and its younger sibling 7.x) to 6.2x, which had a couple bugs 6.0/7.x lack (IIRC in 6.22 you could hang FORMAT with one of the abort options).

                --
                And there is no Alkibiades to come back and save us from ourselves.
                • (Score: 2) by NCommander on Saturday September 17 2016, @07:15AM

                  by NCommander (2) Subscriber Badge <michael@casadevall.pro> on Saturday September 17 2016, @07:15AM (#403054) Homepage Journal

                  What I'm mostly trying to figure out is whether MS-DOS installs a default handler for all interrupts. Basically, calling an interrupt blindly on FreeDOS will cause the processor to fault, because the IVT holds 0000:0000 as the default entry if there's no handler sitting on it. DOSBox, on the other hand, installs a default handler at F000:xxxx, up where the BIOS lives, which appears to just do an iret.

                  Protected mode in general is a bitch on DOS since you have to do a lot of setup to switch to and from. From real mode, to enter protected mode, you have to:

                    * Disable interrupts
                    * Setup the GDT
                    * Setup a 32-bit IDT
                    * Configure segments/paging
                    * Throw the protected bit
                      * Far jump into protected-mode code (to reload CS)
                    * Re-enable interrupts
                    * Thunk interrupts from protected mode to real mode.
                        * If necessary to call a real mode interrupt (i.e. BIOS), you have to leave protected mode back to real mode, then do the above all over again.
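
                  A bare-bones sketch of that sequence on a 386 (NASM syntax; it patches the descriptor bases at runtime, skips the IDT/paging/thunk steps entirely, and just parks once it's in protected mode, because those skipped steps are where the real pain lives):

                    org 100h                 ; DOS .COM sketch: needs a 386+, plain real mode (no EMM386/V86)

                    start:
                      cli                    ; interrupts off while the world changes underneath us

                      xor  eax, eax          ; linear base of this segment = CS * 16
                      mov  ax, cs
                      shl  eax, 4
                      mov  [gdt_code+2], ax  ; patch base 15:0 into the code and data descriptors
                      mov  [gdt_data+2], ax
                      ror  eax, 16
                      mov  [gdt_code+4], al  ; patch base 23:16
                      mov  [gdt_data+4], al
                      ror  eax, 16           ; EAX = linear base again

                      add  eax, gdt_start
                      mov  [gdt_ptr+2], eax  ; the GDT pointer wants the GDT's *linear* address
                      lgdt [gdt_ptr]

                      mov  eax, cr0
                      or   al, 1             ; set the PE bit (a 286 uses LMSW here, and it's a one-way trip)
                      mov  cr0, eax

                      jmp  CODE_SEL:pm_entry ; far jump reloads CS with a protected-mode selector

                    pm_entry:                ; still 16-bit code: the descriptors below have the D bit clear
                      mov  ax, DATA_SEL
                      mov  ds, ax            ; load the remaining segment registers with selectors
                      mov  es, ax
                      mov  ss, ax
                      ; a real program would now load an IDT, maybe set up paging, and STI;
                      ; getting back out (or at the BIOS) is where the real pain starts, so we just park.
                      jmp  $

                    align 8
                    gdt_start:
                      dq  0                  ; mandatory null descriptor
                    gdt_code:
                      dw  0FFFFh, 0          ; 64 KiB limit, base patched in at runtime
                      db  0, 9Ah, 0, 0       ; present, ring 0, 16-bit code, readable
                    gdt_data:
                      dw  0FFFFh, 0
                      db  0, 92h, 0, 0       ; present, ring 0, 16-bit data, writable
                    gdt_end:

                    gdt_ptr:
                      dw  gdt_end - gdt_start - 1  ; GDT limit
                      dd  0                        ; GDT linear base, patched in at runtime

                    CODE_SEL equ gdt_code - gdt_start   ; 08h
                    DATA_SEL equ gdt_data - gdt_start   ; 10h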

                  It's not very surprising that an entire cottage industry sprouted up making DOS extenders to handle all this insanity. On the 80286 it was worse, since you had to fault or reset the processor to kick back into real mode; protected mode was believed to be the future, and backwards compatibility wasn't really considered necessary for protected-mode OSes (the expectation was that OS/2 would replace DOS entirely).

                  Unreal mode primarily got used since you got the advantages of more memory space (you could put the segment registers above 1 MiB via LOADALL or a quick protected/real mode switch) and none of the disadvantages. Generally you could get code to fit within 640 KiB (even today that's generally true) without too much work. You could do unreal huge mode to move CS high too, though that had its own set of pain to deal with; I can't even find an example that does so under DOS.

                  --
                  Still always moving
  • (Score: 2) by shortscreen on Tuesday August 30 2016, @09:21PM

    by shortscreen (2252) on Tuesday August 30 2016, @09:21PM (#395443) Journal

    Good article, but I noticed you specified '87 as the launch of the AT while according to wiki it should be '84.

    I have an AT motherboard which appears to be functional (shows text on the screen at least, but I didn't try to find a 5.25" floppy to boot it from). I guess it is one of the older versions which has a 6MHz 286 and 32x 64Kb DRAMs on the board with another 32x piggy-backing on top of them for a grand total of 512KB.

    • (Score: 0) by Anonymous Coward on Tuesday August 30 2016, @11:25PM

      by Anonymous Coward on Tuesday August 30 2016, @11:25PM (#395493)

      If it helps any, I still have a machine that reads 5.25 floppies. I also have a DOS6.22/Win3.1 virtual machine that runs under VMWare. The virtual machine also has TurboC and Turbo Pascal.

    • (Score: 2) by NCommander on Tuesday August 30 2016, @11:47PM

      by NCommander (2) Subscriber Badge <michael@casadevall.pro> on Tuesday August 30 2016, @11:47PM (#395503) Homepage Journal

      Fixing. Thanks :)

      --
      Still always moving
    • (Score: 1, Informative) by Anonymous Coward on Wednesday August 31 2016, @12:11AM

      by Anonymous Coward on Wednesday August 31 2016, @12:11AM (#395515)

      There are SD-to-3.5"/5.25" floppy emulators available that connect to a regular pinned floppy connector. If you get one of the edge-to-pin adapters (like my 386 came with!) you could connect that to your AT and run it with a virtual floppy disk. Those adapters run somewhere between 60 and 120 dollars on eBay. I am not sure if they can still be found via a more reputable dealer.

      Additionally there is a 50-pin SCSI to SD adapter available for ~65 bucks for running your legacy systems that contained a SCSI adapter. I think it has a couple of jumpers for setting different range caps for what your system can actually address as well.

      There is also a raspberry pi to commodore 64 disk controller project that can provide C64 disk emulation off a linux filesystem, or usb to joystick port emulation for the c64, or most other legacy joystick ports of that era (atari, apple, and possibly a few others.)

      Don't have links for these handy but they should be pretty straightforward to google.

      For anyone needing floppy disk recovery, there are at least two or three controller boards and disk imaging programs out there. The only open hardware/open source one I know of is the DiskFerret or something, but the guy who produced it ran out of money and it has been essentially dead for 2-3 years. He apparently had at least 10 bare boards, without the money to populate them, in case anybody wants to buy one or maybe push for a Kickstarter to get legacy disk-imaging hardware into everyone's hands. The alternative is Kryo-something or other, but the software is proprietary, the hardware costs ~100-150 USD, and while the licensing for the imaging software is free for private use, commercial/archival use is on an 'ask us for a quote' basis.

      Hope that helps. All the legacy info I have for now.

  • (Score: 2) by Fnord666 on Wednesday August 31 2016, @12:35AM

    by Fnord666 (652) on Wednesday August 31 2016, @12:35AM (#395522) Homepage
    Who here remembers low level formatting a hard drive by using debug to execute the routine on the drive controller?
    • (Score: 0) by Anonymous Coward on Wednesday August 31 2016, @12:28PM

      by Anonymous Coward on Wednesday August 31 2016, @12:28PM (#395652)

      I remember using a debug routine a couple of times to clear a HDD after installing Red Hat on the same disk as Windows 95 or 98.

      Eventually I got bored with Red Hat, but the Windows fdisk wouldn't recognize the Linux partition and so wouldn't delete it.

      With some internet searching (probably AltaVista or Excite at the time), it was debug to the rescue!

      In a text file:

      a 100
      int 13

      rax
      0301
      rbx
      0200
      f 200 l 200 0
      rcx
      0001
      rdx
      0080
      p
      q

      then run:

      debug <file.txt

    • (Score: 2) by pkrasimirov on Wednesday September 21 2016, @11:11AM

      by pkrasimirov (3358) Subscriber Badge on Wednesday September 21 2016, @11:11AM (#404739)

      g c800:5

  • (Score: 2) by MichaelDavidCrawford on Wednesday August 31 2016, @12:35AM

    by MichaelDavidCrawford (2339) Subscriber Badge <mdcrawford@gmail.com> on Wednesday August 31 2016, @12:35AM (#395523) Homepage Journal

    It originally ran on System 6.5.

    It wasn't meant to steal passwords. To prevent that it made its presence quite obvious and could be easily disabled.

    Rather it was meant to save valuable text in the event of power failure, crash or closing a document without saving. I got lots of fan mail due to all the Next Great American Novels I saved.

    --
    Yes I Have No Bananas. [gofundme.com]
  • (Score: 0) by Anonymous Coward on Tuesday September 20 2016, @06:36PM

    by Anonymous Coward on Tuesday September 20 2016, @06:36PM (#404398)

    The AT the "common ancestor," really? I realize you're too old (born in 1988, HAHAHAHA), but the AT was predated by the XT, the original IBM PC (5150), and a whole slew of microcomputers. To call the AT the common ancestor is just retarded.