posted by janrinok on Monday January 26 2015, @10:04PM   Printer-friendly
from the when-I-was-a-boy..... dept.

Spotted over at Hacker News is a link to a detailed description of the build steps for a homebrew 8-bit (6502-based) home computer.

The pages, by Dirk Grappendorf, give stage-by-stage descriptions of the implementation, including the hardware schematics, assembler listings, and a prototype built up on a breadboard.

This is a description of my attempt to build a simple microcomputer system with an 8-bit MOS 6502 CPU that was used in many popular home computers of the 1970s and 1980s like the Commodore 64 or the Apple II. This project was started in September 2014 and finished in January 2015. Above you can see an image of the final product. This is no in-depth tutorial on how to build a 6502 based computer system. It is more like a developer diary, which describes the evolution of the system design over time from the first simple support circuits to the complete product.

Original Hacker News discussion thread.

  • (Score: 2) by VLM on Tuesday January 27 2015, @02:40PM

    by VLM (445) on Tuesday January 27 2015, @02:40PM (#138547)

    I can't speak for the OP, and he's probably talking about a specific API on a specific machine/OS. I do remember 6502 assembly sucked because it was the most non-orthogonal architecture I ever attempted to use. From memory, I pretty much said F this when I got sick of workarounds like there being no way to push/pop the index registers on the stack. So you'd play elaborate transfer games: transfer X into A, push A onto the stack, and so on. It was not like a PDP-11 or Z80 or the Motorola chips (6809, 68HC11, 68000, etc.). Those were "real" chips where you just did what you wanted instead of playing a game of working around Turing-tarpit BS limitations. The 6502 pretty much sucked. It was used in machines with a decent community, which made it not so bad, but for its era its architecture was a total WTF.

    I believe culturally there's some carry-over from the 6502 coders in the 80s into Turing-tarpit esoteric languages. The 6502 was kinda the BF or INTERCAL of the assembly-language world. Although I could see people who got used to the 6502's peculiarities thinking the rest of the world is kinda weird, all Oliver Twist at the idea of "please sir, may I push X onto the stack without a beating?"
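    The workaround described above can be sketched in a few lines of 6502 assembly (a minimal illustration, not code from the article). On the original NMOS 6502 everything has to go through the accumulator, which also means A itself gets clobbered in the process:

    ```asm
    ; Saving the X and Y index registers on an NMOS 6502,
    ; which has no PHX/PHY: route everything through A.
            TXA             ; copy X into the accumulator
            PHA             ; push A (i.e., the old X)
            TYA             ; copy Y into the accumulator
            PHA             ; push A (i.e., the old Y)
            ; ... X and Y may be clobbered freely here ...
            PLA             ; pull the saved Y back into A
            TAY             ; restore Y
            PLA             ; pull the saved X back into A
            TAX             ; restore X
    ```

    Note that if the caller's A also needed preserving, that takes yet another push before the TXA, which is exactly the kind of juggling being complained about.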

  • (Score: 0) by Anonymous Coward on Tuesday January 27 2015, @03:18PM

    by Anonymous Coward on Tuesday January 27 2015, @03:18PM (#138560)

    The 6502 was a *cheap, cheap, cheap* processor, especially when it first came out. In order to hit that price point, the chip had to be kept simple, i.e., fewer transistors, and therefore a cut-down instruction set. I think it was pretty clean for what it aimed to be; it was just a minimal chip. I agree that the Z80 was a nice chip, though. I had a Commodore 128 (C-128), which was a step up from the C-64, with double the memory, double the number of characters on screen (80 columns!), and double the number of CPUs: it had a 6502 derivative (the 8502) *and* a Z80. The 6502 derivative let you run Commodore software, and the Z80 was a sort of hedge in that you could run CP/M software with it. CP/M was already waning as a platform by the time the C-128 came out. CP/M was a really boring OS, with no graphics, sound, or anything else--just text. The only people with any interest in running it would be people with legacy CP/M software for their business, like some accounting app. Tim Paterson wrote a CP/M work-alike for the 8086 and named it QDOS (Quick and Dirty Operating System); Microsoft bought it and licensed it to IBM for the first IBM PC, where it became PC DOS and later MS-DOS. The rest is history.

    P.S. I must say I preferred the memory-mapped I/O approach of the 6502 to using the separate IN/OUT I/O instructions, as was done on the Z80. Just seems simpler to program.
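    The contrast in question, sketched side by side (the addresses are illustrative: $D020 is the C-64's VIC-II border-color register, and the Z80 port number is made up):

    ```asm
    ; 6502: I/O registers live in the memory map, so ordinary
    ; load/store instructions talk directly to hardware.
            LDA #$00
            STA $D020       ; write 0 to the border-color register (C-64)

    ; Z80: a separate I/O address space, reached only
    ; through the dedicated IN/OUT instructions.
            LD  A, $00
            OUT ($FE), A    ; write 0 to port $FE (port number illustrative)
    ```

    With memory-mapped I/O every addressing mode and read-modify-write instruction works on device registers too, which is part of why it "just seems simpler."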

    • (Score: 2) by emg on Tuesday January 27 2015, @04:35PM

      by emg (3464) on Tuesday January 27 2015, @04:35PM (#138572)

      The 6502 was kind of the RISC chip of its day: the instructions didn't do much, but they executed quickly. Z80 CISC instructions could do more, and the chips ran at a higher clock speed, but they took far more clocks to do anything. So it pretty much evened out between the two, and well-designed 6502 code could do many things faster.

      It's no surprise that the 6502-based Acorn machines led to ARM.

  • (Score: 2) by dry on Wednesday January 28 2015, @01:47AM

    by dry (223) on Wednesday January 28 2015, @01:47AM (#138705) Journal

    That was one of the things that made the 65C02 such an improvement: additional instructions such as PHX and PHY. The 65816 was even nicer, letting you put the stack and the zero page (now called the direct page) anywhere in the first 64 KB, including overlapping them, and adding push instructions like PHD for the direct-page register.
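    For comparison with the NMOS juggling upthread, here is the same register save on a 65C02, where the workaround simply disappears (a sketch, not from the article):

    ```asm
    ; 65C02: X and Y can be pushed and pulled directly,
    ; leaving the accumulator untouched.
            PHX             ; push X
            PHY             ; push Y
            ; ... X and Y may be clobbered freely here ...
            PLY             ; restore Y
            PLX             ; restore X
    ```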

  • (Score: 2) by JoeMerchant on Saturday January 31 2015, @02:52PM

    by JoeMerchant (3937) on Saturday January 31 2015, @02:52PM (#139821)

    The 6502 was conceived to run toasters, small appliances, maybe a washing machine.

    Wrapping keyboard, video, and mass-storage interfaces around it was a cruel and unusual extension of its functionality - one that it stepped up to quite admirably, but still, the original design was not intended to carry the load that it did. It (and similar small chips repurposed for large jobs) was a front-runner for the RISC philosophy that came around a few years later. RISC faded not long after, at least compared to the nuclear inferno that was Wintel.

    Personally, I'd like to see 6502-like cores respun in 14nm 3GHz glory - how many thousands could you fit (including 8 bit data interconnect matrices) on the die space of a single Haswell?

    • (Score: 2) by VLM on Sunday February 01 2015, @12:48PM

      by VLM (445) on Sunday February 01 2015, @12:48PM (#140029)

      The 6502 was conceived to run toasters, small appliances, maybe a washing machine.

      Yeah, but that's a marketing-level view. From an engineering level, if the arch or implementation is weird, writing code is not any easier or more fun just because some marketing dude said they'd try to sell it to toaster makers. In some ways it sucks worse, because now code failures mean product recalls and dead people and lawsuits, instead of desktop software where, with a wink and a nod, everyone expects it to fail regularly. Also, if you try to do anything pushing the computational limits, like DSP code, you want a nice architecture in embedded.

      The RISC "failure" was more of a merger anyway: let's run x86 on RISC-like microcode, sorta. The flawed assumption behind RISC was that memory bandwidth would keep increasing while CISC processing would slow down with complexity - or more precisely, that the ratio of the two would take off - and neither really happened, so the ratio didn't take off and RISC turned all "whatever." This would also be the problem with your 14nm or FPGA design. It's not hard to fit lots of Z80s in a medium-sized FPGA; giving them enough memory or communications bandwidth to actually do anything off chip, that's the huge challenge. I bet you could run NOPs at 3 GHz on tens, maybe hundreds, of cores on a large FPGA, but good luck accessing an external bus to actually do anything, especially at 3 GHz.