
SoylentNews is people

posted by FatPhil on Thursday June 22 2017, @01:33PM   Printer-friendly
from the Pi-in-the-sky dept.

Submitted via IRC for TheMightyBuzzard

The results are in: The Raspberry Pi 3 is the most desired maker SBC by a 4-to-1 margin. In other trends: x86 SBCs and Linux/Arduino hybrids get a boost.

More than ever, it's a Raspberry Pi world, and other Linux hacker boards are just living in it. Our 2017 hacker board survey gives the Raspberry Pi 3 a total of 2,583 votes — four times the number of the second-ranked board, the Raspberry Pi Zero W.

[...] Note that by "votes" we are referring to Borda rankings that combine 1st, 2nd, and 3rd choice rankings [...]
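The summary does not give the exact point weights the survey used, but a common Borda-style scheme awards 3 points for a 1st choice, 2 for a 2nd, and 1 for a 3rd. A minimal sketch of such a tally, under that assumed weighting (the ballots below are made up for illustration, not survey data):

```python
# Borda-style tally as described above. The 3/2/1 weighting is an
# assumption; the survey's actual weights are not stated here.
from collections import Counter

WEIGHTS = {1: 3, 2: 2, 3: 1}

def borda_tally(ballots):
    """ballots: iterable of (board, rank) pairs, rank in 1..3."""
    totals = Counter()
    for board, rank in ballots:
        totals[board] += WEIGHTS[rank]
    return totals

# Illustrative ballots only:
ballots = [("RPi 3", 1), ("RPi Zero W", 2), ("RPi 3", 1), ("ODROID", 3)]
print(borda_tally(ballots).most_common())
```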

So, which if any credit-card-sized computers are you lot playing around with?

Source: http://linuxgizmos.com/2017-hacker-board-survey-raspberry-pi-still-rules-but-x86-sbcs-make-gains/


Original Submission

 
  • (Score: 2) by cafebabe on Friday June 23 2017, @04:28PM (2 children)

    by cafebabe (894) on Friday June 23 2017, @04:28PM (#530067) Journal

    An ARM micro-controller which only executes 16 bit Thumb instructions is a relatively good choice in typical circumstances. However, there are conditions under which ARM should be avoided. What follows is mostly a generic consideration of 8 bit, 16 bit, 32 bit, larger and miscellaneous instruction sets.

    An 8 bit instruction set may be close to optimal code density. In a general purpose computer, this minimises the RAM (or virtual memory) required for a program. However, in a general purpose computer, an 8 bit instruction set provides the least defense against executable code passing through ASCII filters or suchlike. In a device with finite ROM, an 8 bit instruction set provides maximum functionality; a 16 bit instruction set in the same size ROM may attain slightly more than 90% of the code density. Two address instructions, such as MOV, are generally of similar size, but 16 bit loop and jump instructions may be larger.
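The fixed-ROM trade-off above amounts to simple arithmetic: at ~90% relative code density, the same ROM holds ~90% of the functionality. A back-of-envelope sketch (the 90% figure is the one quoted above; the ROM size is an arbitrary example):

```python
# Rough arithmetic for the fixed-ROM argument: relative code density
# scales how much "program content" fits in a given ROM.
def functionality_ratio(rom_bytes, relative_density):
    # Effective program content that fits in the ROM, in baseline-ISA bytes.
    return rom_bytes * relative_density

rom = 64 * 1024
print(functionality_ratio(rom, 1.0))  # 8 bit ISA baseline
print(functionality_ratio(rom, 0.9))  # 16 bit ISA at ~90% density
```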

    GCC uses a legacy algorithm for register allocation which is a middling choice suitable for a desktop environment. With -fira-region=one, GCC uses all available registers and correctly counts the cost of instruction prefixes when accessing extended registers. [soylentnews.org]

    If your code has an unusual quantity of case statements or subroutine calls, an 8 bit instruction set may be the best choice. If your code has sections of heavy computation, a micro-controller which can switch between 16 bit instructions and 32 bit instructions may be the best choice. In typical cases, a 16 bit instruction set is good enough for new projects but may be problematic for legacy projects which already target an 8 bit instruction set.

    --
    1702845791×2
  • (Score: 2) by kaszz on Friday June 23 2017, @04:51PM (1 child)

    by kaszz (4211) on Friday June 23 2017, @04:51PM (#530077) Journal

    I'm just thinking that when the minimum clock frequency for ARM is 48 MHz and memory sizes are 128 kB or more, will it matter much that code density goes down? I.e., any advantages are nullified by the sheer capacity in speed and memory.

    • (Score: 2) by cafebabe on Friday June 23 2017, @06:11PM

      by cafebabe (894) on Friday June 23 2017, @06:11PM (#530128) Journal

      For a micro-controller with 4KB of ROM, code density is very important. For a micro-controller with 64KB ROM, it is probably acceptable to be within 10% of optimal code density. However, as the size of the program increases, processor complexity acts as a magnifier for size and bandwidth. Where energy consumption is of minimal concern, there comes a point where all the crazy 100 million transistor stuff becomes worthwhile. Stuff like speculative execution, object-oriented branch prediction, 144 virtual registers and the associated out-of-order reconciliation hardware. For example, a 4GHz Xeon core with a peak instruction decode of 5 bytes per cycle has a peak execution rate of 20GB/s per core - and many parties with high density legacy code want that pushed further. Back in the land of sanity, PIC, AVR, SuperH, MIPS and ARM are all useful.
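The decode-bandwidth figure quoted above is straightforward to check: a 4 GHz core decoding a peak of 5 instruction bytes per cycle works out to 20 GB/s per core.

```python
# Sanity check of the decode-bandwidth figure quoted above.
clock_hz = 4_000_000_000   # 4 GHz
bytes_per_cycle = 5        # peak instruction bytes decoded per cycle
peak_bytes_per_s = clock_hz * bytes_per_cycle
print(peak_bytes_per_s / 1e9, "GB/s")  # → 20.0 GB/s
```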

      --
      1702845791×2