
posted by LaminatorX on Saturday August 30 2014, @01:24PM   Printer-friendly
from the pocket-full-o-bits dept.

Apple stole a march on Android when it released the iPhone 5S with a 64-bit processor, and Android manufacturers have put the pedal to the metal in a race to catch up and make their products 64-bit. AnandTech reports that HTC has announced the Desire 510, its first 64-bit Android phone.

AnandTech describes the device in more detail:

While normally one might expect high end phones to get the latest and greatest features first, this time we see a bit of a surprising reversal. The Desire 510 is HTC's first 64-bit phone, and the first announced device with Snapdragon 410. For those that aren't familiar with Snapdragon 410, it has four Cortex A53 CPU cores running at 1.2 GHz, along with an Adreno 306 GPU which suggests that it is a mild modification of the current Adreno 305 GPU that we see in the Snapdragon 400. Overall, this should make for a quite fast SoC compared to Snapdragon 400, as Anand has covered in the Snapdragon 410 launch announcement.

While it may seem strange that ARMv8 on Android phones is first to appear on a budget smartphone, it's quite easy to understand how this happened. Looking at Qualcomm's roadmap, the Snapdragon 810/MSM8994 is the first high-end SoC that will ship with ARMv8, and is built on a 20nm process. As 20nm from both Samsung and TSMC have just begun appearing in shipping chips, the process yield and production capacity isn't nearly as mature as 28nm LP, which is old news by now.

Other details include:

  • SoC: MSM8916 1.2 GHz Snapdragon 410
  • RAM/NAND: 1GB RAM, 8GB NAND + microSD
  • Display: 4.7” FWVGA (854x480)
  • Network: 2G / 3G / 4G LTE (Qualcomm MDM9x25 UE Category 4 LTE)
  • Dimensions: 139.9 x 69.8 x 9.99mm
  • Weight: 158 grams
  • Camera: 5MP rear camera, 0.3MP/VGA FFC
  • Battery: 2100 mAh (7.98 Whr)
  • OS: Android 4.4 with Sense 6
  • Connectivity: 802.11b/g/n + BT 4.0, USB2.0, GPS/GNSS, DLNA
  • SIM Size: MicroSIM
 
  • (Score: 2) by HiThere (866) Subscriber Badge on Saturday August 30 2014, @07:50PM (#87644) Journal

    There actually have been designs for 128-bit CPUs...but none based around the x86 opcodes. There are even decent arguments as to why they are desirable. IIRC, some of them have partitioned instruction sets that allow some selections of opcodes to be packed at more than one to a word, and others that handle indirect access to multiple RAM locations in the same instruction. (With modern RAM sizes, that might require more than 128 bits of instruction, but back in the day when this was being proposed, it could have worked well...possibly did work well.)
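
    As a rough illustration of that packing idea, here is a minimal sketch in C of unpacking two 16-bit operations from one wider instruction word. The field layout (4-bit opcode, 4-bit register field, 8-bit immediate) is entirely made up for the example; real packed or VLIW encodings differ.

        #include <stdint.h>
        #include <stdio.h>

        /* Hypothetical layout: one 32-bit word carries two packed 16-bit ops,
         * each with a 4-bit opcode, a 4-bit register field, and an 8-bit
         * immediate. This only demonstrates the unpacking step. */
        typedef struct {
            unsigned opcode;   /* bits 15..12 */
            unsigned reg;      /* bits 11..8  */
            unsigned imm;      /* bits 7..0   */
        } PackedOp;

        static PackedOp decode_half(uint16_t half) {
            PackedOp op;
            op.opcode = (half >> 12) & 0xF;
            op.reg    = (half >> 8)  & 0xF;
            op.imm    =  half        & 0xFF;
            return op;
        }

        int main(void) {
            uint32_t word = 0x1A052B07;   /* two packed ops in one word */
            PackedOp first  = decode_half((uint16_t)(word >> 16));
            PackedOp second = decode_half((uint16_t)(word & 0xFFFF));
            printf("op=%u reg=%u imm=%u\n", first.opcode, first.reg, first.imm);
            printf("op=%u reg=%u imm=%u\n", second.opcode, second.reg, second.imm);
            return 0;
        }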

    The thing about modern 64-bit processors is that they are still being designed with an x86-style opcode pattern. I don't really know whether that's good or bad, but it makes converting between generations easier.

    --
    Javascript is what you use to allow unknown third parties to run software you have no idea about on your computer.
  • (Score: 2) by BasilBrush (3994) on Sunday August 31 2014, @08:56PM (#87924)

    The thing about modern 64-bit processors is that they are still being designed with an x86-style opcode pattern.

    Absolutely not. This is a 64-bit ARM chip. It has nothing to do with x86. ARM is a RISC design, as opposed to x86's CISC design, and is thus completely different by design.

    Of course, more modern x86 chips are RISC cores running microcode to emulate the x86 instruction set. That still makes them CISC, but in a different, and still un-ARM-like, way.
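
    To make the RISC/CISC contrast concrete, here is a toy sketch in C: on a load/store (RISC-style) machine, arithmetic only touches registers, so adding two memory words takes explicit load, add, and store steps, while a CISC-style instruction may reference memory directly. Both "machines" are invented for illustration; real ARM and x86 encodings are far more involved.

        #include <stdio.h>

        /* Toy memory and register file, invented for illustration only. */
        static int mem[16];
        static int reg[4];

        /* RISC style: separate load, add, and store instructions. */
        static void risc_add(int dst, int src1, int src2) {
            reg[0] = mem[src1];        /* LDR r0, [src1] */
            reg[1] = mem[src2];        /* LDR r1, [src2] */
            reg[2] = reg[0] + reg[1];  /* ADD r2, r0, r1 */
            mem[dst] = reg[2];         /* STR r2, [dst]  */
        }

        /* CISC style: one instruction reads and writes memory directly. */
        static void cisc_add(int dst, int src1, int src2) {
            mem[dst] = mem[src1] + mem[src2];  /* ADD [dst], [src1], [src2] */
        }

        int main(void) {
            mem[1] = 40; mem[2] = 2;
            risc_add(0, 1, 2);
            cisc_add(3, 1, 2);
            printf("risc: %d, cisc: %d\n", mem[0], mem[3]);  /* both 42 */
            return 0;
        }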

    --
    Hurrah! Quoting works now!
    • (Score: 2) by HiThere (866) Subscriber Badge on Monday September 01 2014, @06:03PM (#88146) Journal

      I admit that I don't know much about ARM architecture, but I believe it still basically follows the x86 pattern. In comparison, ICL had assembler-level instructions that were packed many to a word, and words of varying length. I know this kind of thing is done in microcode with modern CPUs, but that's not the same thing at all.

      OTOH, everything has followed a similar architecture in a broader sense since the 1950s. At that time there were even systems with 10-state memories. They worked, but they were slow and expensive compared to binary memories.

      If ARM is really RISC at the surface layer, then that isn't something that I expected, so I was wrong. But reading Wikipedia on their instruction set, it seems to still be basically x86-style instructions. (Actually, the model I take as the base of the tree is the IBM 7090, but nobody knows that anymore. Or even the Intel 8008.) There ARE different approaches, but none have been successfully developed, AFAIK, for use at the end-user level. Even higher-level assemblers have failed. There were CPUs set up with LISP, FORTH, and Java as assemblers, but they were never generally successful. CDC tried to build a machine with APL at the assembler level, but it was never released (as such; it was released with a simpler assembler). There are probably other attempts I never heard of.

      What would be really interesting is a machine with a dataflow implementation at the assembly level. This would take a LOT of work (and I have no idea really just how much), but it would pretty much solve the problem of programming for multiple cores at the end-user level. Still, any functional language would have the same capability, but most functional languages solve the problem with immutability, which is nice if your problem can readily be structured that way; many problems, though, require changeable state. Dataflow comes with a different series of limitations (I'm sure they're there, but the only one I'm sure of is the difficulty of wrapping your mind around it), but it doesn't demand immutability. What it does do is make nearly everything lazy.
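
      A sketch of the firing rule dataflow rests on: a node executes only once all of its input tokens have arrived, which is what lets independent nodes be dispatched to separate cores and makes evaluation naturally lazy. The node structure and scheduler below are invented for illustration, in C; a real implementation would hand ready nodes to a pool of worker cores.

          #include <stdbool.h>
          #include <stdio.h>

          /* A dataflow node fires when all of its input tokens are present.
           * This single-threaded sketch only shows the firing rule. */
          typedef struct {
              const char *name;
              int  inputs[2];
              bool present[2];   /* which input tokens have arrived */
              int  result;
              bool fired;
          } Node;

          static void deliver(Node *n, int slot, int value) {
              n->inputs[slot]  = value;
              n->present[slot] = true;
              if (n->present[0] && n->present[1] && !n->fired) {
                  n->result = n->inputs[0] + n->inputs[1];  /* this node's op */
                  n->fired  = true;
                  printf("%s fired: %d\n", n->name, n->result);
              }
          }

          int main(void) {
              Node add = { "add", {0, 0}, {false, false}, 0, false };
              deliver(&add, 0, 40);   /* node waits: only one token present */
              deliver(&add, 1, 2);    /* second token arrives, node fires   */
              return 0;
          }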

      It's hard to draw boundaries and say "This is where things fall into a different classification", so I guess it's not surprising that we should disagree (unless I'm just wrong...given my ignorance of ARM this is quite possible), but I tend to put the weight on different styles of organization. So I count pure functional designs as one category, dataflow as one category, and standard procedural as one category. I'd count object orientation as a separate category, but it doesn't seem to fit into this split. Any of these three categories can be object-oriented, though the benefits differ, and I think that they are largest in the procedural category. Perhaps dataflow would have as great a gain.

      To me, whether memory-aligned access is required or not is a nearly irrelevant detail when deciding which family of designs is being considered. Some RISC designs are of a different family than the x86 family, but by no means all of them. In fact, I count most of the RISC designs I've seen as just ways of optimizing the x86 architecture for slightly different problems. There are exceptions, but none of the ones I'm aware of ever came to market. (Again, I'm talking about the user interface layer, not the microcode, which is, indeed, radically different.)

      --
      Javascript is what you use to allow unknown third parties to run software you have no idea about on your computer.