posted by martyb on Saturday April 30 2016, @06:19PM
from the shrinking-opportunities dept.

Intel has exited the smartphone System-on-a-Chip business, at least temporarily, with the cancellation of Broxton and SoFIA products:

Given the significance of this news we immediately reached out to Intel to get direct confirmation of the cancellation, and we can now confirm that Intel is indeed canceling both Broxton (smartphone and tablet) and SoFIA as part of their new strategy. This is arguably the biggest change in Intel's mobile strategy since they first formed it last decade, representing a significant scaling back in their mobile SoC efforts. Intel's struggles are well-publicized here, so this isn't entirely surprising, but at the same time this comes relatively shortly before Broxton was set to launch. Otherwise, as it relates to Atom itself, Intel's efforts with smaller die size and lower power cores have not ended, but there's clearly going to be a need to reevaluate where Atom fits into Intel's plans in the long run if it's not going to be in phones.

[...] Thus Intel's big wins in the smartphone space have been rather limited: they haven't had a win in any particularly premium devices, and long term partners have been deploying mid-range platforms in geo-focused regions. Perhaps the biggest recipient has been ASUS, with the ever popular ZenFone 2 creating headlines when it was announced at $200 with a quad-core Intel Atom, LTE, 4GB of DRAM and a 5.5-inch 1080p display. Though not quite a premium product, the ZenFone 2 was very aggressively priced and earned a lot of attention for both ASUS and Intel over just how many higher-end features were packed into a relatively cheap phone.

Meanwhile, just under two years ago, in order to address the lower-end of the market and to more directly compete with aggressive and low-margin ARM SoC vendors, Intel announced the SoFIA program. SoFIA would see Intel partner with the Chinese SoC vendors Rockchip and Spreadtrum, working with them to design cost-competitive SoCs using Atom CPU cores and Intel modems, and then fab those SoCs at third party fabs. SoFIA was a very aggressive and unusual move for Intel that acknowledged that the company could not compete in the low-end SoC space in a traditional, high-margin Intel manner, and that as a result the company needed to try something different. The first phones based on the resulting Atom x3 SoCs launched earlier this year, so while SoFIA has made it to the market it looks like that presence will be short-lived.

Here is a previous story about the SoFIA program.

Related:
Intel Skylake & Broxton Graphics Processors To Start Mandating Binary Blobs (only Broxton story on the site)
Intel to Cut 12,000 Jobs


Original Submission

Related Stories

Intel Skylake & Broxton Graphics Processors To Start Mandating Binary Blobs 26 comments

Intel has often been portrayed as the golden child within the Linux/BSD community and by those desiring a fully-free system without tainting their kernel with binary blobs while wanting a fully-supported open-source driver. The Intel Linux graphics driver over the years hasn't required any firmware blobs for acceleration, compared to AMD's open-source driver having many binary-only microcode files and Nouveau also needing blobs — including firmware files that NVIDIA still hasn't released for their latest GPUs. However, beginning with Intel Skylake and Broxton CPUs, their open-source driver will now too require closed-source firmware. The required "GuC" and "DMC" firmware files are for handling the new hardware's display microcontroller and workload scheduling engine. These firmware files are explicitly closed-source licensed and forbid any reverse-engineering. What choices are left for those wanting a fully-free, de-blobbed system while having a usable desktop?

Time to revive the Open Graphics Project...?

(those binary blobs may contain rootkits)


Original Submission

Intel to Cut 12,000 Jobs 68 comments

The BBC reports that Intel will cut 12,000 jobs, or about 11% of its workforce:

US tech giant Intel is shedding 12,000 jobs as it seeks to cut reliance on the declining personal computer market. The maker of computer chips will take a $1.2bn charge to cover restructuring costs. The job cuts, about 11% of Intel's workforce, will be made over the next 12 months, Intel said in a statement.


Original Submission

  • (Score: 3, Informative) by bitstream on Saturday April 30 2016, @06:28PM

    by bitstream (6144) on Saturday April 30 2016, @06:28PM (#339550) Journal

    I think Intel's approach was always hindered by the fact that they chose, or had, to keep platform compatibility with x86, which doesn't do well when energy efficiency and low complexity (fewer transistors) matter. What they have going for them is being able to produce chips at really tight geometries and clock them fast. So perhaps something new, or something existing like MIPS or SPARC, would have been the better approach. A chip with small geometry would likely use less energy, and being able to increase speed when needed would be useful.

    But what they really would need is an edge. Why Intel, when ARM is cheaper, uses less energy, and is less complex to deal with?
    I'm sure there are more factors.

    On the practical side of things: is this the end of energy-efficient x86 laptops and mini-PCs?
    If it has to be x86 on the move, then Atom ain't bad.

    • (Score: 1, Touché) by Anonymous Coward on Saturday April 30 2016, @07:18PM

      by Anonymous Coward on Saturday April 30 2016, @07:18PM (#339563)

      If abandoning backwards compatibility is such a good idea, why aren't you using an Itanium right now to post your comment in Esperanto?

      • (Score: 0) by Anonymous Coward on Saturday April 30 2016, @07:32PM

        by Anonymous Coward on Saturday April 30 2016, @07:32PM (#339567)

        This right here. You could just as easily ask why we are not using MIPS or the Power instruction set.

        The real reason was the power usage of the Atom line. ARM is much better in that space. It has nothing to do with the x86 instruction set. ARM has paid very close attention to power usage for years (they had to; it was their only way to be competitive in the cell market). It paid off handsomely for them in the long run. The Atom x86 chip by itself may have been semi-competitive, but the other chips that went along with it were not.

        Intel is also starting to get the same power usage out of their Core line. There is no reason to keep the crappier Atom design kicking along if your higher-end chip can hit the same power envelope. I have one of the NUC i5 machines. It uses the same power envelope that the Atom did 5 years ago.

        Intel is stuck with x86. They are not going to let it go. They currently cannot compete with the other cell chip providers, especially in a market that is saturated.

        • (Score: 4, Interesting) by TheRaven on Saturday April 30 2016, @09:06PM

          by TheRaven (270) on Saturday April 30 2016, @09:06PM (#339586) Journal

           ARM is much better in that space. It has nothing to do with the x86 instruction set.

           I disagree. Intel actually has some pretty neat power management stuff, powering down parts of the chip that are not in use, very quickly and at a far finer granularity than anything ARM is offering at the moment. The x86 instruction set, however, comes with a lot of fixed cost. ARM (including AArch64) instructions are fixed length, which means that the hardware logic to decode them is fairly simple. Thumb-2 is variable length, but only has 2- and 4-byte instructions, so it's fairly simple to switch between the two. x86 instructions are 1-15 bytes long, with various prefixes that can be combined in complex ways. Decoding x86 instructions is more like parsing than like a traditional instruction decoder.
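
           To make the "more like parsing" point concrete, here is a minimal toy sketch in C contrasting a fixed-length fetch with a variable-length one. The prefix and opcode values are invented for illustration and are not the real x86 encoding.

           #include <stdint.h>
           #include <stddef.h>

           /* Fixed-length ISA (AArch64-style): the next instruction always starts
              exactly 4 bytes later, so finding instruction boundaries is trivial. */
           size_t fixed_length(const uint8_t *code) {
               (void)code;
               return 4;
           }

           /* x86-style: the length is only known after scanning an arbitrary run of
              prefix bytes and then inspecting the opcode.  The values below are a
              toy model for illustration, not the real x86 encoding. */
           int is_prefix(uint8_t b) {
               return b == 0x66 || b == 0x67 || b == 0xF0 || b == 0xF2 || b == 0xF3;
           }

           size_t variable_length(const uint8_t *code) {
               size_t n = 0;
               while (is_prefix(code[n]))   /* prefixes: zero or more bytes            */
                   n++;
               uint8_t opcode = code[n++];  /* opcode: one byte in this toy model      */
               if (opcode == 0x0F)          /* two-byte opcode escape                  */
                   opcode = code[n++];
               if (opcode >= 0x80)          /* pretend these opcodes take a ModRM byte */
                   n++;
               return n;                    /* real x86 instructions can reach 15 bytes */
           }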

          On a high-end chip, this doesn't matter much. Things like the register rename engines are bigger power and area sinks. At the low end, it hurts. The instruction decoder is something that has to be powered all of the time (except in some of Intel's high-end chips when in a tight loop, but then the micro-op decoder attached to the trace cache still needs to be powered and that's at least as complex as an ARM decoder).

           Intel pushes the 'finish fast and go to sleep' model for power management, which helps to offset this a bit. The idea is that it doesn't matter if their cores use 50% more power than ARM's if they can finish the job twice as fast and then sit there in standby mode drawing almost no power. That doesn't work particularly well for smartphone use, where lots of tasks like to poll every couple of seconds. ARM's big.LITTLE strategy works very well here, because waking up an A7 or A53 core (or even keeping one awake all of the time) for these little I/O-bound jobs is very cheap, and then you can wake up the A15 or A57 when the user is actually doing something interactive.
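
           Putting rough numbers on that trade-off (illustrative figures only, not measurements): a core drawing 50% more power that finishes in half the time uses roughly a quarter less energy for the task, but only if idle power is close to zero and the core actually gets to stay asleep.

           #include <stdio.h>

           /* Illustrative numbers only: race-to-sleep versus a slower, lower-power core. */
           int main(void) {
               double slow_power = 1.0, slow_time = 1.0;   /* normalised baseline       */
               double fast_power = 1.5, fast_time = 0.5;   /* +50% power, twice as fast */
               double idle_power = 0.05;                   /* assumed near-idle draw    */

               double slow_energy = slow_power * slow_time;
               double fast_energy = fast_power * fast_time
                                  + idle_power * (slow_time - fast_time); /* asleep for the rest */

               printf("baseline energy:     %.3f\n", slow_energy); /* 1.000 */
               printf("race-to-sleep total: %.3f\n", fast_energy); /* 0.775 */
               return 0;
           }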

          --
          sudo mod me up
          • (Score: 2) by Grishnakh on Sunday May 01 2016, @02:10PM

            by Grishnakh (2831) on Sunday May 01 2016, @02:10PM (#339813)

            How much of an effect is there with the load/store architecture? With an ARM chip, you can only do operations on data that's in a register, and you have to use explicit load or store instructions to move data between registers and memory. Intel's architecture allows operating directly on memory locations, which surely must make the cache logic far more complex.

            • (Score: 2) by TheRaven on Sunday May 01 2016, @07:16PM

              by TheRaven (270) on Sunday May 01 2016, @07:16PM (#339905) Journal

              It doesn't make much difference past the decoder. Intel micro-ops are load-store (with the exception of some read-modify-write atomics, which may be implemented as remote operations lower down the memory hierarchy). If you do a memory-register operation then you get a sequence of load-modify-store micro-ops.
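
               As a sketch of what that cracking looks like (the micro-op names and fields below are invented for illustration; real Intel micro-op formats are not public):

               /* Toy model of how a memory-register x86 instruction such as
                *   add [rbx], rax
                * might be cracked into load-store micro-ops. */
               enum uop_kind { UOP_LOAD, UOP_ALU_ADD, UOP_STORE };

               struct uop {
                   enum uop_kind kind;
                   int dst;          /* destination register or temporary  */
                   int src1, src2;   /* source registers (-1 means unused) */
               };

               /* add [rbx], rax  ->  t0 = load [rbx];  t0 = t0 + rax;  store [rbx], t0 */
               const struct uop add_mem_reg[] = {
                   { UOP_LOAD,    100, 1,   -1  },  /* t0 (100) = load from address in rbx (1) */
                   { UOP_ALU_ADD, 100, 100,  0  },  /* t0 = t0 + rax (0)                       */
                   { UOP_STORE,   -1,  1,   100 },  /* store t0 to address in rbx              */
               };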

              The added complexity comes from the fact that, at least in x86-32, you have so few registers that you actually need to treat the top few stack slots as if they are registers to get good performance. This means that the load and store forwarding mechanisms are folded into the register renaming engine (as long as you don't move esp). I'm not sure if they do this in 64-bit mode, as you have a lot more registers to play with.

               The other part of the ISA that does complicate things for the register renaming engine is that you've got a lot of instructions that operate on sub-registers (e.g. having al, ah, ax, and eax as subregisters of rax). This introduces some complex dependencies that need to be tracked.
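
               A small C illustration of the aliasing the rename engine has to track, with ordinary masking arithmetic standing in for the hardware:

               #include <stdint.h>
               #include <stdio.h>

               /* AL, AH, AX and EAX are all views of the same architectural register RAX,
                * so a write to any of them depends on (part of) the previous RAX value.
                * Note the x86-64 quirk that a 32-bit write clears the upper half. */
               uint64_t write_al(uint64_t rax, uint8_t v)   { return (rax & ~0xFFULL)   | v; }
               uint64_t write_ah(uint64_t rax, uint8_t v)   { return (rax & ~0xFF00ULL) | ((uint64_t)v << 8); }
               uint64_t write_ax(uint64_t rax, uint16_t v)  { return (rax & ~0xFFFFULL) | v; }
               uint64_t write_eax(uint64_t rax, uint32_t v) { (void)rax; return v; /* upper 32 bits cleared */ }

               int main(void) {
                   uint64_t rax = 0x1122334455667788ULL;
                   printf("%016llx\n", (unsigned long long)write_al(rax, 0xAA));    /* 11223344556677aa */
                   printf("%016llx\n", (unsigned long long)write_ah(rax, 0xBB));    /* 112233445566bb88 */
                   printf("%016llx\n", (unsigned long long)write_ax(rax, 0xCCCC));  /* 112233445566cccc */
                   printf("%016llx\n", (unsigned long long)write_eax(rax, 0xDD));   /* 00000000000000dd */
                   return 0;
               }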

              --
              sudo mod me up
              • (Score: 2) by jasassin on Sunday May 01 2016, @09:06PM

                by jasassin (3566) <jasassin@gmail.com> on Sunday May 01 2016, @09:06PM (#339947) Homepage Journal

                Reading this post gave me a headache.

                From another post regarding "orthogonal"... I looked up the definition:

                Orthogonal:
                having a sum of products or an integral that is zero or sometimes one under specified conditions; as:
                a. (of real-valued functions) having the integral of the product of each pair of functions over a specific interval equal to zero
                b. (of vectors) having the scalar product equal to zero
                c. (of a square matrix) having the sum of products of corresponding elements in any two rows or any two columns equal to one if the rows or columns are the same and equal to zero otherwise; having a transpose with which the product equals the identity matrix

                I'm glad I spent the time to read that. Now I completely understand wtf orthogonal means... NOT!

                --
                jasassin@gmail.com GPG Key ID: 0xE6462C68A9A3DB5A
      • (Score: 2) by jmorris on Saturday April 30 2016, @09:40PM

        by jmorris (4844) on Saturday April 30 2016, @09:40PM (#339597)

        We aren't using Esperanto because it provides no advantage over other languages that an advocate could point to for a potential new user. The only theoretical advantage is that it is entirely artificial, and so international bodies like the UN were expected to adopt it to stop the whining from speakers of every other language that wasn't being used. But in the end people didn't care all that much, and were realistic enough to accept that English and French would cover enough to call it good; add Spanish and Chinese and you really do cover any listener who matters, because they will speak one of those. Meanwhile, every Esperanto speaker in the world could meet in a pretty small venue.

        Itanium was a failure. It was expected that advances in compiler tech would make Itanium faster than x86. That never happened, so it was incompatible, more expensive, and lower performance: a trifecta of suck.

        Intel's problem here is they only have two advantages left: their stuff performs well on servers (where power draw isn't too critical, because the rest of the server is also drawing a non-trivial amount and idle power really isn't an issue), and it runs Windows. In the phone and tablet market they are abandoning, neither advantage was helping. Nobody wants Windows on a phone or tablet. And nobody much wants to buy Android on x86, because the compatibility game turns into a disadvantage for them. Convertibles, 2-in-1s, and laptops they do buy with Windows; tablets and phones, no. Intel has had to pay OEMs like ASUS to use their stuff and finally realized that was never going to change; they weren't ever getting a monopoly where they could jack up the prices and make it back. Even Intel has a limit to how long they can lose money on every sale and try to make up for it with volume.

        • (Score: 2) by Grishnakh on Sunday May 01 2016, @02:15PM

          by Grishnakh (2831) on Sunday May 01 2016, @02:15PM (#339815)

          The other problem with Esperanto is that it's horribly biased towards European languages. If you look at it, it's basically just a derivative of Romance languages like French and Spanish and Italian, probably with some Germanic influence too. It's the language you'd come up with if you wanted to make an artificial language to try to linguistically unite Europe. But it bears zero resemblance to Asian languages like Mandarin or Japanese. So it isn't any more palatable as a "neutral" language to the Chinese than, say, Portuguese. It's just as hard for them to learn, and just as easy for us Westerners to learn.

      • (Score: 2) by TheRaven on Sunday May 01 2016, @10:37AM

        by TheRaven (270) on Sunday May 01 2016, @10:37AM (#339755) Journal
        I'll let others comment on Esperanto, but:

        why aren't you using an Itanium right now

        That one is easy. The operating systems I use happily support multiple architectures. I'm posting this from a Mac, where Apple managed to transition from PowerPC to x86 without issues for most users (and, before that, from m68k to PowerPC). The kernel works on PowerPC, x86 and ARM (including AArch64). Most of the other machines I use run FreeBSD, which supports several architectures (I use it on a mixture of x86-64, ARM and MIPS).

        I'm not using Itanium for a very simple reason: Intel could never make it competitive. The VLIW instructions used a lot of cache space, which kept the price (and power consumption) high. Compilers couldn't statically extract enough ILP to saturate the pipelines and the design meant that you still had to pay for a lot of area for register renaming (one of the biggest area costs on modern CPUs) to get good performance, so the complexity that it offloaded to software never really translated to a win for the microarchitects.

        The value for an ISA is not so much backwards compatibility, it's the software ecosystem. Krste Asanović (speaking in the context of RISC-V) estimates the cost of the ecosystem at around $1bn. Friends at ARM think that he's underestimating by a factor of two, possibly slightly more. That's the cost of getting all of the ISA-dependent software that people care about (compilers including JITs for things like Java, operating systems, debuggers, and so on) all working nicely. There are very few companies that can afford to do this for themselves so you need buy-in from others. Intel never got the critical mass for Itanium.

        Dropping backwards compatibility with desktop applications and ISAs doesn't seem to have hurt iOS and Android though, in part because ARM had already built a lot of the ecosystem by building a network of partners over the last couple of decades. Intel, in contrast, is quite bad at building this kind of relationship - they're sufficiently big and diverse that companies are worried that they'll start competing directly with any partner that becomes successful. Most of ARM's partners are bigger companies than ARM and so like having ARM as a non-threatening mediator.

        --
        sudo mod me up
        • (Score: 1) by Francis on Sunday May 01 2016, @01:17PM

          by Francis (5544) on Sunday May 01 2016, @01:17PM (#339794)

          I think cell phones got away with that in large part because desktop applications don't really work very well on handhelds without modifications anyway. The OS makers had a huge opportunity, when they were starting out on handhelds, to write something that wouldn't be bound to a specific processor.

          They're also decades further down the road than when they started, so they've got a much better idea about what's going to be possible in the future. 3D, virtual reality, various sensors, projectors: you can do a lot of really cool stuff now, and it's largely just a matter of figuring out how to do that stuff in a phone.

          Whereas when Intel designed the x86 instruction set, a lot of that stuff was so far off in the distance that I doubt they planned for that to be done on a related processor.

          The other thing is that Intel has grown to depend upon antitrust violations at key points, when they fall behind, in order to keep up. I doubt very much that they'd still be in business if they hadn't been paying systems integrators not to use AMD products and hadn't otherwise interfered with competition to their benefit. It's kind of interesting how every time AMD is well ahead of Intel, Intel finds some illegal way of preventing them from getting much out of it.

      • (Score: 2) by bitstream on Sunday May 01 2016, @11:07AM

        by bitstream (6144) on Sunday May 01 2016, @11:07AM (#339765) Journal

        Because the price of it doesn't offer any advantage. Neither is it useful in any projects that matter.

    • (Score: 3, Insightful) by Hairyfeet on Sunday May 01 2016, @12:28AM

      by Hairyfeet (75) <bassbeast1968NO@SPAMgmail.com> on Sunday May 01 2016, @12:28AM (#339627) Journal

      I don't think that was the issue; I think it's because they were as locked down and crappy as any other smartphone. I mean, who wouldn't want a phone you could load any x86 OS you wanted on, and that with a hub could be instantly turned into a quad-core desktop? I know I'd love that, and I bet a lot of guys here would love that, but that isn't what you got. All you got was an Android smartphone with less software... yawn.

      Wake me up when they have an x86 phone where I can triple-boot Android, Linux proper, and Windows, and use all my x86 programs. Now THAT would be worth giving up a bit of battery life for!

      --
      ACs are never seen so don't bother. Always ready to show SJWs for the racists they are.
      • (Score: 0) by Anonymous Coward on Sunday May 01 2016, @05:45AM

        by Anonymous Coward on Sunday May 01 2016, @05:45AM (#339697)

        Useful for you and virtually no one else.

      • (Score: 2) by bitstream on Sunday May 01 2016, @11:24AM

        by bitstream (6144) on Sunday May 01 2016, @11:24AM (#339773) Journal

        Perhaps you're on to something. They need to step back into what made their x86 platform great to begin with: openness and modularity!

        Doing a "me too" will get them almost nowhere.
        It's like.. why take on a x86 phone when the ARM one will be cheaper, faster, and generally more potent.

        • (Score: 3, Insightful) by Hairyfeet on Tuesday May 03 2016, @07:41AM

          by Hairyfeet (75) <bassbeast1968NO@SPAMgmail.com> on Tuesday May 03 2016, @07:41AM (#340705) Journal

          Exactly, it's like they forgot to ask the most important question, which is "why do people buy our products?" The answer is simple: 1. you have software going back 30+ years that will run on it, 2. the hardware is well supported on both the OS and software front, with literally dozens of OS versions and millions of programs, and 3. because it's so common, there are a ton of companies making both software and hardware that interacts with it.

          Now are ANY of those met by these Intel smartphones? Uhhh, nope, nope and... yeah, no. Again, if they just made the things like they do their regular desktops, so I can run what I want, and sold a little pocket-size hub with HDMI and a couple of USB ports that plugs into it so I can have a full REAL computer in my pocket? Then it would be a case of "shut up and take my money", and I bet there are a LOT of people that would feel the same. Business types could pick up a 5-inch and use it as an Android tablet on the go, and when they are at the meeting? Just reboot it into Windows and run their PPT right off it into the projector. Regular folks could choose which OS they like on their phone, and when they get home just plug it into a dock and use it to watch their Netflix on the big screen. Hell, you could probably easily think of a ton of scenarios where having a full x86 quad-core in your pocket that'll do everything a desktop or laptop can would rock!

          But instead? Just another Android smartphone, only with less software... why EXACTLY would I want this over a plain-jane vanilla Android phone? I sure as hell can't think of anything. But a phone that could run my Windows software when I'm at a customer's house, surf with a full Linux install like PCLinuxOS, and, if that wasn't enough, also take advantage of the Google Play store by simply pushing a button and rebooting? Fuck yeah, that would be awesome!

          --
          ACs are never seen so don't bother. Always ready to show SJWs for the racists they are.
          • (Score: 2) by bitstream on Tuesday May 03 2016, @09:29AM

            by bitstream (6144) on Tuesday May 03 2016, @09:29AM (#340737) Journal

            OTOH, it's perhaps good to kill off x86.

            Now we just have to unlock the bootloader(s).

  • (Score: 0, Funny) by Anonymous Coward on Saturday April 30 2016, @07:28PM

    by Anonymous Coward on Saturday April 30 2016, @07:28PM (#339564)

    As soon as Facebook starts making the CPU in your phone, bro, we gonna have a bro revolution, bro. You know you want a little Facebook in your pants. With a Facebook CPU in your Facebook phone connecting you to Facebook, you gonna be balls deep in the pussy for reals.

  • (Score: 3, Insightful) by dltaylor on Saturday April 30 2016, @07:47PM

    by dltaylor (4693) on Saturday April 30 2016, @07:47PM (#339572)

    The problem with making low-end x86s is that the fundamental architecture of the chip is horrid.

    If it hadn't been for IBM's purchasing people overriding the engineers on the PC CPU (because the then-failing Intel would sell the pathetic 8088 cheap), we would not have to be dealing with that in the 21st century, or even the last couple of decades of the 20th.

    If you are not trying to run Windows-compatible binaries, then there's no reason to use x86, because that is all it is really good for. Sure, Intel ended up with a big pile of cash, and at least a few competent engineers, because of the IBM deal, so the chips are usable for other operating systems; but picture a world where the PC had come with a CPU with 8 fully orthogonal registers and significantly more performance, even on an 8-bit bus, and extrapolate that to what we could have as mid-price (between ARM and POWER) CPUs now. Saddening.

    • (Score: 2) by takyon on Saturday April 30 2016, @08:17PM

      by takyon (881) <reversethis-{gro ... s} {ta} {noykat}> on Saturday April 30 2016, @08:17PM (#339580) Journal

      And what one-time % gain would be had at the cost of breaking compatibility?

      --
      [SIG] 10/28/2017: Soylent Upgrade v14 [soylentnews.org]
      • (Score: 0) by Anonymous Coward on Saturday April 30 2016, @09:36PM

        by Anonymous Coward on Saturday April 30 2016, @09:36PM (#339595)

        Also, memory was wildly expensive. I can buy 16 GB of RAM for 100 bucks these days. When x86 came out, 1 MB of RAM cost a few hundred bucks. A variable-length instruction set made sense.

        These days x86 is little more than a decoder in front of a RISC micro-op CPU.

      • (Score: 2) by Gravis on Saturday April 30 2016, @09:47PM

        by Gravis (4596) on Saturday April 30 2016, @09:47PM (#339601)

        a 200% increase in battery life.

      • (Score: 2) by dltaylor on Saturday April 30 2016, @11:13PM

        by dltaylor (4693) on Saturday April 30 2016, @11:13PM (#339617)

        Didn't really read it, did you?

        This was when the PC was originally developed; whatever CPU was selected would have been 100% compatible, because it would have been the only one.

        The engineers originally put in a Z8000, the architecture of which resembles the IBM 360 mainframe.

    • (Score: 4, Insightful) by Hairyfeet on Sunday May 01 2016, @01:19AM

      by Hairyfeet (75) <bassbeast1968NO@SPAMgmail.com> on Sunday May 01 2016, @01:19AM (#339640) Journal

      So you are saying RISC is crap then? Because I hope you are aware that both Intel and AMD have been RISC since the late 90s, and that the x86 compatibility part is less than 8% of the die and IIRC is quite easily turned off when not needed. This is why they are called "x86 compatible" and not simply x86, because the last pure x86 chip was the P5, also known as the Pentium I. After that they switched to the Pentium Pro [wikipedia.org] line, which I quote from Wikipedia: "x86 instructions are decoded into 118-bit micro-operations (micro-ops). The micro-ops are RISC-like; that is, they encode an operation, two sources, and a destination."

      So there really is no point in getting rid of the x86 compatibility layer; it would provide little to no gain while breaking a ton of software.

      --
      ACs are never seen so don't bother. Always ready to show SJWs for the racists they are.
      • (Score: 2) by TheRaven on Sunday May 01 2016, @10:51AM

        by TheRaven (270) on Sunday May 01 2016, @10:51AM (#339759) Journal

        x86 compatibility part is less than 8% of the die

        Two things. First, that 8% is on a high-end chip. It's quite a lot more if you're looking at a simple in-order pipeline. The area required to take an ARM instruction and turn it into something that can be executed by the next stage is tiny in comparison to the logic required to take an instruction that is between 1 and 15 bytes long and then split it into a sequence of micro-ops that must (to get good performance) then be fused together into things that actually make sense to stick into a pipeline.

        Second, the decoder is something that must be powered as long as you're feeding instructions into the pipeline. That's 8% (more on a simple pipeline) of the chip that you can't turn off in any low power mode other than complete standby. Oh, and that's not including the on-die memory required to store all of the microcode (most ARM cores have little or no microcode).

        and IIRC is quite easily turned off when not needed

        Sort of. Intel only does this on their really high-end chips (and only since the P4). When you're in a tight loop, it can continue executing decoded micro-ops from the trace cache. This really just moves power consumption around. You can drop the decoder into a lower-power state, but you need to keep the trace cache and the micro-op decoder (which is about as complex as an ARM decoder) powered all of the time, including when the main decoder is running. This doesn't help with your minimum power consumption.

        So you are saying RISC is crap then?

        RISC means a lot of things. Here are a few of the definitions from the original paper:

        • Every instruction is single-cycle (except loads and stores).
        • All instructions are orthogonal.
        • All instructions other than loads and stores operate solely on registers.
        • No complex addressing modes.
        • Large register set.
        • No microcode.
        • All instructions are fixed length.

        Neither ARM nor x86 meets all of these definitions, though AArch64 comes a lot closer. Modern x86 micro-ops (which vary hugely between microarchitectures) meet some of these definitions, but most are not single cycle, many are non-orthogonal, and many depend on microcode.
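
        For contrast with the variable-length x86 case discussed above, here is a sketch of a fixed-length, registers-only instruction word; the field layout is made up for illustration and is not any real ISA's encoding.

        #include <stdint.h>

        /* One 32-bit instruction word: opcode plus three 5-bit register fields.
         * Every instruction is the same size and names only registers, which is
         * what keeps a RISC-style decoder small. */
        uint32_t encode(uint8_t opcode, uint8_t rd, uint8_t rs1, uint8_t rs2) {
            return ((uint32_t)opcode << 24)
                 | ((uint32_t)(rd  & 0x1F) << 16)
                 | ((uint32_t)(rs1 & 0x1F) << 8)
                 | ((uint32_t)(rs2 & 0x1F));
        }

        /* Decoding is a handful of shifts and masks; no length-finding needed. */
        void decode(uint32_t insn, uint8_t *opcode, uint8_t *rd, uint8_t *rs1, uint8_t *rs2) {
            *opcode = (uint8_t)(insn >> 24);
            *rd  = (insn >> 16) & 0x1F;
            *rs1 = (insn >> 8)  & 0x1F;
            *rs2 = insn & 0x1F;
        }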

        --
        sudo mod me up
    • (Score: 2) by bitstream on Sunday May 01 2016, @11:30AM

      by bitstream (6144) on Sunday May 01 2016, @11:30AM (#339777) Journal

      Think of a computer without memory segmentation! ;-)
      Or a design that properly handles not-yet-implemented calls or features while still presenting a consistent interface to software.

      *yummy*