Both Facebook and Netflix implemented their eponymous apps with Web technologies. Despite spending millions of dollars, neither of them could achieve an iPhone-like user experience (60 frames per second and less than 100 ms response to user input) on anything less powerful than a system-on-chip (SoC) with four ARM Cortex-A9 cores.
In contrast, numerous products like infotainment systems, in-flight entertainment systems, harvester terminals and home appliances prove that you can achieve an iPhone-like user experience (UX) on single-core Cortex-A8 SoCs. Our above-mentioned manufacturer HAM Inc. (renamed for the sake of confidentiality) verified these results by building both a Web prototype and a Qt prototype.
In this white paper, Burkhard Stubert explains how he could save one of the world's largest home appliance manufacturers millions of Euros by choosing Qt over HTML. The secret? Qt scales down to lower-end hardware a lot better, without sacrificing user experience.
With a five times smaller footprint, four to eight times lower RAM requirements, and a more efficient rendering flow than HTML, Qt provides faster start-up times and maintains the cherished 60 fps and 100 ms response time where HTML would struggle. The calculations show that by letting you downgrade your SoC by just one tier, Qt can reduce your hardware costs by over 53%.
(Score: 1, Interesting) by Anonymous Coward on Friday February 23 2018, @09:13AM (2 children)
Back when Java bytecode was still interpreted (rather than JIT'ed), Java fans kept claiming that Java code could be *faster* than C code, even though Java at the time was the slowest of all languages.
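Pure interpretation is slow for a structural reason: every bytecode operation pays a fetch-and-dispatch cost on top of the actual work. A minimal toy stack-machine interpreter (my own illustration, not real JVM bytecode or anything from this thread) makes the overhead visible:

```java
import java.util.ArrayDeque;
import java.util.Deque;

public class TinyInterp {
    // Opcodes for a toy stack machine (hypothetical, not real JVM bytecode).
    static final int PUSH = 0, ADD = 1, HALT = 2;

    static long dispatches = 0;

    static int run(int[] code) {
        Deque<Integer> stack = new ArrayDeque<>();
        int pc = 0;
        while (true) {
            dispatches++;                 // every op pays a dispatch cost
            switch (code[pc]) {
                case PUSH: stack.push(code[pc + 1]); pc += 2; break;
                case ADD:  stack.push(stack.pop() + stack.pop()); pc += 1; break;
                case HALT: return stack.pop();
            }
        }
    }

    public static void main(String[] args) {
        // Equivalent of the natively compiled expression 2 + 3 + 4:
        int[] program = { PUSH, 2, PUSH, 3, ADD, PUSH, 4, ADD, HALT };
        System.out.println(run(program));  // 9
        System.out.println(dispatches);    // 6 fetch/decode steps for 2 additions
    }
}
```

A compiled program just executes the two additions; the interpreter additionally fetches and decodes an opcode at every step, which is where the constant-factor slowdown comes from.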
I tried arguing that if interpreted Java bytecode really were faster than compiled C code, then you could write the Java runtime in Java and show that it runs Java bytecode faster than the native Java runtime does. Of course, it would need to run on top of the native Java runtime, just like any other Java program.
Not only would that have to be faster if the claim were true, it would also mean that running the Java bytecode version of the Java runtime on top of itself is faster than running it on top of the native Java runtime. As a consequence, you could keep stacking it on top of itself, making it faster every time, until you approach infinite speed.
Even that didn't convince them that there is no way an interpreted language is faster than a compiled one.
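The infinite-regress argument can be put in code. Suppose, as a toy model (my own assumption, not from the comment), that each interpretation layer multiplies running time by a constant factor k. The "interpreted is faster" claim amounts to asserting k < 1, which would make stacked interpreters approach zero cost; with any realistic k > 1, stacking explodes instead:

```java
public class InterpreterStacking {
    // Hypothetical per-layer slowdown factor; any real interpreter has k > 1.
    // The "interpreted beats compiled" claim amounts to asserting k < 1.
    static double costWithLayers(double nativeCost, double k, int layers) {
        double cost = nativeCost;
        for (int i = 0; i < layers; i++) {
            cost *= k;   // each extra interpreter layer multiplies total cost
        }
        return cost;
    }

    public static void main(String[] args) {
        // Realistic interpreter, roughly 10x per layer: stacking blows up.
        System.out.println(costWithLayers(1.0, 10.0, 3));  // 1000.0
        // The claim's world, k < 1: stacking approaches zero cost. Absurd.
        System.out.println(costWithLayers(1.0, 0.5, 3));   // 0.125
    }
}
```

The reductio is exactly the one in the comment: if adding a layer ever made things faster, adding arbitrarily many layers would make things arbitrarily fast.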
(Score: 0) by Anonymous Coward on Friday February 23 2018, @05:52PM
lmao!
(Score: 2) by Wootery on Friday February 23 2018, @07:40PM
But that's a straw man. Not even the silliest hard-boiled JVM advocates ever tried to claim that a 'pure' (non-JIT) interpreter could outperform traditional ahead-of-time compilation.
Thankfully, modern JVMs are not pure interpreters. They're far faster than pure interpreters... but, yes, they're reliably slower than highly optimised C code compiled with a serious C compiler.