posted by janrinok on Saturday June 20 2015, @01:43AM
from the still-trying dept.

Mozilla's Project Electrolysis aims to run web content and the browser user interface in separate processes. It has been enabled by default in recent Nightly builds:

In current versions of desktop Firefox, the entire browser runs in a single operating system process. In particular, the JavaScript that runs the browser UI (also known as "chrome code") runs in the same process as the code in web pages (also known as "content" or "web content"). Future versions of Firefox will run the browser UI in a separate process from web content. In the first iteration of this architecture all browser tabs will run in the same process, and the browser UI will run in a different process. In future iterations, we expect to have more than one content process.

Developer Will Bamberg says the change will bring performance, stability, and security improvements. "There are three main reasons for making Firefox run content in a separate process: performance, security, and stability," Bamberg says. "The goal is to reduce 'jank' -- those times when the browser seems to briefly freeze when loading a big page, typing in a form, or scrolling." On security: "In multiprocess Firefox, content processes will be sandboxed. A well-behaved content process won't access the filesystem directly; it will have to ask the main process to perform the request." Bamberg notes that, for now, even "well-behaved" content processes need access to much of the network and file system; that access would be much more restricted under the changes.
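To make the chrome/content split concrete, here is a minimal sketch of the kind of message passing e10s imposes, using the frame message manager API available in current nightlies; the chrome:// URL and the "Example:Title" message name are made-up placeholders.

    // Runs in the chrome (UI) process, e.g. from the browser console or an
    // add-on. It loads a frame script into the selected tab's content process
    // and listens for a reply. The URL and message name are placeholders.
    var mm = gBrowser.selectedBrowser.messageManager;
    mm.addMessageListener("Example:Title", function (msg) {
      // msg.data was serialized and sent across the process boundary
      console.log("content process reports: " + msg.data.title);
    });
    mm.loadFrameScript("chrome://example/content/probe.js", false);

    // probe.js -- runs in the content process. It may touch the page via the
    // "content" global, but anything privileged (filesystem access, etc.)
    // has to be requested from the chrome process by message.
    sendAsyncMessage("Example:Title", { title: content.document.title });

The sandboxing Bamberg describes is the second half of that picture: the content side keeps the page, the chrome side keeps the privileges.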

Former Mozilla CEO Brendan Eich has announced a project called WebAssembly that could replace asm.js:

It's by now a cliché that JS has become the assembly language of the Web. Rather, JS is one syntax for a portable and safe machine language, let's say. Today I'm pleased to announce that cross-browser work has begun on WebAssembly, a new intermediate representation for safe code on the Web.

What: WebAssembly, "wasm" for short, .wasm filename suffix, a new binary syntax for low-level safe code, initially co-expressive with asm.js, but in the long run able to diverge from JS's semantics, in order to best serve as common object-level format for multiple source-level programming languages.

Who: A W3C Community Group, the WebAssembly CG, open to all. As you can see from the github logs, WebAssembly has so far been a joint effort among Google, Microsoft, Mozilla, and a few other folks. I'm sorry the work was done via a private github account at first, but that was a temporary measure to help the several big companies reach consensus and buy into the long-term cooperative game that must be played to pull this off.
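For readers who have not looked at asm.js before, the subset that wasm is "initially co-expressive with" is ordinary JavaScript with a "use asm" prologue and explicit type coercions. A minimal, purely illustrative module looks like this:

    // A tiny asm.js module. The "use asm" prologue and the |0 coercions tell
    // an asm.js-aware engine that x, y and the return value are 32-bit ints,
    // so the body can be compiled ahead of time. In any other engine this is
    // plain JavaScript and still runs, just without the fast path.
    function AddModule(stdlib, foreign, heap) {
      "use asm";
      function add(x, y) {
        x = x | 0;
        y = y | 0;
        return (x + y) | 0;
      }
      return { add: add };
    }

    var add = AddModule(this, {}, new ArrayBuffer(0x10000)).add;
    console.log(add(2, 3)); // 5

The initial wasm format is essentially a binary encoding of modules like this, which is what makes a JavaScript polyfill back to asm.js plausible.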


Original Submission

Related Stories

Mozilla's 18-Week Release Cycle is Too Slow and Other Browser News

Mozilla is planning to speed up Firefox's current 18-week release cycle, code in multiprocess support, and phase out the XUL and XBL languages currently used to build the Firefox UI (a change that may eventually break extensions):

Mozilla is planning big changes in how it builds its Firefox web browser, including speeding up its release schedule and – in the long term – getting rid of some of the Mozilla-specific technologies that have traditionally been used to build the browser's UI and add-ons. The decisions were discussed at Moz's "Coincidental Work Week" meetup in Whistler, British Columbia, Canada during the last week of June and were made public in a pair of forum posts by Mozilla engineering director Dave Camp on Monday. For starters, Mozilla plans to ditch its current 18-week release cycle in favor of something more agile. "We think there are big wins to be had in shortening the time that new features reaches users," Camp wrote. "Critical fixes should ship to users in minutes, not days. Individual features rolling out to small audiences for focused and multi-variate testing."

Firefox 39 was released on Monday. Changes include vsync (smooth scrolling) on Mac OS X, the addition of Unicode 8.0 skin tone emoji, removal of SSLv3, improving IPv6 fallback to IPv4, and support for the ECMAScript 2015 Proxy object. Mozilla has also unveiled a "Games Technology Roadmap," which sets out goals of further improving HTML5 + JavaScript performance relative to native applications, shipping the unfinished WebGL 2.0, and minimizing common issues like audio/graphics latency and "jitter".
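Of those, the Proxy object is the most far-reaching for script authors: it lets one object intercept fundamental operations (property reads, writes, enumeration, and so on) on another. A trivial illustrative example of a get trap:

    // ES2015 Proxy: wrap an object so reads of missing properties are logged
    // and given a default instead of silently returning undefined.
    var settings = { theme: "dark" };
    var guarded = new Proxy(settings, {
      get: function (target, name) {
        if (name in target) {
          return target[name];
        }
        console.warn("no setting named " + String(name) + ", using null");
        return null;
      }
    });

    console.log(guarded.theme);    // "dark"
    console.log(guarded.fontSize); // warns, then null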

Google says TurboFan, a new optimizing JavaScript compiler that will replace Crankshaft, will speed up various aspects of JavaScript performance (it currently shows a "29% increase on the zlib score of the Octane benchmark"). It has been shipping since Chrome 41, but will be improved and switched on in more code scenarios over time until it completely replaces the Crankshaft compiler.

Microsoft's new Edge browser will not include ActiveX and Silverlight support, and will instead use HTML5's Media Source and Encrypted Media Extensions for "premium media", as well as MPEG-DASH and Common Encryption (CENC). Internet Explorer 11 will retain Silverlight support.
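Media Source Extensions, for those unfamiliar with it, moves segment fetching into page script: the page creates a MediaSource, attaches it to a video element, and appends fetched media segments itself. A rough sketch follows; the codec string and segment URL are placeholders.

    // Media Source Extensions in miniature. DASH players and EME-protected
    // "premium" playback are built on top of this same appendBuffer loop.
    var video = document.querySelector("video");
    var mediaSource = new MediaSource();
    video.src = URL.createObjectURL(mediaSource);

    mediaSource.addEventListener("sourceopen", function () {
      var buffer = mediaSource.addSourceBuffer('video/mp4; codecs="avc1.42E01E, mp4a.40.2"');
      var xhr = new XMLHttpRequest();
      xhr.open("GET", "segments/first.mp4");   // placeholder URL
      xhr.responseType = "arraybuffer";
      xhr.onload = function () {
        buffer.appendBuffer(xhr.response);     // hand the bytes to the decoder
      };
      xhr.send();
    });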


Original Submission

This discussion has been archived. No new comments can be posted.
The Fine Print: The following comments are owned by whoever posted them. We are not responsible for them in any way.
  • (Score: 0) by Anonymous Coward on Saturday June 20 2015, @01:57AM

    by Anonymous Coward on Saturday June 20 2015, @01:57AM (#198517)

There already is a real WASM assembler, none of this silly webASM.

  • (Score: 2, Disagree) by GungnirSniper on Saturday June 20 2015, @01:58AM

    by GungnirSniper (1671) on Saturday June 20 2015, @01:58AM (#198518) Journal

    There shouldn't be anything named "chrome code" in a browser that is struggling to differentiate from Google Chrome.

    • (Score: 5, Informative) by stormwyrm on Saturday June 20 2015, @02:22AM

      by stormwyrm (717) on Saturday June 20 2015, @02:22AM (#198523) Journal

The term 'chrome' was jargon already in use in Mozilla development since long, long before Google Chrome, or even Google itself for that matter, existed. It arguably dates back at least to the Netscape days, as I've heard reference to it in Netscape configuration, and refers to the user interface parts of the browser that are not the browser window, e.g. toolbars, tabs, menus, etc. See here [mozilla.org]. It ultimately seems to derive from the use of the word chrome [catb.org] for decorations that add nothing to the actual functionality of a system but are meant to attract users, by analogy to the "chrome" used to decorate motorcars.

      --
      Numquam ponenda est pluralitas sine necessitate.
      • (Score: 3, Insightful) by frojack on Saturday June 20 2015, @03:54AM

        by frojack (1554) on Saturday June 20 2015, @03:54AM (#198537) Journal

That may be the inside baseball story, but it's not really germane to the OP's point.

        --
        No, you are mistaken. I've always had this sig.
        • (Score: 0) by Anonymous Coward on Saturday June 20 2015, @07:09PM

          by Anonymous Coward on Saturday June 20 2015, @07:09PM (#198773)

          Sure it is. The fact that Mozilla have been using that term a lot longer than Google has been making a browser named Chrome is entirely relevant. The intersection of people who are technical enough to know that Mozilla uses that word for parts of their browser, but not technical enough to know that it is entirely unrelated to Google's browser has to be pretty damn small. Is it really reasonable for Mozilla to change a name they use internally to avoid confusing a small number of idiots?

  • (Score: 0) by Anonymous Coward on Saturday June 20 2015, @02:34AM

    by Anonymous Coward on Saturday June 20 2015, @02:34AM (#198526)

    That sounds like the kind of "truism" Dilbert's boss would come up with.

    • (Score: 5, Interesting) by VortexCortex on Saturday June 20 2015, @06:48AM

      by VortexCortex (4067) on Saturday June 20 2015, @06:48AM (#198561)

      As someone who develops in "pure" ASM.js from time to time, I'd say, "WASM is like a Java VM for the web." Unfortunately, we actually need one (again).

      It's too bad Sun dropped the ball and brought the whole kitchen sink of API glut (and attack surface) into Java Applets instead of tailoring Applets to have a lean mean web API for Java bytecode (similar to what they did for J2ME), because a Web VM is what we've needed. Javascript was just supposed to be the inefficient glue that let applets interact with the HTML elements, so it wasn't meant for efficiency since Java would do the heavy lifting, but now Java is all but banned from browsers. A standard VM bytecode is great because that means I can compile directly into a portable format from other languages -- You can compile C into Java VM or Perl Parrot VM bytecode, for example. It allows us to say, "Screw your 'expressiveness', damn starry eyed language devs! I'm compiling all my existing code for your VM's bytecode and saving time instead of reimplementing my wheels in your syntax."

An efficient compilable intermediary opcode for the web is needed because Javascript's "prototypical" orientation is practically designed to be antagonistic to performance. You never hear people announcing "We've sped up C by another 15%" like you keep hearing with new "Javascript engines". Efficient languages get marginal optimizations since they weren't designed to fight basic Von Neumann architectural realities in the first place. JS engine devs are not speeding up anything; software doesn't change the hardware speed. They're just working around existing inefficiencies, and they have enough to go on for decades of "speed improvements" thanks to the inefficient design of JS.

      I liked Google's NaCl offering, but it's clunky and not ARM compliant bytecode (it's a 'safe' subset of x86 machine code). JS engines are to the point now that if you write code in a procedural or functional way the JIT compiler will get the code going about as fast as on a VM's JIT (like Java's). My SHA256, 512, and SHA3 implementation in ASM.js performs as fast as my C implementations in FF, and only about 8% slower than C when running on Chrome(-ium) or Safari. IE... well, it runs my code, but at 300% slower. Last I checked they hadn't sped up the ArrayBuffers memory access yet, but I haven't tested IE for performance in a long while. Unreal devs said they got their engine code into ASM.js and running at about 2x slower than native, but that includes lots of calls outside the ASM.js which a hashing function doesn't do, so my comparisons are faster than theirs based on a much different use case.

I'll keep targeting ASM.js, though I have many a beef with it, because it's valid JS code. That means ASM.js (and initially WEBASM) will still run even on a browser that doesn't know how to compile it to machine code and cache it like Firefox does. Chrome's generic optimizations alone make ASM.js code run pretty fast, but usually not as fast as FF (Chrome's NaCL goes faster than ASM.js, but FF and IE can't run it). The biggest bottleneck is calling out of the ASM.js context to do things like process input or manipulate the DOM or do rendering calls (you can't access Canvas, WebGL, Ajax, or raw audio pipelines directly from pure ASM.js). It looks like WASM seeks to fix some of that boundary inefficiency by eventually allowing us to take that performance-hogging Javascript out of the loop completely. I probably won't switch to WASM until it's more mature and has a decent debugger. Currently my metalanguage can compile down to ASM.js and provide specially formatted line number directives via comments, which allows me to single-step the code in the JS debugger by preventing its compilation to ASM.js.

      If WebASM catches on then I guess I'll get to add yet another platform to my meta language and solution support list. Emscripten uses LLVM to target ASM.js so, for example, you can compile C code into WASM yourself and run it in a browser if you want. I don't use Emscripten or LLVM for my meta language because I utilize the target language paradigms where available instead of taking the convoluted trip to low level bytecode and back (unlike LLVM, my intermediary format is a high level bytecode which can also be interpreted to run directly in a VM). With a meta compiler I can say $LANG is the Assembly of $PLATFORM. Screw languages, they all suck. Best to make one you like and only write your code in it once, then compile it down to whatever newfangled language / platform they'll come out with next year.

It used to be that language devs would frown on us performance sticklers, but now that mobiles are on the scene with lower-power CPUs, the poor efficiency of certain crappy language designs is getting rubbed in frustrated users' faces. If I was Google, I'd fuck all these "ASM fur da webz" people over by just sticking a Dalvik VM in Chrome so you could run Android apps in the browser, oh, wait... [theverge.com] Why do we need WASM again? Firefox, IE, Safari, et al. should just integrate the open-source Android Dalvik VM bytecode and be done (and leverage an existing pool of Android developers).
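As a concrete illustration of the heap boundary VortexCortex describes above (a sketch, not his code): an asm.js module sees the outside world only through its ArrayBuffer heap and whatever foreign functions it is handed, which is exactly where the call-out cost lives.

    // Sketch: an asm.js module that sums bytes in its heap. The calling
    // JavaScript fills the heap through a typed-array view; the module itself
    // never touches the DOM, canvas or network -- all of that means crossing
    // back out into ordinary JavaScript.
    function SumModule(stdlib, foreign, heap) {
      "use asm";
      var bytes = new stdlib.Uint8Array(heap);
      function sum(len) {
        len = len | 0;
        var i = 0;
        var total = 0;
        for (i = 0; (i | 0) < (len | 0); i = (i + 1) | 0) {
          total = (total + (bytes[i >> 0] | 0)) | 0;
        }
        return total | 0;
      }
      return { sum: sum };
    }

    var heap = new ArrayBuffer(0x10000);            // 64 KiB, a valid asm.js heap size
    var view = new Uint8Array(heap);
    view[0] = 40; view[1] = 2;
    console.log(SumModule(this, {}, heap).sum(2));  // 42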

  • (Score: 4, Interesting) by Runaway1956 on Saturday June 20 2015, @02:35AM

    by Runaway1956 (2926) Subscriber Badge on Saturday June 20 2015, @02:35AM (#198527) Journal

    The first couple iterations of the nightly builds incorporating multithreading simply crashed on my aging Sledgehammer Opteron. I sent in the crash reports, of course. Over a period of a few weeks, that problem was fixed. But, I kept watching the resource usage increase. The processors work more, and they need more memory with multithreading.

    I can't say that this is bad, exactly, but if you're stuck on old hardware, you certainly won't think this is a good thing. For me, the memory usage is worse than the increased CPU usage. I built this box with all the memory it could hold - but two sticks went bad. I'm down to 4 gig of memory, and Firefox can easily consume more than two gig. I've seen it go over three gig. If you do more than browse the web, that is just unacceptable on old hardware.

    Yeah, I've been telling myself for a long time that it's time for a major upgrade. I'm not willing to pay the price to replace the bad memory, when little more than double that money will buy a whole new box!

    • (Score: 4, Interesting) by Ethanol-fueled on Saturday June 20 2015, @03:22AM

      by Ethanol-fueled (2792) on Saturday June 20 2015, @03:22AM (#198535) Homepage

      "The goal is to reduce 'jank' -- those times when the browser seems to briefly freeze when loading a big page, typing in a form, or scrolling. "

      NoScript already did that for them. Anytime a page load chokes my Firefox it's because of either Flash or some other third party advertising or analytics bullshit running in the background. And NoScript is the method I will continue to use to avoid "jank" because I'm not going to let Mozilla help shove their corporate buddies' ads up my ass just because they're using more lube this time.

      • (Score: 3, Interesting) by K_benzoate on Saturday June 20 2015, @04:25AM

        by K_benzoate (5036) on Saturday June 20 2015, @04:25AM (#198541)

        They've actually implemented an internal feature to strip out the worst offending tracking scripts. They're calling it Tracking Protection, and it's available as a toggle in about:config in Firefox 38. It's going to get a proper UI checkbox in the next version. It'll be interesting to see if their blocklist includes Google-Analytics, by far the most ubiquitous tracking script on the web. That would essentially sever any remaining ties and goodwill between Mozilla and Google.
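For anyone who would rather flip it now than wait for the checkbox, the switch is a single boolean pref; in user.js form (pref name as of Firefox 38, so treat it as a best guess on later versions):

    // user.js in the Firefox profile directory: enable the built-in Tracking
    // Protection blocklist. Equivalent to toggling the same pref in about:config.
    user_pref("privacy.trackingprotection.enabled", true);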

        It won't affect me much as I already run Noscript, and Google's tracking domains are blocked by my router, but if Mozilla can push this feature out to a substantial amount of web users that'll be a big improvement. The tracking/advertising business model needs to die, even if it means losing a lot of good sites that can't make their finances work. It just has to happen. The current state is intolerable and incentivizes site owners to wage war against their users' privacy and security.

        --
        Climate change is real and primarily caused by human activity.
        • (Score: 0) by Anonymous Coward on Saturday June 20 2015, @08:49AM

          by Anonymous Coward on Saturday June 20 2015, @08:49AM (#198585)

Too bad they already fucked up so badly with their 3rd-party commercial plugins installed by default and other crap that I'm looking for a replacement browser. The H264 plugin was the last nail in the coffin, and whatever piece-of-crap commercial plugin they are going to include next is the final blow to that final nail. And I have not forgotten the other crap either. One of the most important ones for me is neglecting Thunderbird. Mozilla has jumped the shark, and now I just don't see a way back to the future.

As I've said before, I'm testing qupzilla for my needs. What I hope for is a NoScript-type plugin to be released soon.

          • (Score: 0) by Anonymous Coward on Saturday June 20 2015, @01:07PM

            by Anonymous Coward on Saturday June 20 2015, @01:07PM (#198649)

As I've said before, I'm testing qupzilla for my needs. What I hope for is a NoScript-type plugin to be released soon.

            I just downloaded qupzilla for OS X. It's clunky, ugly as sin and is missing basic features. The preferences panel can't even be accessed. They've marked the bug fixed at least twice but it still doesn't work. Their excuse is they don't have a Mac to test on, so how can they even claim it's fixed? This looks like a well intentioned amateur project.

    • (Score: 2) by frojack on Saturday June 20 2015, @04:41AM

      by frojack (1554) on Saturday June 20 2015, @04:41AM (#198544) Journal

      Yeah, I've been telling myself for a long time that it's time for a major upgrade. I'm not willing to pay the price to replace the bad memory, when little more than double that money will buy a whole new box!

      What kind of memory is that expensive? I recently surfed over to Crucial.Com and upgraded the memory in my main workstation, and was amazed at how cheap the stuff had gotten.

      Be that as it may, I'm amazed at how long these guys have been getting along with a single thread. What in the world were they waiting for?

      --
      No, you are mistaken. I've always had this sig.
      • (Score: 2) by Runaway1956 on Saturday June 20 2015, @01:47PM

        by Runaway1956 (2926) Subscriber Badge on Saturday June 20 2015, @01:47PM (#198659) Journal

        http://www.newegg.com/Product/Product.aspx?Item=N82E16820614408&cm_re=3200_DDR_SDRAM_ecc_registered-_-20-614-408-_-Product [newegg.com]

DDR2 memory is considerably cheaper, but the old DDR memory is quite expensive in gig or multi-gig sizes. With the original firmware, I couldn't use anything larger than 1 gig sticks. When I upgraded the firmware, it could use 2 gig sticks, and at that time, I was willing to spend the money for 8 gig of memory. Today that same memory is more expensive than when I bought my original memory, and it just isn't worth it. If I'm willing to go with an FX chip instead of an Opteron, I can order a mainboard, CPU, and memory bundle for just about double the price of 2 gig of memory on that page.

I COULD hit eBay for working server pulls, but sometimes that doesn't work out so well either. Years ago, I bought an Opteron for overclocking - and when I got it, it just would not overclock. Good, solid CPU at its rated speeds, but it went freaking bonkers when you tweaked voltage, speed, or anything at all.

        Yes, an upgrade is in the near future, I just have to decide whether I'm sticking with an Opteron, or going with one of the FX chips.

      • (Score: 1) by Francis on Saturday June 20 2015, @01:49PM

        by Francis (5544) on Saturday June 20 2015, @01:49PM (#198661)

It's a more complicated implementation. Chrome processes are completely separate, so they waste a lot of memory on things like toolbars. The new builds of Firefox should be better thought out.

        Or it got in the way of them chasing away the users by making it into a Chrome clone.

        • (Score: 2) by frojack on Saturday June 20 2015, @09:53PM

          by frojack (1554) on Saturday June 20 2015, @09:53PM (#198809) Journal

          Why would toolbars waste a lot of memory?

          Multiple windows and processes can use the same code segments, with different data segments. Data for a tool bar is minuscule. Basically a bunch of state switches etc. The visual representation of stuff that makes up a screen is almost totally driven from static code segments with tiny amounts of data behind them.

          --
          No, you are mistaken. I've always had this sig.
          • (Score: 1) by Francis on Tuesday June 23 2015, @06:53AM

            by Francis (5544) on Tuesday June 23 2015, @06:53AM (#199762)

They don't, but they have to be connected to something, and in this case it's basically everything. Mozilla has spent a lot of time making sure the toolbar doesn't have to be completely duplicated and connected to everything. That's why RAM is generally better utilized on Fx than Chrome.

      • (Score: 1, Insightful) by Anonymous Coward on Saturday June 20 2015, @01:50PM

        by Anonymous Coward on Saturday June 20 2015, @01:50PM (#198664)

        What kind of memory is that expensive?

        I'm guessing older ECC memory. The price arc for RAM is:
        - high when it's new/faster.
        - drops as it becomes common in new computers.
        - low when it's not the latest tech and the resellers have a lot of it.
        - lower when it's at least 2 generations behind and resellers have enough stock.
        - increases when it's old enough that it's not readily available.
        - higher when it's sold out of most places.
        - higher than ever when only a few resellers have it.

        tl;dr - supply and demand applies to RAM.

      • (Score: 1) by Pino P on Saturday June 20 2015, @03:08PM

        by Pino P (4721) on Saturday June 20 2015, @03:08PM (#198700) Journal

        What kind of memory is that expensive?

        The memory that comes with a new motherboard because you've already maxed yours. There are still plenty of compact laptops that won't go higher than 2 GB, including a lot of netbooks and convertible x86 tablets.

  • (Score: 4, Insightful) by jmorris on Saturday June 20 2015, @03:21AM

    by jmorris (4844) on Saturday June 20 2015, @03:21AM (#198534)

OK, somebody explain why we need yet another 'portable', 'universal' bytecode? I read the document, but is there really anything here that a JVM doesn't already do, with close to two decades of performance tuning and security hardening already invested in it? And I write that as somebody who loathes Java.

    Seems the only proposed advantage is it gets to reuse some of the existing JavaScript engine, assuming they all are similar enough internally.

Considering that everybody is obsessed with 'mobile' these days, and that means Google and Android, which is already based on Java, it seems a good question to be asking about now. The big question is whether Android browsers implement their JS engines in native code or bytecode. It would seem a bit daft to burn battery compiling this down to 'native' Java-based bytecode instead of just sending that in the first place.

    And yes, at the risk of wandering offtopic, the future is Android, we passed peak iOS already and it is destined to slowly fade to the customary 10% Apple share; mostly concentrated in a few wealthier cities of the 1st World. A few more percent might end up on something like FirefoxOS or something else oddball, but unless the landscape gets shook up, and pretty soon, the vast majority will be running Android. Eventually iOS will, like Microsoft already seems on the verge of doing, be forced to install and run .apk files.

    • (Score: 0) by Anonymous Coward on Saturday June 20 2015, @05:18AM

      by Anonymous Coward on Saturday June 20 2015, @05:18AM (#198548)

Ask Google how it likes Oracle to cash in on Java APIs. Ask Microsoft how it liked their lawsuit about Java. Javascript is at least not owned by patent-happy yobbos.
That said, I utterly despise javascript, but it is the language of the most portable, universal VM -- browsers. Seriously, the JVM is not half as popular a VM as browsers are. As far as I recall, the best trick of asm.js is that it was implemented in such a way that it could even run on older browsers, which weren't built to support it.
      As for peak iOS... no, just no. Apple has rabid and vendor-locked in fans. I'm not saying it's a bad product, but they have a nice, stable target.

      • (Score: 2) by jmorris on Saturday June 20 2015, @06:48AM

        by jmorris (4844) on Saturday June 20 2015, @06:48AM (#198562)

        Ask Google how it likes Oracle to cash in on Java APIs.

        A good point but those lawsuits look to have quieted down of late. And I think they were mostly fits of rage because Google only kinda used Java (Dalvik) and cut Oracle out of the game.

        As for peak iOS... no, just no. Apple has rabid and vendor-locked in fans.

        They hover at 20% now, when the next billion smartphones come online (finally replacing featurephones) across the world do you really think very many of them are going to run iOS? So where do you see their growth opportunity? Because when I do the math a stable base of locked in users in a market rapidly growing at the bottom end means a drop in market share. Everybody in the 1st world who wants one already has one, only upgrades are going to be happening for them so plenty of insanely great margins for Apple but no increase in installed base. A few newly rich in China are grabbing them now that they have competitive sized screens but how many are grabbing the super cheap China only handsets for every iPhone sold there? Don't be deceived by the fact 'everybody you know' has an iPhone. You probably live in the 1st world and more than likely in a big city, for example in the U.S. Apple still commands a 40% share and in the big cities probably even more. But the world is a big place and mostly far too poor to buy Apple's luxury goods. Unlike designer fashions, there ain't no bootleg Apple for sale at the flea markets of the world.

        The very last argument for Apple was that developers made more money in the Apple store because of the greater average wealth of their users and their greater willingness to spend money on apps and in-app purchases but even that is no longer true; quantity has a quality of its own. And once apps appear first on Android and eventually make their way to iOS all of the assumptions change.

        • (Score: 1) by Pino P on Saturday June 20 2015, @02:41PM

          by Pino P (4721) on Saturday June 20 2015, @02:41PM (#198687) Journal

          Ask Google how it likes Oracle to cash in on Java APIs.

          A good point but those lawsuits look to have quieted down of late.

          That's because last time I checked Google was waiting to hear whether the Supreme Court of the United States will take up API copyrightability. See the most recent briefs [eff.org]. If not, it has to go back to the district court for a ruling on whether use of an API is a fair use.

          when the next billion smartphones come online (finally replacing featurephones) across the world

          The last time I checked, carriers made it tricky [slashdot.org] to use a smartphone without automatically getting a data plan added at an additional cost of hundreds of USD extra per year. Or is this sort of "cramming" forbidden in the countries where "the next billion smartphones" are expected to come into use?

          Unlike designer fashions, there ain't no bootleg Apple for sale at the flea markets of the world.

          Apple alleges in Apple v. Samsung [wikipedia.org] that Android itself is "bootleg Apple".

          The very last argument for Apple was that developers made more money in the Apple store because of the greater average wealth of their users and their greater willingness to spend money on apps and in-app purchases but even that is no longer true

The perception started when Google hadn't yet begun to process payments in all countries where devices were being sold [stackexchange.com]. But if a shift in buying habits has occurred since then, I'm interested in reading more about it. When and to what extent has this changed?

    • (Score: 3, Insightful) by prospectacle on Saturday June 20 2015, @06:30AM

      by prospectacle (3422) on Saturday June 20 2015, @06:30AM (#198560) Journal

      Why do we need this? It's pretty simple, in my (extremely biased and simplistic) opinion:

Browser-based applications are easier for developers to distribute and update, and easier for users to download and run, than anything else. That makes them extremely popular in the literal sense that people use them a lot.

All the functional limitations, performance issues, security concerns, standardisation problems, and the signal-to-noise ratio of websites are overshadowed by the simple fact of fast, easy access and fast, easy distribution. Someone can run your new program within a few seconds of first learning of its existence, and then decide to bookmark it or forget about it. Nothing else can compete with a web browser in this area.

      Plugins are not always installed and are usually single-vendor, so native (to the browser) code is more likely to be run.

      Whether users are too trusting of web-apps, or developers are too likely to use a web-app when a different format would be more suited to their purpose, the key point is they're made and used a lot. This will probably continue. I realise many object to this trend, but I don't see it reversing any time soon, quite the opposite.

      So if they can improve the performance of the scripting engine, and allow people to write code in a variety of languages that work equally well on said engine, then millions of users and hundreds of thousands of developers will have an easier time every day.

      --
      If a plan isn't flexible it isn't realistic
      • (Score: 3, Interesting) by jmorris on Saturday June 20 2015, @07:01AM

        by jmorris (4844) on Saturday June 20 2015, @07:01AM (#198565)

        This thing is stated as intending to diverge from stock JS so the only reason it isn't going to require a plugin is that Firefox, Google and Microsoft apparently intend to just bundle the VM into their browser products. Since they account for essentially 100% of the browser market at present this will be a seamless thing if it comes off as planned. But so would bundling a JVM into the browser and not requiring the downloading of a plugin. See the problem I'm having understanding why this is required? Several production grade JVMs already exist, have been tested in the real world for years and have had a multitude of security holes patched. Compared to a new effort with none of that track record.

And in the end it is all pointless anyway. This is just the Java dream yet again, of write once, run anywhere. It never works. The only way to come close is to assume a least-common-denominator system, and those are always too limiting. Next thing you know you have to add platform-specific hooks. Some have mice, some touch, some multi-touch. Some have other sensors. iOS exposes different APIs than Android, Windows is different from OSX and from Linux. So more platform-specific code, or massive frameworks to abstract it away... and those have their own problems. And so on. Bottom line is that if each platform didn't have something unique to it, there probably wouldn't be so many platforms in the first place.

        • (Score: 2) by prospectacle on Saturday June 20 2015, @07:43AM

          by prospectacle (3422) on Saturday June 20 2015, @07:43AM (#198574) Journal

          You may be right about existing VMs. I don't know enough about them.

Would the JVM be able to run javascript efficiently, so that the new system could be backwards compatible with old web pages? I'm assuming this proposed WASM engine will, once released, compile javascript to bytecode if the page contains javascript (and maybe do the same for several other languages), or run bytecode directly if the page contains bytecode.

          Regarding the "dream of java" that never works, I would argue that javascript has gotten far closer to realising this ideal than java ever did (or ever will). Of course it's far from perfect.

          Regarding platform differences, they definitely exist, and create all kinds of work, but people work around them. Many web pages will adapt to the resources available and some are even quite good at doing this. The size (and amount, and resolution) of what's displayed on the screen at once, the user input that's required (whether from touch, swipe, type, etc), the framerate, and layout might all be adjusted according to your screen size, effective processor performance, graphics power, input hardware, etc.

Whether or not we have a bytecode interpreter for browsers is probably not going to affect whether or not the web remains the "Write once, run anywhere" platform of choice, but it might make the performance better.

I don't know if existing VMs would be better suited, but I suspect one of the reasons to make a new one (besides novelty and PR value) is to make sure it can be optimised to support existing javascript, DOM operations, etc.

          --
          If a plan isn't flexible it isn't realistic
        • (Score: 1) by Pino P on Saturday June 20 2015, @02:45PM

          by Pino P (4721) on Saturday June 20 2015, @02:45PM (#198690) Journal

          Firefox, Google and Microsoft apparently intend to just bundle the VM into their browser products. Since they account for essentially 100% of the browser market at present

          Apple accounts for 100 percent of the browser engine market on iOS. Did you mean that iOS itself accounts for "essentially" 0 percent of the overall market?

          Several production grade JVMs already exist

          How much do these JVMs' publishers pay Oracle to license Java?

          • (Score: 2) by jmorris on Saturday June 20 2015, @03:27PM

            by jmorris (4844) on Saturday June 20 2015, @03:27PM (#198709)

            Actually Apple allows browsers into the App Store now. So you can get Chrome, Opera, etc. but not Firefox (yet) on iOS. On the desktop you can get both Chrome and Firefox along with a host of others. You just can't get IE anymore and probably won't have whatever Project Spartan ships as either.

But yeah, Apple isn't exactly downstream with WebKit anymore, so if they decided to be difficult they could be the turd in the punchbowl; you have a valid point in that I ignored that possibility. However, I doubt they would; if everyone else adds support for a new web standard they will follow. If they already have Moz Corp, Microsoft and Google on board, expect Apple to start contributing soon.

            As for Java, at this point I suspect Oracle would bust a nut at the offer to get a JVM into every browser even if it didn't include the whole 'Java' API set, only the VM and bindings to the DOM. Perhaps the fatal fight would be Google wanting their alternate VM?

            • (Score: 2, Informative) by Pino P on Saturday June 20 2015, @04:41PM

              by Pino P (4721) on Saturday June 20 2015, @04:41PM (#198731) Journal

              Actually Apple allows browsers into the App Store now.

              The last time I checked, third-party web browsers for iOS, such as Chrome and Opera, were shells around the UIWebView or (since iOS 8) WKWebView class shipped with iOS. Both of these classes are Apple WebKit. You can't get Firefox because Mozilla doesn't want to dilute the "Firefox" brand with a browser not based on Gecko.

              Gecko does not run on iOS because iOS policy prohibits Spidermonkey and all other third-party JIT engines. In iOS, all non-Apple code executes under a strict W^X policy. Only the system executable loader is allowed to populate an executable page. A desktop application can ask the memory manager to flip a page from writable and not executable to executable and not writable, and a JIT engine will flip a page after filling it with code. But this operation is forbidden to non-Apple executables on iOS. This means no HotSpot JVM, no CLR, no Google V8, no Google Native Client, and no Mozilla Spidermonkey.

        • (Score: 2) by JNCF on Saturday June 20 2015, @06:36PM

          by JNCF (4317) on Saturday June 20 2015, @06:36PM (#198765) Journal

          From Eich:

          It’s crucial that wasm and asm stay equivalent for a decent interval, to support polyfilling of wasm support via JS. This remains crucial even as JS and asm.js evolve to sprout shared memory threads and SIMD support. Examples of possible longer-term divergence: zero-cost exceptions, dynamic linking, call/cc. Yes, we are aiming to develop the Web’s polyglot-programming-language object-file format.

It doesn't need wide acceptance in the beginning; if your browser supports ECMAScript, then WebAssembly will work. Not that you could have this with a JVM. [github.io]

  • (Score: 0) by Anonymous Coward on Saturday June 20 2015, @09:31AM

    by Anonymous Coward on Saturday June 20 2015, @09:31AM (#198594)

    Binary representations for asm.js are already in use, commonly called deflate / gzip. Can we reverse compile this bytecode into javascript within the browser in order to inspect it? Otherwise there's no point to it really, is there?

    • (Score: 1) by Pino P on Saturday June 20 2015, @02:52PM

      by Pino P (4721) on Saturday June 20 2015, @02:52PM (#198693) Journal

      Binary representations for asm.js are already in use, commonly called deflate / gzip.

      If this bytecode is as efficient as its proponents intend it to be, it will be smaller after gzip than asm.js after gzip. Consider that ordinary JavaScript code that has been minified and gzipped is smaller than the same JavaScript code that has been gzipped without minification.

      Can we reverse compile this bytecode into javascript within the browser in order to inspect it?

      No, but you can block it as non-free. Richard Stallman of the Free Software Foundation wrote an essay about this a few years ago, titled "The JavaScript Trap" [gnu.org]. The FSF's GNU project has produced LibreJS [gnu.org], a Firefox extension to block execution of scripts that aren't marked in a machine-readable manner [gnu.org] as being available under a free software license.
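The machine-readable marking LibreJS looks for is, roughly, a stylized comment pair wrapping the script; the general shape is sketched below, with the magnet link that identifies the license text deliberately left out rather than guessed at.

    // @license magnet:?xt=urn:btih:...&dn=gpl-3.0.txt GPL-3.0
    // ...the page's JavaScript goes here...
    // @license-end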

    • (Score: 0) by Anonymous Coward on Saturday June 20 2015, @04:53PM

      by Anonymous Coward on Saturday June 20 2015, @04:53PM (#198739)

      >Can we reverse compile this bytecode into javascript within the browser in order to inspect it?

      Yes you can. For backward compatibility's sake, there will be a javascript webasm -> ASM.js decoder.
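What that decoder will look like is still up in the air; purely as an illustration of the polyfill idea (every name below is hypothetical, and nothing like decodeWasmToAsmJS ships anywhere yet):

    // Hypothetical polyfill flow: fetch the .wasm bytes, translate them back
    // to asm.js source text in JavaScript, then link the result exactly like
    // a hand-written asm.js module. Assumes the decoder returns the text of
    // a function expression taking (stdlib, foreign, heap).
    fetch("module.wasm")
      .then(function (response) { return response.arrayBuffer(); })
      .then(function (bytes) {
        var asmSource = decodeWasmToAsmJS(bytes);   // hypothetical decoder
        var makeModule = new Function("return (" + asmSource + ");");
        var exports = makeModule()(window, {}, new ArrayBuffer(0x10000));
        console.log(Object.keys(exports));          // the decoded module's exports
      });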

  • (Score: 3, Insightful) by mtrycz on Saturday June 20 2015, @12:13PM

    by mtrycz (60) on Saturday June 20 2015, @12:13PM (#198629)

Am I just paranoid, or is this the most absurd idea I have come across since, idk.

    --
    In capitalist America, ads view YOU!
  • (Score: 2) by kaszz on Saturday June 20 2015, @07:18PM

    by kaszz (4211) on Saturday June 20 2015, @07:18PM (#198777) Journal

Adding another virtual machine (VM) that will interpret bytes as opcodes, just like Java. Why would that be a good idea? It will throw a lot of security and obfuscation issues at everybody.

Isn't the intention from the start that HTML is used to present content? Javascript fixes things that can't be done with HTML, like making a page respond to user actions without involving the server it came from. And the Java VM is there for when you need advanced stuff that is in essence a binary downloaded and run directly in the browser.

So if Javascript is the problem, let's not worsen the situation by adding yet another obfuscating binary VM. Why not instead replace Javascript with a decent scripting language that has a better structure, like pre-declared variables, and works consistently across platforms, and get away from this recursive patching of previously done "oops"?

    • (Score: 0) by Anonymous Coward on Saturday June 20 2015, @07:56PM

      by Anonymous Coward on Saturday June 20 2015, @07:56PM (#198785)

      Do you even understand what this is about?

      but rather replace Javascript with a decent scripting language

      How many of the browsers in use are going to run that new scripting language natively with even the same performance as Javascript?

      So you take your new scripting language whatever it is and compile it to Javascript "asm", since that's what browsers are running faster and faster (whether you or I or everyone else likes it or not).

      Enjoy.

  • (Score: 2, Interesting) by jorl17 on Sunday June 21 2015, @03:44AM

    by jorl17 (3747) on Sunday June 21 2015, @03:44AM (#198929)

    So what we're seeing in many software projects, and in particular in Firefox, is the trend towards a microkernel-like system. Now we've got processes, sandboxes, IPC all over the place. It _IS_ "a microkernel", and I find this amazing!