posted by janrinok on Wednesday March 05 2014, @05:34PM   Printer-friendly
from the Seeking-malware-writers-that-don't-use-C dept.

Detective_Thorn writes:

"Researchers from North Carolina State University have developed a new tool to detect and contain malware that attempts root exploits in Android devices. The tool improves on previous techniques by targeting code written in the C programming language which is often used to create root exploit malware, whereas the bulk of Android applications are written in Java.

The new security tool is called Practical Root Exploit Containment (PREC). It refines an existing technique called anomaly detection, which compares the behavior of a downloaded smartphone application (or app), such as Angry Birds, with a database of how the application should be expected to behave."
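The anomaly detection the summary describes boils down to comparing observed behavior against a profile built from known-good runs. A toy sketch of that general idea, with all names, counts, and thresholds invented for illustration (the paper's actual method is not public here):

```python
# Toy sketch of anomaly detection: compare an app's observed
# system-call counts against a stored "normal" profile.
# All names and numbers are invented for illustration.

def is_anomalous(observed, baseline, tolerance=3.0):
    """Flag the app if any call count deviates too far from baseline.

    observed: {syscall_name: count} from the running app
    baseline: {syscall_name: (mean, stddev)} from known-good runs
    """
    for call, count in observed.items():
        mean, stddev = baseline.get(call, (0.0, 0.0))
        # A call never seen in the baseline is suspicious by itself.
        if stddev == 0.0:
            if count > mean:
                return True
        elif abs(count - mean) / stddev > tolerance:
            return True
    return False

baseline = {"read": (120.0, 15.0), "mprotect": (2.0, 1.0)}
print(is_anomalous({"read": 118, "mprotect": 2}, baseline))  # -> False
print(is_anomalous({"read": 125, "execve": 5}, baseline))    # -> True (unseen call)
```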

  • (Score: 2) by tlezer on Wednesday March 05 2014, @05:50PM

    by tlezer (708) on Wednesday March 05 2014, @05:50PM (#11415)

    I didn't read the article :) but one obvious challenge here is that you need a baseline to compare against, which would take some time. What is normal? How does normal vary considering the wide variety of use cases and player skills? etc.

    • (Score: 1) by ikanreed on Wednesday March 05 2014, @05:56PM

      by ikanreed (3164) Subscriber Badge on Wednesday March 05 2014, @05:56PM (#11418) Journal

      They intend to use existing popular apps in the app store "to establish a database of normal app behavior."

      That's from the press release, the actual methods and mechanics are probably going to be sold for precisely $arm + $leg.

      • (Score: 4, Insightful) by tlezer on Wednesday March 05 2014, @06:07PM

        by tlezer (708) on Wednesday March 05 2014, @06:07PM (#11425)

        Thanks for clarifying (seriously, I do read some of the articles... your nick is apropos, heh)

        I guess I'm still dubious though, for a couple of reasons:
        * false positives quickly turn off a user base (jmho, only personal + business experience informing this)
        * assuming a normal distribution, I'd be interested to know how much of the variation they would handle. They would need to get to at least 3 sigma, IMO, to avoid overwhelming negative feedback
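A quick back-of-the-envelope on that 3-sigma figure, assuming benign-app deviations really are normally distributed (a simplifying assumption, not anything from the paper): the two-sided tail mass beyond k sigma is the fraction of legitimate runs a detector would falsely flag.

```python
# Rough numbers behind the "3 sigma" point: the fraction of benign
# runs a normally-distributed detector would falsely flag at each cutoff.
import math

def false_positive_rate(k):
    # Two-sided tail mass of the standard normal beyond +/- k sigma.
    return math.erfc(k / math.sqrt(2))

for k in (1, 2, 3):
    print(f"{k} sigma: ~{false_positive_rate(k):.4%} of benign runs flagged")
# 3 sigma works out to roughly 0.27%, i.e. about 1 in 370 benign runs.
```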

        • (Score: 4, Interesting) by ikanreed on Wednesday March 05 2014, @06:20PM

          by ikanreed (3164) Subscriber Badge on Wednesday March 05 2014, @06:20PM (#11434) Journal

          Answer: who cares? Big publishers can just talk their way past a false positive, and it's not like Google has a shortage of doe-eyed independent developers trying to strike it rich. A couple dozen walk-aways and angry blog posts aren't going to affect their bottom line for decades.

    • (Score: 3, Informative) by ngarrang on Wednesday March 05 2014, @06:08PM

      by ngarrang (896) on Wednesday March 05 2014, @06:08PM (#11426) Journal

      Maybe you should read TFA.

      The third paragraph states:
      "The new security tool is called Practical Root Exploit Containment (PREC). It refines an existing technique called anomaly detection, which compares the behavior of a downloaded smartphone application (or app), such as Angry Birds, with a database of how the application should be expected to behave."

      They appear to have already done the research for the baseline against which anomalous behavior is compared.

  • (Score: 0) by Anonymous Coward on Wednesday March 05 2014, @06:55PM

    by Anonymous Coward on Wednesday March 05 2014, @06:55PM (#11460)

    Eh? If they are simply comparing to a "known good" source, why not just use a checksum? md5 or sfv comes to mind.
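The checksum idea above, sketched with Python's hashlib (my sketch; sha256 swapped in for the md5 the comment mentions, since md5 is collision-prone). The limitation the replies point out shows up immediately: a digest only proves the bytes on disk match a known release, and says nothing about what the code does at runtime.

```python
# Integrity check by checksum: matches bytes, not behavior.
import hashlib

def file_digest(path):
    h = hashlib.sha256()  # md5 is mentioned above; sha256 is the safer pick
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(8192), b""):
            h.update(chunk)
    return h.hexdigest()

def matches_known_good(path, known_digest):
    return file_digest(path) == known_digest
```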

    • (Score: 1) by bugamn on Wednesday March 05 2014, @07:39PM

      by bugamn (1017) on Wednesday March 05 2014, @07:39PM (#11480)

      Wouldn't a checksum be too limited for this purpose?

    • (Score: 2, Insightful) by neagix on Wednesday March 05 2014, @09:50PM

      by neagix (25) on Wednesday March 05 2014, @09:50PM (#11542)

      Because profiling the usage is a broader way to tackle the problem and helps them build heuristics with machine learning (this is my assumption).

      Ideally, they would encode resources accessed by the app, then run some pattern recognition (RLE in the simplest case) and store this fingerprint for comparison.
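A literal reading of that RLE-fingerprint idea (my sketch, not anything from the paper): collapse the sequence of resources an app touches into (resource, run_length) pairs, and compare fingerprints across runs.

```python
# Run-length encode a trace of accessed resources into a fingerprint.

def rle_fingerprint(accesses):
    """Collapse a sequence of resources into (resource, run_length) pairs."""
    fingerprint = []
    for res in accesses:
        if fingerprint and fingerprint[-1][0] == res:
            fingerprint[-1] = (res, fingerprint[-1][1] + 1)
        else:
            fingerprint.append((res, 1))
    return fingerprint

trace = ["net", "net", "file", "file", "file", "net"]
print(rle_fingerprint(trace))  # -> [('net', 2), ('file', 3), ('net', 1)]
```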

      This method fails when an app uses different resources in different cases, so profiling would not be a good approach there. For example: file managers, console emulators, and generally any advanced app will have highly variable profiles of resource usage, but - guess what - these are not the majority of apps, so they might happily sweep them under the carpet in a commercial implementation of this technique.

  • (Score: 2) by Grishnakh on Wednesday March 05 2014, @07:50PM

    by Grishnakh (2831) on Wednesday March 05 2014, @07:50PM (#11485)

    How is this different from AppArmor, which has been used on some Linux systems for ages now?

    • (Score: 3, Interesting) by jt on Wednesday March 05 2014, @10:59PM

      by jt (2890) on Wednesday March 05 2014, @10:59PM (#11574)

      I think it's meant to take the 140 known-good applications and profile these to establish what a typical well-behaved application looks like, rather than to restrict what an application can do or check whether a given installed application has been compromised.

      So it's fine to call maybeDangerousSystemCall() if it fits a heuristically non-weird pattern of behaviour, but not if it appears apropos of nothing in the middle of unrelated code.
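One way to make "fits the usual pattern" concrete (my sketch, reusing the hypothetical maybeDangerousSystemCall framing: here the context is just the preceding call): learn which adjacent call pairs occur in known-good traces, then flag any call whose immediate context was never seen in training.

```python
# Bigram anomaly detection over call traces: a call is "apropos of
# nothing" if its (previous_call, call) pair never occurs in training.
from collections import Counter

def train_bigrams(good_traces):
    """Count adjacent call pairs across known-good traces."""
    seen = Counter()
    for trace in good_traces:
        seen.update(zip(trace, trace[1:]))
    return seen

def suspicious_positions(trace, seen):
    """Indices of calls that appear in a context never seen in training."""
    return [i + 1 for i, pair in enumerate(zip(trace, trace[1:]))
            if pair not in seen]

good = [["open", "read", "close"], ["open", "read", "read", "close"]]
model = train_bigrams(good)
# "mprotect" never follows "read" in the good traces, so it is flagged,
# and so is the "close" that follows it.
print(suspicious_positions(["open", "read", "mprotect", "close"], model))  # -> [2, 3]
```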

      I can see some merit in this approach. If you've ever looked through object code generated by some compiler, you get used to how it looks, and any hand-crafted code stands out because it just 'looks wrong'. Like when you can spot which of your colleagues wrote some code just by the style. If some code doesn't fit the usual pattern, there's a reasonable chance that it's a) doing something weird as a performance tweak, or b) doing something weird you don't really want it to do.