posted by LaminatorX on Saturday December 27 2014, @04:25AM
from the digital-negative dept.

Android Lollipop will include a feature that techrepublic.com's Jack Wallen believes could be a game changer for certain people. That feature? RAW image support.

When you snap a shot with your Android camera, the internal software compresses the image into a .jpg file. To the untrained, naked eye, that photo usually looks pretty spectacular. The thing is, what you see is what you get. You can't really manipulate that photo on any low level. It's compressed and saved in a read/write format, so the images can be more easily edited with a bitmap editor (such as The Gimp or Photoshop).

With RAW images, the data has been minimally processed from the image sensor. Many consider RAW images to be the digital equivalent of the old school negative. These RAW images will have a wider dynamic color range and they preserve the closest image to what the sensor actually saw.

  • (Score: 0, Offtopic) by Jeremiah Cornelius on Saturday December 27 2014, @04:50AM

    by Jeremiah Cornelius (2785) on Saturday December 27 2014, @04:50AM (#129397) Journal

    Dare I say what you are thinking? If there were raw images from every Android phone, every day would be another fappening....

    --
    You're betting on the pantomime horse...
  • (Score: 5, Interesting) by frojack on Saturday December 27 2014, @06:47AM

    by frojack (1554) on Saturday December 27 2014, @06:47AM (#129408) Journal

    I don't think most android cameras warrant dealing with raw images. There are probably two cameras out there that run android and have the lenses and sensor sizes that are remotely comparable to larger format cameras. No need to put lipstick on the pigs.

    --
    No, you are mistaken. I've always had this sig.
    • (Score: 5, Insightful) by lentilla on Saturday December 27 2014, @07:10AM

      by lentilla (1770) on Saturday December 27 2014, @07:10AM (#129410)

      I don't think most android cameras warrant dealing with raw images.

      True that; however, that isn't sufficient justification in my mind for imposing what is essentially a technical limitation on a system.

      A long time ago, a single digital photo might fit on a floppy disk. For practicality's sake, they needed to be compressed to be "usable" for the vast majority of us. That limitation on file size doesn't (automatically) exist anymore - whilst we might want to email a smaller file, most people are going to want to archive the best quality photo possible. After all, computers are only going to get faster and better - but those moments that we commit to a photograph aren't going to ever happen again.

      By providing RAW images, we remove another impediment to our quest for "perfect" photos. The resultant image can only be as good as the lenses and sensors - and conversely, the lenses and sensors are limited by the resultant image. One holds the other back. Or, put another way, if we remove what is essentially an artificial limitation, then the other factors (in this case the lenses and sensors) have a great deal of headroom for improvement.

      Good enough today doesn't mean we shouldn't innovate further into tomorrow - horses were fast enough for our grandparents but we still invented cars. It's hard enough battling the unyielding laws of physics and economics (true practical limitations) without additionally hobbling ourselves with arbitrary, artificial and unnecessary limitations as well.

      • (Score: 2) by urza9814 on Wednesday December 31 2014, @01:19PM

        by urza9814 (3954) on Wednesday December 31 2014, @01:19PM (#130509) Journal

        A long time ago, a single digital photo might fit on a floppy disk. For practicality's sake, they needed to be compressed to be "usable" for the vast majority of us. That limitation on file size doesn't (automatically) exist anymore

        The problem as I understand it is that RAW isn't really a format at all. We still need to convert it to something to make it useful.

        Years ago, I loaded CHDK (alternative firmware) on my Canon digital camera. One of the features this gave was the ability to save raw files. But it turned out I couldn't find any software to open them, because in order for the editing software to open the raw image, it needs to have a profile of your camera sensor. And I was using some crappy point-and-shoot that wasn't intended to shoot raw photos, so none of these programs knew what to do with them. So the raw photos ended up just cluttering up my memory card until I turned the damn thing off again.
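
        As a rough illustration of why a bare sensor dump is useless without a profile, here is a minimal sketch. The file name, dimensions and the unpacked little-endian 16-bit layout are all assumptions (CHDK's own dumps are typically packed 10- or 12-bit data, so this would not actually open them); the point is that nothing in the file tells you how to arrange the bytes.

            import numpy as np

            # Guessed geometry and layout -- assumptions, not CHDK's actual packed format.
            WIDTH, HEIGHT = 3072, 2304

            # A raw dump is just numbers; the file itself says nothing about resolution,
            # bit depth, packing or Bayer pattern.
            data = np.fromfile("CRW_0001.RAW", dtype="<u2")   # hypothetical file name
            bayer = data.reshape(HEIGHT, WIDTH)               # garbage if the guess is wrong
            print(bayer.min(), bayer.max())                   # still only one colour per pixel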

        most people are going to want to archive the best quality photo possible.

        If you want to archive the best quality possible of an Excel spreadsheet, save it as .xlsx. If you want an archive that will actually still be readable when you go to open it in a decade, save as .csv. Same goes for raw. JPEG may still be a better choice for archiving photos.

    • (Score: 5, Insightful) by wonkey_monkey on Saturday December 27 2014, @11:15AM

      by wonkey_monkey (279) on Saturday December 27 2014, @11:15AM (#129440) Homepage

      There's no lower camera spec where RAW stops being better than JPEG.

      --
      systemd is Roko's Basilisk
    • (Score: 5, Interesting) by q.kontinuum on Saturday December 27 2014, @11:20AM

      by q.kontinuum (532) on Saturday December 27 2014, @11:20AM (#129441) Journal

      Might be true for most Android phones. But in general there are some quite good smartphone cameras available. Full disclosure: I work for Nokia. The Nokia 808 PureView had an amazing camera which, under certain circumstances, took better pictures than a DSLR. The Lumia 920, 930, 1020, 1030... did as well. We don't produce any smartphones anymore since that branch was sold to Microsoft, and fwiw I'm a Linux fanboy, but I loved my 1020 camera. Not sure which relevant technologies were retained in Nokia New Technologies, but afaik we kept all patents, so I am still hoping to see other devices with similar cameras and other OSes. Supporting RAW format might be a crucial enabler for that.

      --
      Registered IRC nick on chat.soylentnews.org: qkontinuum
      • (Score: 2) by kbahey on Saturday December 27 2014, @04:50PM

        by kbahey (1147) on Saturday December 27 2014, @04:50PM (#129484) Homepage

        I have had three Sony Android phones in a row (Sony-Ericsson Xperia X10, Xperia Arc, and now Xperia ZL). The optics and sensor of the camera on all of them are really good, and better than any other Android phone I have seen among family and friends. I only miss optical zoom. Other than that, it has almost completely replaced my use of a standalone camera.

      • (Score: 1) by schad on Saturday December 27 2014, @06:55PM

        by schad (2398) on Saturday December 27 2014, @06:55PM (#129509)

        The Nokia 808 PureView had an amazing camera which, under certain circumstances, took better pictures than a DSLR.

        I don't believe it. A DSLR in the hands of a novice, OK; I can definitely believe that. Most point-and-shoots take better photographs in the hands of a novice than any DSLR would. But even a shitty DSLR in the hands of an expert will beat the pants off any cell phone camera. I don't care how good the camera's optics are. The weak link in photography is not usually the equipment, it's the photographer.

        • (Score: 3, Informative) by q.kontinuum on Saturday December 27 2014, @09:01PM

          by q.kontinuum (532) on Saturday December 27 2014, @09:01PM (#129549) Journal

          I'm not a professional (although I'd like to consider myself not a complete novice either). My DSLR is a Lumix DMC-L10. I've taken some long-exposure shots with it using a tripod, which I couldn't do with the Nokia PureView 808 or Lumia 1020. For portrait photos I'd also usually use the DSLR. But these are planned images.

          However, for everyday situations (point-and-click with wide-angle lens and autofocus), the autofocus worked faster and more reliably than the DSLR's, and the higher resolution of the 1020 definitely paid off. Even for planned panorama images the higher resolution of the 1020 can pay off and lead to better results than the DSLR. Don't overlook that the camera not only has an insane resolution, but also a big sensor (about 1/4 the size of the DSLR's Four Thirds sensor, but approximately 4x the size of e.g. the Samsung Galaxy S4's, and approximately the same size as the Xperia Z3's).

          I'm not claiming the 1020 is superior in all situations, but there are some cases where it is. I'm also not saying I'd dump my DSLR for it, but there are plenty of situations where I didn't bring the DSLR, and in those situations the best camera is the one you brought with you :-)

          --
          Registered IRC nick on chat.soylentnews.org: qkontinuum
  • (Score: 5, Insightful) by lars on Saturday December 27 2014, @07:40AM

    by lars (4376) on Saturday December 27 2014, @07:40AM (#129417)

    If you could take long exposures, you could get some really nice low-light photos from Android phones. Also, if you want to do HDR photos, RAW is nice, but you generally want to take multiple photos of the same scene at different exposure values. As someone who does a lot of low-light photography, I'd much rather be able to control ISO and shutter speed than have RAW.

    • (Score: 5, Interesting) by Bytram on Saturday December 27 2014, @08:35AM

      by Bytram (4043) on Saturday December 27 2014, @08:35AM (#129422) Journal

      Question with respect to HDR: could one just take, say, 10 shorter-exposure photos and use software to extend the dynamic range in the final picture? As I understand it, the challenge lies in choosing settings so that a dark portion of the scene receives enough exposure to make out faint details without the bright parts of the scene getting so over-saturated that the details there are washed out. So, if a pixel in the first of the 10 shots is already pretty well "lit up", you know that by the time you stack up all 10 images, you'd have a washed-out part of the scene.

      I'm by no means a photographer and I'm struggling with the wording here, so please bear with me. If the range of values for a single pixel in a single shot were [0,255] and we took 10 images, then we could effectively add all the values for each pixel together, extending the range to [0,2550], and then use software to normalize things back down to [0,255]. One could even employ a non-linear filter when doing this, so as to provide even more fine-grained control over which part of the scene to emphasize (the "dark" part at a potential loss of detail in the "bright" part, or vice versa). As I understand it, something along these lines is used in astronomy to resolve faint details that would otherwise require a long exposure to capture at all.

      This seems like such an obvious thing to do, yet high dynamic range imaging is still considered a hard problem, so I must be missing something. What am I missing?
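
      A minimal sketch of the summing idea described above, with a synthetic scene standing in for real frames; the frame count, gain and noise level are arbitrary assumptions:

          import numpy as np

          rng = np.random.default_rng(0)
          scene = rng.random((480, 640))            # hypothetical linear radiance in [0, 1]

          def short_exposure(scene, gain=25.5):
              """Simulate one deliberately under-exposed, noisy 8-bit frame."""
              frame = scene * gain + rng.normal(0.0, 2.0, scene.shape)   # read noise
              return np.clip(frame, 0, 255)

          # Ten shots, each in [0, 255], summed into a [0, 2550] range and then
          # normalised back down to [0, 255]; noise partly averages out and no
          # single frame ever saturates.
          frames = [short_exposure(scene) for _ in range(10)]
          stacked = np.sum(frames, axis=0)
          result = np.clip(stacked * (255.0 / 2550.0), 0, 255).astype(np.uint8)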

      • (Score: 5, Interesting) by lars on Saturday December 27 2014, @09:14AM

        by lars (4376) on Saturday December 27 2014, @09:14AM (#129430)

        That's a pretty cool idea, I honestly hadn't thought of that. I'm not sure if most of the software used to make the images would support that, but I might give it a shot. To make it work, you would need a way to keep the ISO down (maybe set a really low EV), and you might have to take hundreds of photos without moving the camera, depending on the lighting. If the camera is taking 1/250 s exposures and you need the equivalent of a 3-second exposure, that's 750 images - quite a lot of data to process, especially in raw format. Still, if you really want a nice image and that's the only camera you have, it could be worth it.

      • (Score: 5, Informative) by TheRaven on Saturday December 27 2014, @10:07AM

        by TheRaven (270) on Saturday December 27 2014, @10:07AM (#129434) Journal

        That's pretty much what the HDR mode on Android does. It takes 10 (I think; possibly varies by model) under-exposed photographs and adds them all together. The approach that the grandparent discusses, taking multiple exposures of different lengths and averaging them, is what earlier HDR systems used, and it had exactly the problem that you imagine. Any one of the longer-exposure pictures can have a saturated value coming back from the CCD, so you are averaging 255 with a lower number, when the real value you should get from the longer exposure is something greater than what the CCD can handle. The other issue is that, because you need longer exposure shots, your total time for the photo is longer, which sucks if anything is moving.

        --
        sudo mod me up
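
        A toy numerical illustration of the saturation problem described above (single pixel, arbitrary units, an 8-bit ceiling assumed):

            import numpy as np

            radiance = 400.0                       # true value, beyond the 8-bit ceiling

            # Averaging several long exposures: every frame clips at 255 first,
            # so the average can never recover the real value.
            long_frames = np.clip(np.full(4, radiance), 0, 255)
            print(long_frames.mean())              # 255.0 -- detail already lost

            # Summing four quarter-length exposures: no single frame clips,
            # and the sum reproduces the true value.
            short_frames = np.clip(np.full(4, radiance / 4), 0, 255)
            print(short_frames.sum())              # 400.0
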
        • (Score: 0) by Anonymous Coward on Saturday December 27 2014, @10:52AM

          by Anonymous Coward on Saturday December 27 2014, @10:52AM (#129437)

          s/CCD/CMOS/g

        • (Score: 2) by Bytram on Saturday December 27 2014, @12:12PM

          by Bytram (4043) on Saturday December 27 2014, @12:12PM (#129448) Journal

          So, too-long exposures over-saturate bright parts of the image. Must the length of an exposure be of fixed duration? Thinking out loud here, but it seems to me that there should be a way to have an exposure complete when a sensor gets near its saturation point, and record how long it took. Repeat several times and integrate over all the exposures.

          So, continuing from above, when a sensor gets a reading of, say, 250, the camera detects imminent saturation, calls *that* exposure done, records how long it took, and then cycles through another one until the desired number of exposures or total duration has been reached?

          Maybe the problem would then be that the dark parts of the image would not even get to register *any* value and thus result in parts of the image being completely black?

          I suppose one could combine several thresholded images with other images of fixed duration and mask out the known oversaturated pixels?
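
          An idealised, noiseless sketch of that "stop each pixel near saturation and record the time" idea, with made-up radiance values and an arbitrary time cap:

              import numpy as np

              rng = np.random.default_rng(1)
              radiance = rng.uniform(0.01, 10.0, (100, 100))   # hypothetical linear scene

              THRESHOLD = 250.0    # stop a pixel's exposure just short of 255
              MAX_TIME = 100.0     # longest exposure any pixel is allowed

              # Time each pixel needs to reach the threshold, capped for very dark
              # pixels, which simply report whatever they accumulated by the cutoff.
              t = np.minimum(THRESHOLD / radiance, MAX_TIME)
              recorded = radiance * t                          # always <= THRESHOLD (or dim)
              estimated_radiance = recorded / t                # value divided by its own exposure time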

        • (Score: 2) by wantkitteh on Sunday December 28 2014, @12:53AM

          by wantkitteh (3362) on Sunday December 28 2014, @12:53AM (#129607) Homepage Journal

          Seriously? That's how Android does HDR? That's not even a vague approximation of how it works; I really hope you've got that wrong, or at least vastly oversimplified it.

      • (Score: 5, Informative) by wantkitteh on Saturday December 27 2014, @12:52PM

        by wantkitteh (3362) on Saturday December 27 2014, @12:52PM (#129453) Homepage Journal

        Quite simply... no.

        In regular photography, under-exposed areas have a low signal-to-noise ratio, and both under- and over-exposed areas use a very small slice of the measurement range in which to store detail. Stacking badly captured images will compound the errors, not fix them. In scientific terms, errors in the choice of physical measurement range can't be mathematically fixed after the fact while preserving accuracy.

        Astronomy is a very different situation in practice. The sensors used are far more sensitive, and their noise characteristics can be determined far more accurately, so noise can be removed from the captured data without guesswork or approximation. Similarly, atmospheric noise can also be measured during capture for accurate removal. Neither of these approaches is practical in regular photography.
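
        One common astrophotography technique along those lines is dark-frame calibration; a bare-bones sketch, assuming the frames are already loaded elsewhere as linear arrays:

            import numpy as np

            def master_dark(dark_frames):
                """Average many exposures taken with the lens covered to estimate the
                sensor's own signal (hot pixels, thermal noise, bias)."""
                return np.mean(np.stack(dark_frames), axis=0)

            def calibrate(light_frame, dark):
                """Subtract the sensor's measured dark signal from a real exposure."""
                return np.clip(light_frame - dark, 0.0, None)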

      • (Score: 2) by urza9814 on Wednesday December 31 2014, @01:26PM

        by urza9814 (3954) on Wednesday December 31 2014, @01:26PM (#130511) Journal

        This is a pretty standard hack to do HDR photos on standard point-and-shoot cameras. There's a lot of software to do it already. The biggest problem is that most of these cameras don't have a way to programmatically alter the exposure values and reshoot, so you end up with a rather blurry end result from moving the camera while you reset. Usually people doing this will only use 3 photos for that reason. CHDK (alternative firmware for Canon cameras) includes some scripts to automate the exposure bracketing and timing to make it a bit easier, and then there are various programs you can use to stack them together (or even something as simple as multiple layers in The Gimp).
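
        For stacking a bracketed set like that, OpenCV's Mertens exposure fusion is one readily available option; a short sketch, with placeholder file names for the three bracketed shots:

            import cv2
            import numpy as np

            # Three placeholder frames from an exposure bracket.
            paths = ["under.jpg", "normal.jpg", "over.jpg"]
            frames = [cv2.imread(p) for p in paths]

            # Align the hand-held shots first, then fuse; Mertens fusion needs no
            # exposure metadata, unlike a true HDR merge.
            cv2.createAlignMTB().process(frames, frames)
            fused = cv2.createMergeMertens().process(frames)   # float image, roughly [0, 1]
            cv2.imwrite("fused.jpg", np.clip(fused * 255, 0, 255).astype("uint8"))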

  • (Score: 3, Insightful) by wonkey_monkey on Saturday December 27 2014, @11:27AM

    by wonkey_monkey (279) on Saturday December 27 2014, @11:27AM (#129443) Homepage

    It's compressed and saved in a read/write format

    As opposed to what? A read-only format?

    A RAW file:

            Is not really an image file

    Err... what? It's no more or less an image file than a PNG, a JPEG, and so on.

    Has at least 8 bits per color for red, green, and blue and 12-bits per X,Y location

    Those are actually pretty low numbers, and they're not defining features. I'm not sure how he's got from 8-bits per colour (it's more likely to be 12-14) to 12-bits per X,Y location, either.

    Is uncompressed

    I thought most RAW formats were losslessly compressed these days.

    Is read-only

    No it's not. It's perfectly plausible to write a RAW file - I mean, the camera's got to do it in the first place, for one thing. But if someone so wished they could write whatever image data they wanted in a RAW format. It would no longer be raw sensor data, but that's for philosophers to argue over.

    --
    systemd is Roko's Basilisk
  • (Score: 2) by Justin Case on Saturday December 27 2014, @02:04PM

    by Justin Case (4239) on Saturday December 27 2014, @02:04PM (#129459) Journal

    PNG was invented what, before my grandkid's mom was born, so why is anyone still using lossy JPG? And before you answer with "PNG doesn't open in $PROG" please note that you are restating my premise: that JPG is being used where PNG should be instead. My question is WHY?

    Personally, I think the camera makers want to be able to lie and claim $BIGNUM megapixels!!! but figure they can blur the image so bad during compression that you can't tell they're outputting more pixels than the sensor originally captured.

    • (Score: 2, Informative) by Anonymous Coward on Saturday December 27 2014, @05:31PM

      by Anonymous Coward on Saturday December 27 2014, @05:31PM (#129498)

      On a display, one pixel contains three subpixels, one each for red, green, and blue. The image formats that I know of record red, green, and blue components for each image pixel.

      In a camera, each pixel senses only one of the three colors. In the usual Bayer arrangement, each square of four camera pixels contains two green pixels, one red pixel, and one blue pixel.

      Exception: Foveon image sensors have stacked red green blue sensors at each pixel.
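
      A toy sketch of the RGGB layout described above, with a synthetic 12-bit mosaic standing in for real sensor data:

          import numpy as np

          rng = np.random.default_rng(2)
          raw = rng.integers(0, 4096, (480, 640))     # fake 12-bit Bayer mosaic

          # RGGB: in every 2x2 block, one red, two greens, one blue.
          red   = raw[0::2, 0::2]
          green = (raw[0::2, 1::2] + raw[1::2, 0::2]) / 2.0
          blue  = raw[1::2, 1::2]

          # Crudest possible "demosaic": one RGB pixel per 2x2 block, quarter resolution.
          rgb = np.dstack([red, green, blue])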

    • (Score: 3, Insightful) by Anonymous Coward on Saturday December 27 2014, @10:12PM

      by Anonymous Coward on Saturday December 27 2014, @10:12PM (#129570)

      Because no one would bother to convert it back with a 'proper' level of jpg compression.

      PNGs are for when you need partial transparency, smooth gradients and high-contrast edges that don't look ugly, e.g. when you're a graphic designer dealing with a client. For general web use they add unnecessary bulk when a good chunk of the country is running on 4 gigs of bandwidth a month.

      Jpg is for when you're posting pictures of your food on twitter or other useless crap.

      99% of images on the internet don't see any improvement from the PNG format. 95% of images from professional photographers don't either; they just think they do.

    • (Score: 2) by wonkey_monkey on Tuesday December 30 2014, @03:57PM

      by wonkey_monkey (279) on Tuesday December 30 2014, @03:57PM (#130230) Homepage

      Because JPEGs are a lot smaller than PNGs with - for "real world" images, at least - little visual loss. That's pretty much it. They're faster to store, they're faster to open, they're faster to transfer.

      but figure they can blur the image so bad during compression that you can't tell they're outputting more pixels than the sensor originally captured.

      JPEG compression does not simply mean "blurred." In any case, what would stop them blurring a PNG?

      --
      systemd is Roko's Basilisk
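
      A quick way to check the size claim for yourself with Pillow; the input file name is a placeholder for any photographic image:

          from io import BytesIO
          from PIL import Image

          img = Image.open("some_photo.jpg").convert("RGB")    # any photographic image

          jpg_buf, png_buf = BytesIO(), BytesIO()
          img.save(jpg_buf, format="JPEG", quality=85)         # lossy, visually close
          img.save(png_buf, format="PNG")                      # lossless, typically several times larger

          print("JPEG:", jpg_buf.tell(), "bytes")
          print("PNG: ", png_buf.tell(), "bytes")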