Arducam Reveals Hawk-eye, a 64MP Raspberry Pi Camera:
Arducam's latest Raspberry Pi camera module, Hawk-eye, is now available for pre-order, somehow cramming 64 megapixels into a sensor measuring just 7.4mm x 5.55mm. Its lens has full autofocus, a maximum aperture of f/1.8, and sees an angle of view of 84 degrees - the same as a 24mm lens on a full-frame camera.
Of course, all those megapixels mean there's plenty of opportunity to crop into your images or print them on huge pieces of paper - Sony's A7R IV currently takes the crown for the highest resolution full-frame (24mm x 36mm) mirrorless camera at 61MP, while Nikon and Canon top out at around the 45MP mark. Fujifilm will sell you a 102MP camera, but that one uses a much larger 43.8mm x 32.9mm medium format sensor.
Arducam's new device uses the same libcamera library, ribbon connector, and dimensions as the official Raspberry Pi Camera Module 2.1, so it can slot into existing Pi camera setups, and you can use up to four of them with a single board to create a multiplexed depth-mapping system. The camera can capture still images at up to 9152x6944 pixels on a Raspberry Pi 4 or Compute Module 4 (16MP on older boards and Zeros). Video tops out at 1080p30 on a Raspberry Pi, though other boards may be able to push the sensor higher - up to 9152x6944 at 2.7 frames per second.
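For those who want to try it, here's a minimal capture sketch using the Picamera2 library; this assumes Arducam's 64MP driver and tuning file are installed on the Pi, and it is untested against the actual module:

```python
# Minimal full-resolution still capture sketch using the Picamera2 library.
# Assumes a Raspberry Pi 4 with Arducam's 64MP driver and tuning installed;
# untested against this specific module.
from picamera2 import Picamera2

picam2 = Picamera2()
# Request the full 9152x6944 still mode reported for the Hawk-eye sensor.
config = picam2.create_still_configuration(main={"size": (9152, 6944)})
picam2.configure(config)
picam2.start()
picam2.capture_file("hawkeye_full_res.jpg")  # JPEG written to disk
picam2.stop()
```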
(Score: 2) by Freeman on Thursday April 28 2022, @04:33PM (12 children)
Unfortunately, I wasn't able to find a kit that would let me use it as a drop-in camera replacement. It would be kind of cool to build a portable camera around that lens, but it would take a good bit more time and money than I care to invest. I still might pick one up along with a cheap tripod mount, just to play with. Assuming they don't all instantly sell out and end up on eBay at a 3x markup . . .
Joshua 1:9 "Be strong and of a good courage; be not afraid, neither be thou dismayed: for the Lord thy God is with thee"
(Score: 1, Interesting) by Anonymous Coward on Thursday April 28 2022, @04:53PM (11 children)
I'm more interested in the lens distortion and the thermal noise in the image from packing so many photosites into a sensor that small.
(Score: 5, Interesting) by Mojibake Tengu on Thursday April 28 2022, @05:31PM (10 children)
Lens distortion, including chromatic aberration (different colors refracting differently), is easy to compensate for in software using pre-computed transformations derived from proper calibration images; thermal noise can be addressed in hardware (direct cooling with a Peltier device).
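As a rough illustration of the software side, a minimal sketch of OpenCV's standard calibrate-then-undistort workflow (the chessboard size and file names here are hypothetical):

```python
# Minimal sketch of calibration-based distortion correction with OpenCV.
# Chessboard size and image file names below are hypothetical.
import glob
import cv2
import numpy as np

CORNERS = (9, 6)  # inner corners of the calibration chessboard
objp = np.zeros((CORNERS[0] * CORNERS[1], 3), np.float32)
objp[:, :2] = np.mgrid[0:CORNERS[0], 0:CORNERS[1]].T.reshape(-1, 2)

obj_points, img_points = [], []
for fname in glob.glob("calib_*.jpg"):  # calibration shots of the chessboard
    gray = cv2.imread(fname, cv2.IMREAD_GRAYSCALE)
    found, corners = cv2.findChessboardCorners(gray, CORNERS)
    if found:
        obj_points.append(objp)
        img_points.append(corners)

# Solve for the camera matrix and distortion coefficients once...
_, mtx, dist, _, _ = cv2.calibrateCamera(
    obj_points, img_points, gray.shape[::-1], None, None)

# ...then every subsequent frame can be corrected with a cheap remap.
img = cv2.imread("distorted.jpg")
cv2.imwrite("undistorted.jpg", cv2.undistort(img, mtx, dist))
```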
The Raspberry Pi's thermal design has been horrible in every generation, though; I avoid the platform for exactly that reason.
The edge of 太玄 cannot be defined, for it is beyond every aspect of design
(Score: 1) by fustakrakich on Thursday April 28 2022, @07:07PM (1 child)
Where are our "flat lenses" [ieee.org]?
Politics and criminals are the same thing...
(Score: 0) by Anonymous Coward on Thursday April 28 2022, @08:58PM
Flat lens for flat sensor or curved sensor for curved lens.
(Score: 1, Informative) by Anonymous Coward on Thursday April 28 2022, @07:30PM (7 children)
That's true for cooling the sensor block to prevent overall heat build-up, but it doesn't change the physics. [wikipedia.org] Some of the problems are mitigated by using less densely populated sensors with larger photosites: each photosite collects more photons (reducing relative shot noise), and the source follower can be larger (reducing 1/f noise). So a 7.44x5.55mm sensor is never going to equal the noise performance or usable dynamic range of the 24x36mm Sony sensor mentioned in the summary. The noise performance of this 64MP sensor's image, downsampled by half in each dimension, would approximate an image from a 16MP sensor of the same physical dimensions. There are massive tradeoffs in usable dynamic range and, for motion imaging, readout time. For me, it's these inherent flaws that make this an interesting camera.
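The averaging effect is easy to see in a quick simulation; a minimal sketch with synthetic Poisson shot noise only (real sensors add read noise and more):

```python
# Minimal simulation of how 2x2 binning trades resolution for SNR.
# Synthetic Poisson shot noise only; real sensors add read noise etc.
import numpy as np

rng = np.random.default_rng(42)
signal = 100.0  # mean photons collected per small photosite

# A "64MP-style" frame of small photosites over a uniform grey scene.
frame = rng.poisson(signal, size=(1000, 1000)).astype(float)

# Downsample by averaging each 2x2 block -> quarter the pixel count.
binned = frame.reshape(500, 2, 500, 2).mean(axis=(1, 3))

snr = lambda a: a.mean() / a.std()
print(f"per-pixel SNR, full res: {snr(frame):.1f}")   # ~ sqrt(100) = 10
print(f"per-pixel SNR, binned:   {snr(binned):.1f}")  # ~ 2x better, ~20
```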
(Score: 0) by Anonymous Coward on Thursday April 28 2022, @11:43PM (6 children)
You misunderstand the issue. Yes, there is physics at play, but when you increase the number of photosites, the noise doesn't grow quadratically with the count. You're unlikely to want to blow the picture up so large that the extra pixels are even visible: 150dpi or thereabouts is about the lowest print density people will accept, and at 150dpi a 9152x6944 image comes out to roughly 61" x 46". It's rare that anybody would want to blow an image up that far. So you get a reduction in noise by averaging pixels during the down-conversion process. With that many pixels, you'd likely see 4 or more photosites being averaged together, reducing the impact of any given pixel being affected by noise.
If you zoom in to 100% and look around, the per-pixel noise probably isn't much better than it was 20 years ago, but with all the extra pixels there's usually no reason to do so. You can zoom in far less to get an image of the same size as in the past, and you'll see a significant reduction in noise because more photosites now cover the same portion of the image.
If you multiply the amount of cropping by the actual ISO rating of the image, you can estimate roughly how much noise you're gaining due to the decreased number of photosites being averaged.
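For illustration, the print-size arithmetic above in a few lines of Python:

```python
# Maximum print size of a 9152x6944 image at a given print density.
W_PX, H_PX = 9152, 6944

for dpi in (150, 300):
    print(f"{dpi} dpi: {W_PX / dpi:.1f}\" x {H_PX / dpi:.1f}\"")
# 150 dpi: 61.0" x 46.3"  (the ~61" x 46" figure above)
# 300 dpi: 30.5" x 23.1"  (a more typical photo-print density)
```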
(Score: 2, Interesting) by anubi on Friday April 29 2022, @08:54AM (5 children)
Let me run this up the flagpole ...
Let's say I want a high resolution image, but I have a low resolution camera... say 640x480. But it takes 60 captures/sec.
Is there some neat math trick I can use to take thousands of low res images to make one high res image?
There was a little thing I did a long time ago: measuring the characteristics of a wire-bonding piezo transducer using a CA3306 6-bit flash ADC, taking tens of thousands of readings over the course of a weld to integrate the noise out, so I could get a good profile of transducer impedance during the weld.
It worked great. The noise was actually my friend by providing dithering for that crude ADC I was using.
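For illustration, a minimal sketch of that dither-and-average trick: a coarse 6-bit quantizer plus roughly one LSB of noise, with the true value recovered by averaging (all numbers hypothetical):

```python
# Minimal sketch of noise dithering: averaging many coarse 6-bit readings
# recovers precision well below one quantization step. Values hypothetical.
import numpy as np

rng = np.random.default_rng(7)
true_value = 37.3        # the "real" signal, in ADC counts (0..63 range)
noise_std = 1.0          # analog noise of ~1 LSB acts as natural dither

# One noisy 6-bit conversion: add noise, then round and clamp to 0..63.
def convert(n_samples):
    noisy = true_value + rng.normal(0.0, noise_std, n_samples)
    return np.clip(np.round(noisy), 0, 63)

print("single sample:      ", convert(1)[0])           # only whole counts
print("mean of 10k samples:", convert(10_000).mean())  # ~37.3 recovered
```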
I can't help but think someone has done something similar for cameras - that is, increasing resolution by using lots of captures. Maybe even to the point of identifying the pixels of one object moving against a static background, using less critical imaging tech.
I know this is being done... I am trying to build some ultra-low-power stuff; I have time, but I don't have a lot of electrons.
"Prove all things; hold fast that which is good." [KJV: I Thessalonians 5:21]
(Score: 4, Informative) by Freeman on Friday April 29 2022, @01:25PM
Apple does that sort of thing with the camera on their newer iPhones, except they're using a high-res camera and a few shots to compensate.
This is also done on the Pixel phone camera (good write-up on what happens): https://www.dpreview.com/reviews/google-pixel-2/3 [dpreview.com]
Joshua 1:9 "Be strong and of a good courage; be not afraid, neither be thou dismayed: for the Lord thy God is with thee"
(Score: 2) by Freeman on Friday April 29 2022, @01:28PM
Also, I'm quite certain that kind of tech has diminishing returns: you can't get what you don't have. You can get better pictures, but only to a certain point, after which you just need a better sensor (which may itself benefit from the same technique). Also, a DSLR can take a 360-degree photo by stitching multiple high resolution photos together, which has been done for quite some time in astrophotography.
Joshua 1:9 "Be strong and of a good courage; be not afraid, neither be thou dismayed: for the Lord thy God is with thee"
(Score: 4, Informative) by hubie on Friday April 29 2022, @10:43PM (2 children)
Under certain circumstances, yes you can (nice intuition, by the way). It is called super resolution.

One way to think about it, or at least the way I find it useful to think about it: instead of thinking about a scene being projected onto a CCD, work in the opposite direction and think about the CCD being projected onto the scene, so you'd have a grid of squares falling on your scene. The pixel value in your picture is basically the average brightness of whatever that pixel falls onto. Now imagine that you have a pixel that lands right on a black/white boundary, with the boundary right down the middle of the pixel. If an all-white pixel would give you a value of 100, and an all-black pixel 0, you'd get a value of 50 in this case. Now move your pixel a little bit over so that the white part covers 3/4 of the pixel, and you'd get a value of 75.

You can imagine that by moving your pixel around a bit, you can tease out that edge in finer detail by taking all of your individual pictures together. So if you have a bunch of lower resolution images that are dithered around a bit, you can pull out a higher resolution image than you can from one picture.
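For the curious, here's a minimal 1-D sketch of that idea: a synthetic black/white edge, a hypothetical 4:1 coarse sensor, and four captures dithered by quarter-pixel steps, interleaved into a finer reconstruction:

```python
# Minimal 1-D shift-and-add super resolution sketch: a sharp edge sampled
# by a coarse "sensor" at several sub-pixel offsets. Numbers hypothetical.
import numpy as np

FINE = 400          # fine-grid samples in the underlying scene
BIN = 4             # each sensor pixel averages BIN fine samples
scene = np.where(np.arange(FINE) < FINE // 2, 0.0, 100.0)  # black -> white

def capture(offset):
    """One low-res frame: average BIN fine samples per pixel, shifted."""
    shifted = scene[offset : offset + FINE - BIN]
    usable = (len(shifted) // BIN) * BIN
    return shifted[:usable].reshape(-1, BIN).mean(axis=1)

# Four captures dithered by 0, 1/4, 1/2, and 3/4 of a pixel.
frames = [capture(k) for k in range(BIN)]

# Interleave the dithered frames onto a grid BIN times finer than one frame.
n = min(len(f) for f in frames)
recon = np.empty(n * BIN)
for k, f in enumerate(frames):
    recon[k::BIN] = f[:n]

# One frame jumps straight from 0 to 100 at the edge; the interleaved
# reconstruction resolves it in quarter-pixel steps: 0, 25, 50, 75, 100.
edge = n // 2
print("one frame near the edge:", frames[0][edge - 2 : edge + 2])
print("reconstruction:         ", recon[edge * BIN : edge * BIN + 8])
```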
(Score: 3, Informative) by hubie on Friday April 29 2022, @10:45PM (1 child)
Sorry, I meant to put a hyperlink on "super resolution." Here's one resource: https://learnopencv.com/super-resolution-in-opencv/ [learnopencv.com]
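For reference, that page covers OpenCV's dnn_superres contrib module; a minimal sketch, assuming opencv-contrib-python is installed and the pre-trained EDSR_x4.pb weights have been downloaded separately (note this is single-image, learned super resolution rather than the multi-frame averaging described above):

```python
# Minimal sketch of OpenCV's learned super resolution (dnn_superres).
# Assumes opencv-contrib-python is installed and the pre-trained model
# EDSR_x4.pb has been downloaded separately; file names are illustrative.
import cv2
from cv2 import dnn_superres

sr = dnn_superres.DnnSuperResImpl_create()
sr.readModel("EDSR_x4.pb")   # pre-trained 4x EDSR weights
sr.setModel("edsr", 4)       # model name and upscale factor

low_res = cv2.imread("input.jpg")
cv2.imwrite("upscaled_4x.jpg", sr.upsample(low_res))
```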
(Score: 1) by anubi on Saturday April 30 2022, @03:16AM
Thanks... I was thinking down that line after that little stint I did with that CA3306 6-bit ADC. Yup, only 64 states, yet I could tease a reading of better than 1% accuracy for piezo impedance out of it (yes, a long division was involved) by summing thousands of samples of a 60 kHz weld. 8 MHz sampling frequency, IIRC - 8000 samples per msec, and about 20 to 40 msec worth of samples. 68HC000 32-bit processor.
Was I ever torn between doing that in analog or digital. 30 years ago!
Got so rusty I know most of you guys would run rings around me these days with this kind of stuff. So I ask first.
Thanks for the link. I do not even know what to call what I am trying to do... I called it "integrating the noise out".
And I had a LOT of quantizing noise. So much noise that any sample, by itself, was useless. But tens of thousands of samples gave a useful reading.
"Prove all things; hold fast that which is good." [KJV: I Thessalonians 5:21]
(Score: 0) by Anonymous Coward on Friday April 29 2022, @09:33PM
If you can only get 1080p30, that's pretty damn low resolution for anything interesting, video-wise.