View Full Version : an even brighter future!



arthurking83
17-07-2013, 7:42am
The one aspect of digital photography that stands out over the past few years ... has been the rapidly advancing quality of images at elevated ISO values, almost across the entire digital camera board.

I remember even just a few years back when the likes of an MFT sized sensor produced very ordinary ISO800 images in terms of noise, and now similarly sized sensors in the current gen Fuji cameras are producing professional quality images at ISO6400!

So what was once the pixel race finally changed lanes and became the quality race, or more specifically .. the high ISO race. Even though other cameras also had an enormous impact with ever increasing high ISO quality, I reckon it was the D3 that turned the game from a Mp focused marketing ploy into one that provided much better image quality at lower light levels.

In some ways this can actually be seen as a negative in terms of photographic advancement .. but that's not really a point to consider in this thread.

What is a point to consider though, is THIS (http://connect.dpreview.com/post/3175393898/aptina-explains-clarity-plus-clear-pixel-technology) new announcement on what appears to make perfectly good sense in some ways.

Basically, the announcement is that of a new image sensor design with an utterly simple idea. We all know that a digital sensor has an RGB layout for its sensor design, where alternating pixels on the sensor are coloured either red, green or blue (with twice as many green pixels as either of the other two colours) .. and hence the green pixels are the luminance reference data.

So with this sensor, which replaces the green filters with clear pixels, they obviously use software to reconstruct the colour green in the image.
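As a toy illustration of the idea (my own sketch, not Aptina's published algorithm): if a clear pixel responds roughly to the sum of what red, green and blue filters would each pass, then green can be recovered by subtracting interpolated red and blue values at the clear site.

```python
# Toy model of green reconstruction at a clear-pixel site.
# Assumption (mine, not Aptina's actual maths): a clear pixel's
# response is roughly R + G + B, so green falls out by subtraction
# once red and blue have been interpolated from neighbouring pixels.
def green_from_clear(clear, red_interp, blue_interp):
    """Estimate the green component at a clear-pixel site."""
    return clear - red_interp - blue_interp

# Example with normalised values: the clear pixel reads 0.9, and the
# neighbours give interpolated red 0.3 and blue 0.2, so green ~ 0.4.
g = green_from_clear(0.9, 0.3, 0.2)
```

The real processing would obviously be far more involved (spectral responses overlap, and the subtraction amplifies noise in the colour channels), but that's the basic shape of it.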

Using clear pixels is not a new concept for digital imaging; Fuji used them for a while on their S series pro cameras (S3, S5 .. etc), but their designs (as per the later Sony types) still used green pixels in the normal manner and added clear pixels as well. In the case of some of the Fuji cameras, the clear pixels were used in a very limited and specific manner .. to protect highlight data only, not really for better low light/high ISO quality.

But what this Aptina mob are now saying is that you can imitate green in the image artificially, and because the dye is removed from those pixels (ie. half the pixels on the sensor), less amplification of the signal data is required, and hence better SNR is achieved. We simply know this as improved ISO quality, more specifically as better high ISO image quality.
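The SNR argument can be put in rough numbers (a back-of-envelope sketch with assumed figures, not Aptina's data): in the shot-noise-limited regime, SNR scales as the square root of the collected signal, so a clear pixel collecting, say, three times the photons of a green-filtered one gains about 0.8 of a stop.

```python
import math

# Shot-noise-limited SNR scales as sqrt(signal). The 3x transmission
# advantage assumed below for a clear pixel over a green-filtered one
# is an illustrative figure, not a measured one.
def shot_noise_snr(photons):
    return photons / math.sqrt(photons)  # equals sqrt(photons)

green_photons = 1000
clear_photons = 3 * green_photons  # assumed transmission advantage

gain_in_stops = math.log2(shot_noise_snr(clear_photons)
                          / shot_noise_snr(green_photons))
# gain_in_stops works out to about 0.79 of a stop, in the ballpark
# of the ~1Ev figure being claimed
```

Real sensors also carry read noise and dark current, so the arithmetic above is the optimistic case.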

Makes you wonder where it will all end..... at some point they will have exhausted all the major avenues of advancement for the Bayer filter design, and the improvement in ISO quality with each new model will not be as great a leap as it once was with the D3.
If this newly announced advancement is truly what they say it is, then most manufacturers would surely turn the proposed advantage (of 1Ev better quality) into at least 2Ev better quality before they introduce it to market .... which means that at current technology levels, we'll be seeing ISO102K at pretty high quality in a couple of years' time.

My interest is to see which (if any) manufacturer in the current list of big producers will be first to use this sort of idea in their cameras.
The downside of the tech, of course, is the need to artificially process the colour green into the image, and processing is costly in terms of power usage ... so battery life, and probably the speed of image creation (ie. fps/buffer fill rates/card write times), will all be impacted in some way where they otherwise wouldn't be.

Just curious as to what others think ... whether this sort of technology is interesting both in terms of reading about it, and in terms of wanting it available (cheaply, obviously!!) as another option ... a camera with the ability to capture images at something silly like ISO51K or ISO102K with the same quality as we get now at ISO6400-25K from current gen cameras.

Steve Axford
17-07-2013, 10:56am
Interesting. Perhaps the human visual system can give some leads. We see in colour (usually), but really only at the fovea, that tiny bit in the middle. There are cones to sense colour in the rest of the eye, but only a few, yet we see colour across our whole field of vision. The trick here is the back end - commonly known as our brain. New eyes can certainly help, as anyone who has got glasses can attest, and without eyes we simply cannot see, but our brain is what makes sense of it all. I suspect that processor design will start to become more important than sensor design in cameras. As you point out, the gains from sensors are getting smaller as the technology matures, but the programming still has a long way to go.

arthurking83
17-07-2013, 9:37pm
Yeah, processing is always as important, if not more so.

I remember reading articles from a few sources back in the early days of the D3 about how part of its massive leap forward in lower noise at higher ISO values was attributed to faster processing .. ie. passing data from sensor to ADC to CPU and so on. I can't verify the exact sources or their validity .. this is just a vague recollection of some info I found interesting at the time.
Apparently it had something to do with the number of data paths from sensor to ADC: prior to the D3, something like 16 or 8 channels were narrowed down to either 2 or 4 because of the limited CPU abilities of the time, whereas the D3's faster processing meant a wider 16 channel data path could be maintained all the way from sensor to ADC to CPU, as the new CPU could keep up with the massive data input from the wider pipeline without overloading itself.

That's about the gist of it(or how I understood it) .. Thom Hogan rings a bell on this info.
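A crude way to see why the number of parallel channels matters (purely illustrative numbers; I don't know Nikon's actual clock rates or channel counts):

```python
# Back-of-envelope readout timing: the same pixel count clocked out
# over more parallel channels takes proportionally less time.
# Both figures below are assumptions for illustration only.
PIXELS = 12_000_000          # roughly D3-class resolution
PIXELS_PER_SEC = 50_000_000  # per-channel pixel clock (assumed)

def readout_time(channels):
    """Seconds to clock the whole frame out over N parallel channels."""
    return PIXELS / (channels * PIXELS_PER_SEC)

t_narrow = readout_time(4)   # 0.06 s over a 4-channel path
t_wide = readout_time(16)    # 0.015 s -> 4x faster with 16 channels
```

Faster readout matters for noise too, since shorter readout windows leave less time for the analogue signal to pick up interference before digitisation.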

So the increasing data processing power in cameras makes for cleaner high ISO images in more ways than just the simple and obvious one of in-camera NR processing.

If you look at the sample images in the link, while it's obvious that the noise levels are dramatically lower, the other side of the story is that the green looks a lot less punchy in the lower noise image. I would also have liked to see more of the 'rendition' chart in the background (if it was a colour checker type of chart).

Steve Axford
18-07-2013, 11:12am
I think the opportunities go far beyond simple processing speed. As the number of pixels stabilises but processor speeds keep increasing, there will be excess processor cycles available for clever processing. Perhaps we can learn something from the human visual system, where all the really clever stuff is done in the back end (our brain). It's hard to know what will happen, but there are certain to be some big new things that go beyond simple hardware speed. As with the thing that prompted this thread, there are also possibilities with new hardware design. I don't know if this is possible, but imagine a sensor array where each pixel could be separately controlled electronically. That would allow each pixel to be dumped when full (or time ran out) rather than the current method where they are all dumped at the same time and only how full they are is measured. Imagine the increase in dynamic range possible by using this method. 20 stops would be quite possible, which is similar to our visual system. Incidentally, our eyes can only see less than 4 f-stops of dynamic range from one point to the next; the eye manages to "see" 20 f-stops through some very clever control. There is no reason why cameras can't do the same in the future.
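That per-pixel idea can be mocked up in a few lines (a toy model of "time-to-saturation" readout, not any real sensor's design): bright pixels report how quickly they filled, dim pixels report how full they got, and both recover the true light level, whereas a conventional readout clips.

```python
# Toy "time-to-saturation" readout. All figures are illustrative.
FULL_WELL = 10_000   # electrons a pixel can hold
EXPOSURE = 1.0       # seconds

def per_pixel_readout(flux):
    """flux: electrons/second. Returns the estimated flux."""
    t_fill = FULL_WELL / flux
    if t_fill <= EXPOSURE:
        return FULL_WELL / t_fill        # bright: recovered from fill time
    return (flux * EXPOSURE) / EXPOSURE  # dim: recovered from fill level

def conventional_readout(flux):
    """All pixels dumped together: anything past full well clips."""
    return min(flux * EXPOSURE, FULL_WELL) / EXPOSURE

bright = 1_000_000  # 100x over the conventional clipping point
per_pixel = per_pixel_readout(bright)   # still linear, no clipping
clipped = conventional_readout(bright)  # pinned at FULL_WELL: blown out
```

Every extra factor of two that the timing trick can measure above the full-well flux is another stop of highlight dynamic range, which is where figures like 20 stops start to look plausible.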

arthurking83
18-07-2013, 2:44pm
...... I don't know if this is possible, but imagine a sensor array where each pixel could be separately controlled electronically. That would allow each pixel to be dumped when full (or time ran out) rather than the current method where they are all dumped at the same time and only how full they are is measured. Imagine the increase in dynamic range possible by using this method. .......

LOL! I wished for this years ago after having replaced my first set of grad filters.

That sort of technology, though, would take the 'fun' out of photography, in that the camera could be set to a 'truly auto' mode, the next step above 'auto' .... and simply capture the best pixel data for each and every pixel .. full time HDR, in a sense, for any scene type with a single press of a button ... which would also be easy to automate :p

I can't see why this sort of technology wouldn't be feasible within our lifetime.

Apparently Canon's 5DIII and 7D have a system of pixel operation that works on dual alternating lines, which partly resembles the process you've described there, Steve.
Magic Lantern has produced firmware that accesses this feature (5DIII and 7D only, due to the alternating line pixel operation in those cameras) ... so you can do a high/low ISO trick that boosts dynamic range to a degree using variable ISO values. There are caveats in the form of increased aliasing and moire .. but the boost in dynamic range looks to be about 2-3Ev.
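For the curious, the high/low ISO merge can be sketched like this (a heavy simplification with made-up numbers, not Magic Lantern's actual algorithm):

```python
# Toy merge of interleaved dual-ISO data: alternate sensor lines are
# captured at a low and a high ISO. Both constants are assumptions.
ISO_GAIN = 8   # assumed gap between the two interleaved ISOs (3Ev)
CLIP = 1.0     # normalised sensor clipping level

def merge_sample(low_iso_val, high_iso_val):
    """Combine a low-ISO and a high-ISO sample of the same spot."""
    # In the shadows the high-ISO line has the better SNR; once it
    # clips, fall back to the low-ISO line, which still holds detail.
    if high_iso_val < CLIP:
        return high_iso_val / ISO_GAIN  # rescale to the low-ISO scale
    return low_iso_val

shadow = merge_sample(0.01, 0.08)   # uses the cleaner high-ISO data
highlight = merge_sample(0.6, 1.0)  # high-ISO line clipped; low ISO wins
```

The aliasing and moire caveats come from the fact that half the vertical resolution is sacrificed in whichever channel clipped, which this toy version glosses over entirely.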

We can only hope I guess.

Steve Axford
18-07-2013, 8:10pm
I think the problem is the huge increase in chip components that would be needed. Each pixel would require some logic (like the human eye), and that means unique paths and logic components for each pixel, or each group of pixels. Of course this will be possible in the future, but it will probably be mobile phones that do it first, just like CMOS sensors, which started at the bottom end, not the top end.