Posted
Wow, I didn't think this thread could get any more off topic.

Apparently I was wrong :D

Andrew.

No sign of Panasung though. Has to be a good thing!!

That's the way!! Keep it going Owen / Mike / MLXXX!! :blink:


Posted
No sign of Panasung though. Has to be a good thing!!

I think his head exploded with the number of pixels being thrown around in this thread

It's more than his 38cm b+w CRT could handle :blink:

Andrew.

Posted
I think his head exploded with the number of pixels being thrown around in this thread

It's more than his 38cm b+w CRT could handle :blink:

Andrew.

Oh dear Andy....you're still fuming aren't you...boy, you are precious, LOLOLOL.

Posted
There is a fundamental difference between Digital Still Cameras and professional video cameras.

Professional video cameras use 3 CCDs, so there are 1920 sensors per line for Red, 1920 for Green and 1920 for Blue.

Mike

I think you just explained why they can get a 1920x1080 signal ...... they are non-Bayer and therefore actually have 3 times the actual chip resolution.

Posted

You don't seem to understand, mate.

3 chips do not magically make Nyquist limits disappear, no matter how much you (or Sony) wish it.

The Bayer sensor is not the issue here.

If 3-chip systems give the sharpest images, why don't any digital still cameras use them, even the ones that cost more than a pro HD video camera?

For example, the Hasselblad 39-megapixel camera uses a single CCD and costs US$40,000 without a lens.

The demands of professional still photography are much, much greater than those of video.

A frame of 1920x1080 video looks like utter crap compared to an image from even a cheap digital still camera.

Posted
The Bayer sensor is not the issue here.

If 3-chip systems give the sharpest images, why don't any digital still cameras use them ...?

Processing time - still cameras don't have to produce a complete frame every 1/50 sec.

In a Bayer array, the R, G and B values for every pixel have to be CALCULATED from the intensities of NINE pixels - and this has to be done for ALL 10 million (or 40 million) pixels.
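As an aside, here is a minimal sketch of that interpolation in Python (plain bilinear averaging over an assumed RGGB layout; real cameras use far more sophisticated kernels):

```python
import numpy as np

def bilinear_demosaic(raw):
    """Fill in the two missing colour values at every photosite of an
    RGGB Bayer mosaic by averaging the known neighbours in a 3x3 window."""
    h, w = raw.shape
    yy, xx = np.mgrid[0:h, 0:w]
    masks = {
        "R": (yy % 2 == 0) & (xx % 2 == 0),
        "B": (yy % 2 == 1) & (xx % 2 == 1),
    }
    masks["G"] = ~(masks["R"] | masks["B"])

    planes = {}
    for name, mask in masks.items():
        known = np.where(mask, raw, 0.0)
        kpad = np.pad(known, 1)
        mpad = np.pad(mask.astype(float), 1)
        # Sum of known samples (and their count) over each 3x3 window.
        num = sum(kpad[1 + dy:h + 1 + dy, 1 + dx:w + 1 + dx]
                  for dy in (-1, 0, 1) for dx in (-1, 0, 1))
        cnt = sum(mpad[1 + dy:h + 1 + dy, 1 + dx:w + 1 + dx]
                  for dy in (-1, 0, 1) for dx in (-1, 0, 1))
        planes[name] = np.where(mask, raw, num / np.maximum(cnt, 1))
    return planes["R"], planes["G"], planes["B"]

# A 10 MP mosaic means ~20 M of these interpolated values (two missing
# colours per photosite), all computed for every frame.
r, g, b = bilinear_demosaic(np.random.rand(6, 8))
```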

Sony can produce a 4MP still image from their Consumer 1080i Handycams.

Mike

Posted
why they can get a 1920x1080 signal

Producing a 1920x1080i signal with useful information in every pixel is not the same as being able to see (after filtering, encoding, decoding, etc.) the detail in a subject containing 1920x1080 worth of detail.

Mike

Posted
Oh dear Andy....you're still fuming aren't you...boy, you are precious, LOLOLOL.

Nope, but you are still a tool.

Come on, can't you think of anything better than calling me Andy?

I mean, it's just getting old now.

Andrew.

Posted
Producing a 1920x1080i signal with useful information in every pixel is not the same as being able to see (after filtering, encoding, decoding, etc.) the detail in a subject containing 1920x1080 worth of detail.

Mike

I don't understand that statement .... surely pretty much all subjects would have far more detail to the naked eye than 1920x1080?

Posted
Processing time - still cameras don't have to produce a complete frame every 1/50 sec.

In a Bayer array, the R, G and B values for every pixel have to be CALCULATED from the intensities of NINE pixels - and this has to be done for ALL 10 million (or 40 million) pixels.

Sony can produce a 4MP still image from their Consumer 1080i Handycams.

Mike

Good point, but with today's processing power and only 2-megapixel frames to work with, I would think that processing time would not be an issue these days.

Digital SLRs shoot several frames per second at huge frame sizes.

As for the 4MP still images, that tells us nothing. What is the visual resolution of the images?

With 1920x1080 image sensors (1 or 3) you can't get 4MP of visible resolution, end of story.

Anyway, this has nothing to do with the subject of this thread.

The reason I did not want to go into detail as to why video images do not have visual resolution anywhere near what their pixel count would suggest is that I knew this off-topic discussion would result.

Posted

Producing a 1920x1080i signal with useful information in every pixel is not the same as being able to see (after filtering, encoding, decoding, etc.) the detail in a subject containing 1920x1080 worth of detail.

Mike

I don't understand that statement .... surely pretty much all subjects would have far more detail to the naked eye than 1920x1080?

If I may clarify, I believe Mike was referring to what you see when you look at a 1920x1080 display.

He was referring to the fact that in the electronic pathway from an original 1920x1080 video camera signal to a 1920x1080 display, there are often three steps to be negotiated:

  • a pre-filtering of the pixel information [to help reduce the bit-rate of the encoded signal]
  • encoding (e.g. MPEG encoding for digital TV, which involves a 'key frame' followed by calculated frames, using motion vectors and other complex techniques)
  • decoding of the previously encoded information

These steps all tend to soften the image, reducing the amount of detail reproduced by the display.

In other words, even if a camera did capture to a very high resolution, an amount of that resolution would be lost before the signal reached the display.

This effect may be smaller for tranquil scenes, which involve little change in picture content from frame to frame.
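As a rough illustration of the encoding step, here is a toy sketch (a stand-in for transform coding, not an actual MPEG encoder): coarse quantisation of DCT coefficients discards precisely the fine, high-frequency detail under discussion.

```python
import numpy as np
from scipy.fft import dctn, idctn

# Toy stand-in for MPEG-style intra coding of one 8x8 luma block:
# transform, coarsely quantise the coefficients, reconstruct.
rng = np.random.default_rng(0)
block = rng.integers(0, 256, (8, 8)).astype(float)   # synthetic block

coeffs = dctn(block, norm="ortho")
step = 40.0                                # coarse quantiser step; small
quantised = np.round(coeffs / step) * step # (high-frequency) terms round to 0
recon = idctn(quantised, norm="ortho")

print("nonzero DCT coefficients kept:",
      np.count_nonzero(quantised), "of", coeffs.size)
print("mean absolute reconstruction error:",
      round(float(np.abs(block - recon).mean()), 1))
```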

_____________________________________________________________

However, the above explanation does not explicitly refer to sampling theory, or to the fact that, as the signal emerges from the camera, the visible resolution along the vertical or horizontal axis of the camera's output should, if the camera is well engineered so as to avoid aliasing, never exceed 50% of the pixel count along that axis.

In other words, even if the display were directly connected to the camera output, the display should have no more than 960x540 visible resolution.

This is a very difficult concept to grasp.

A 1920x1080 digital camera is not designed to act like a matrix of 1920x1080 precision sensors with individual lenses focused on 1920x1080 unique square areas in the field of view of the camera, so as to record the average intensity for each of those squares independently of the intensity of other squares, and output 1920x1080 perfectly independent readings.

If such a precision simplified camera were manufactured and connected directly to a 1920x1080 display, the camera would be quite effective for generating alias patterns, but it would not make for pleasant viewing of real world scenes. The viewer would become extremely conscious of sampling artefacts.
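A one-dimensional sketch of why: sample a grating slightly above Nyquist with and without a crude averaging filter standing in for the optical low-pass (frequencies in cycles per pixel; numpy only):

```python
import numpy as np

x = np.arange(64)
# A grating at 0.6 cycles/pixel -- beyond the Nyquist limit of 0.5
# for a unit-spaced sampling grid.
scene = np.cos(2 * np.pi * 0.6 * x)

point_sampled = scene                    # "ideal" independent point sensors
# Crude low-pass: each output is a weighted average of its neighbours,
# standing in for lens blur / an anti-aliasing filter.
prefiltered = np.convolve(scene, [0.25, 0.5, 0.25], mode="same")

# The point-sampled grating comes back as a strong alias at
# 0.4 cycles/pixel; the filtered version is almost wiped out.
print("point-sampled peak-to-peak:", round(float(np.ptp(point_sampled)), 2))
print("pre-filtered  peak-to-peak:", round(float(np.ptp(prefiltered)), 2))
```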

Posted
A 1920x1080 digital camera is not designed to act like a matrix of 1920x1080 precision sensors with individual lenses focused on 1920x1080 unique square areas in the field of view of the camera, so as to record the average intensity for each of those squares independently of the intensity of other squares, and output 1920x1080 perfectly independent readings.

Are you referring only to video cameras here ?

Digital still cameras are certainly designed this way - you can enlarge an image to identify individual pixels to verify that each pixel reflects the image in front of it, though not at 100% modulation.

Mike

Posted
Are you referring only to video cameras here ?

Digital still cameras are certainly designed this way - you can enlarge an image to identify individual pixels to verify that each pixel reflects the image in front of it, though not at 100% modulation.

Mike

No, all digital cameras.

The only information at the single pixel level is noise.

No real detail can occupy a single pixel, and this applies to all digital cameras, be they single sensor or triple sensor.

Posted
Are you referring only to video cameras here ?

Digital still cameras are certainly designed this way - you can enlarge an image to identify individual pixels to verify that each pixel reflects the image in front of it, though not at 100% modulation.

Mike

I am also referring to still digital cameras.

If you examine the output of a good quality digital still camera very carefully, you should find that each pixel is a weighted average of what was straight ahead of the sampling position and what was one pixel removed diagonally and orthogonally, i.e. at the angles 0, 45, 90, 135, 180, 225, 270 and 315 degrees relative to the nominal sampling position. Cheaper digital cameras may rely merely on poor lens quality, rather than on an anti-aliasing filter in front of the sensor or supplementary weighted averaging of the sensor output. The poor lens quality will be enough to prevent a single pixel recording only what is directly in front of it in the field of view.

Put another way, if you take a photograph of a minute speck, you will find that the speck does not merely appear in one pixel of the sampled image.
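A sketch of that speck test, using an illustrative 3x3 weighting (the weights are made up, not taken from any real camera):

```python
import numpy as np
from scipy.ndimage import convolve

scene = np.zeros((7, 7))
scene[3, 3] = 1.0          # a minute speck on a dark field

# Weighted average over the sampling position and its 8 neighbours at
# 0, 45, 90, ... 315 degrees.  Illustrative weights, summing to 1.
kernel = np.array([[0.05, 0.10, 0.05],
                   [0.10, 0.40, 0.10],
                   [0.05, 0.10, 0.05]])

image = convolve(scene, kernel)
print(np.round(image[2:5, 2:5], 2))
# The speck's energy lands in a 3x3 patch of pixels, never just one.
```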

But let us assume, for the sake of argument, that it was the goal of still digital camera design to have 100% MTF between adjacent pixels. As you have admitted, you found there is not 100% modulation.

If it were important, and the goal of good camera design, then the most expensive still cameras would give 100% modulation, or close to 100% modulation (subject to unavoidable limitations such as the wavelength of light, and diffraction effects).

However, the extremely high quality digital still camera that Owen referred us to in post #228 above was tested, and the result was a raw MTF at Nyquist of only 17.6%. I believe that was an appropriately low figure, reflecting good camera design. It meant that the raw signal was almost pure - very little aliasing. An image sharpening algorithm, at its standard setting, then boosted that figure to about 40% at Nyquist. I imagine this would have resulted in mild visible aliasing. (Some users would prefer to operate such a camera with the sharpening algorithm switched off.)

However, and this is a most important point, the visible resolution of that camera, even with sharpness boosting at maximum, would still have been no better than Nyquist. This is because, whatever is done to boost apparent sharpness, there is a physical limitation: the output of the camera is merely a fixed grid of pixels. Fool around with a fixed grid of pixels as much as you like, but you will never be able to make it show the standard resolution pattern of converging straight lines at any better resolution than half the resolution of the grid itself. This is rather like audio sampling: a 44 kHz sampling rate can only capture sine waves up to 22 kHz.
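The audio analogy is easy to check numerically: at a 44 kHz sample rate, a tone above the 22 kHz Nyquist limit yields sample values indistinguishable from a lower-frequency tone (a sketch):

```python
import numpy as np

fs = 44_000                                   # sample rate, Hz
t = np.arange(64) / fs

tone_10k = np.sin(2 * np.pi * 10_000 * t)     # below Nyquist (22 kHz)
tone_34k = np.sin(2 * np.pi * 34_000 * t)     # above Nyquist

# 34 kHz folds back to 44 - 34 = 10 kHz: the samples match the genuine
# 10 kHz tone exactly (with inverted sign).
print(np.allclose(tone_34k, -tone_10k))       # True
```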

The exception to the Nyquist rule is to use an artificial pattern. An example of this would be to place a bar pattern consisting of alternating white and black in front of an image sensing grid, such that the white bars are all in front of the odd numbered sensors, and the black bars are all in front of the even numbered sensors. This only has a practical application when dealing with computer generated text or graphics, not real world images. Real world images do not contain detail exactly one pixel wide, aligned perfectly with the image sensing pixels.

That's probably more than enough from me in this subject area.

Perhaps PanaSung will tell us some more about the excellent performance of low resolution displays when viewing Foxtel. ;-)

No, all digital cameras.

The only information at the single pixel level is noise.

No real detail can occupy a single pixel, and this applies to all digital cameras, be they single sensor or triple sensor.

Exactly.

Posted
This is rather like audio sampling: a 44 kHz sampling rate can only capture sine waves up to 22 kHz.

That's because you need a positive peak and a negative peak to represent a wave, i.e. the maximum representable wave frequency is HALF the sample rate.

Fool around with a fixed grid of pixels as much as you like, but you will never be able to make it show the standard resolution pattern of converging straight lines at any better resolution than half the resolution of the grid itself.

That's because resolution is measured in line PAIRS - a light pixel and a dark pixel. 1920 pixels can resolve 960 line pairs, i.e. the line-pair resolution is HALF the number of pixels.

See http://en.wikipedia.org/wiki/Image_resolution

Posted
That's because resolution is measured in line PAIRS - a light pixel and a dark pixel. 1920 pixels can resolve 960 line pairs, i.e. the line-pair resolution is HALF the number of pixels.

See http://en.wikipedia.org/wiki/Image_resolution

Under perfect conditions, 1920 pixels can resolve 960 line pairs with 100% contrast, when the line pairs are exactly one pixel wide and conveniently aligned so they do not straddle the sampling grid. If the sampling grid is then moved exactly half a pixel, so that the grid straddles the white and black lines of each line pair equally, the result would be an expanse of grey. This is because each sampling position in the grid would see either:

  • half a black line on the left, and half a white line on the right = grey; or
  • half a white line on the left, and half a black line on the right = grey.

More generally, with real-life images, 1920 pixels can only resolve 480 line pairs.

Similarly, a traditional resolution wedge of converging straight lines will yield a maximum visible resolution of 480 line pairs when the sampling grid is 1920 pixels. [The fact that the converging lines are not parallel to each other precludes the sampling grid from being jockeyed into a position that shows a visible resolution of 960 line pairs.]
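A sketch of that half-pixel shift, modelling each pixel as a simple averaging aperture one pixel wide:

```python
import numpy as np

def sample_bars(phase, pixels=16, oversample=100):
    """Average a bar pattern of 1-pixel-wide black/white lines over a
    grid of pixel apertures offset by `phase` pixels."""
    x = np.arange(pixels * oversample) / oversample + phase
    bars = np.floor(x) % 2            # 0 = black bar, 1 = white bar
    return bars.reshape(pixels, oversample).mean(axis=1)

print("aligned:  ", sample_bars(0.0)[:6])   # [0. 1. 0. 1. ...] full contrast
print("straddled:", sample_bars(0.5)[:6])   # [0.5 0.5 0.5 ...] uniform grey
```

Between those two extremes, an arbitrary real-world alignment gives some intermediate, washed-out contrast, which is why real images cannot be relied on to show the full 960 line pairs.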
