Posted: Fri May 02, 2008 12:10 am
Just thinking while typing;
If I had an NBTV signal recorded with or without sync pulses, of arbitrary resolution and frame rate, with various noise, dropouts, and whatever other problems, how would I recover the images?
My first thought is that there are two basic patterns we're looking for. Firstly, on the highest 'level' we have single frames, consisting of n scanlines each. The characteristic of a frame is that it is, for all intents and purposes, identical to the frame that came before and the frame that comes after. On average. There might be variation in speed over the entire signal, but this becomes unimportant when analysing the entire signal to find the 'frame length'. So, given a whole 'stream' of NBTV signal, we should be able to determine, by simple matching, the best 'length' of signal at which the stream divides into effectively identical snippets. By identical, I'm talking about something like a least-squares difference.
Algorithm: for a digitised signal stream of n bytes, representing an arbitrary number of frames from our 'unknown' NBTV signal, we start by selecting the smallest candidate frame length. We could start with 100 bytes, doesn't matter... let's just say x. Divide the ENTIRE signal into x-unit blocks, then compute a sum-of-squares difference between each block and the next. This total is our 'ranking' for the frame size of x.
Repeat the above using x+1, and again with x+2 etc., right up to some arbitrary maximum for the frame block length. At the end of this process we have some value of x at which the difference between successive frames, summed OVER ALL FRAMES of assumed size x, is at a minimum. It doesn't matter if there is warble or noise. It does matter if there are significant frame rate changes (though not if the average frame rate is maintained).
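The search over candidate frame lengths might look something like this in Python (an untested sketch, not a definitive implementation; numpy, the normalisation by the number of comparisons, and the candidate range are my additions):

```python
import numpy as np

def find_frame_length(signal, min_len, max_len):
    """Return the candidate frame length (in samples) whose successive
    blocks differ the least, by sum-of-squares difference."""
    signal = np.asarray(signal, dtype=float)
    best_len, best_score = None, float("inf")
    for x in range(min_len, max_len + 1):
        n_blocks = len(signal) // x
        if n_blocks < 2:
            break
        # Chop the whole signal into x-sample blocks (drop the remainder).
        blocks = signal[:n_blocks * x].reshape(n_blocks, x)
        # Mean (rather than total) squared difference between each block
        # and the next, so rankings for different x are comparable.
        score = np.mean((blocks[1:] - blocks[:-1]) ** 2)
        if score < best_score:
            best_score, best_len = score, x
    return best_len
```

One caveat: since adjacent scanlines are also similar, a candidate x near the *line* period would score well too, so the search range has to be restricted to plausible frame lengths.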
OK, so assuming the above gives us a reasonable frame block length, we can now attempt to work out how many lines per frame. Assuming some meaningful content (i.e. not all one shade/colour, because that's attributable to any number of lines), the characteristic of a frame is that successive scanlines will be approximately similar. That is, white pixels are generally next to white pixels (though not always). But on average, they will be.
So, we take a similar approach. Assume some arbitrary number of lines, y. Divide a frame block (x bytes) into y subsections, and perform a sum-of-squares difference between all vertically adjacent pixels across the scanlines of the frame. That is, (pixel 1 line 1 - pixel 1 line 2)^2 + ... etc. This gives a rating for one frame, given the assumption of y scanlines/frame. Perform this for a large number of frames from the sample, then average -- giving a rating for the assumption of y scanlines per frame.
Repeat this for y+1, giving another rating, then y+2, etc. The final value of y chosen is the one with the minimum sum-of-squares rating. By performing the calculation over multiple frames we may be removing some of the effects of noise and other factors that might make a calculation on an individual frame basis difficult.
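In the same vein, a rough sketch of the line-count search (again untested; it assumes a whole number of samples per line, which a real signal won't oblige us with):

```python
import numpy as np

def find_line_count(frames, min_lines, max_lines):
    """Given several frame blocks (each x samples long), return the line
    count y that minimises the squared difference between vertically
    adjacent samples, averaged over all the supplied frames."""
    best_y, best_score = None, float("inf")
    for y in range(min_lines, max_lines + 1):
        frame_scores = []
        for frame in frames:
            line_len = len(frame) // y  # assumes whole samples per line
            if line_len < 2:
                break
            lines = np.asarray(frame[:y * line_len], dtype=float)
            lines = lines.reshape(y, line_len)
            # Compare each scanline with the one below it.
            frame_scores.append(np.mean((lines[1:] - lines[:-1]) ** 2))
        if len(frame_scores) < len(frames):
            break
        score = np.mean(frame_scores)
        if score < best_score:
            best_score, best_y = score, y
    return best_y
```

As with the frame search, sub-multiples of the true line count also align lines with lines, so they score low as well; the true y should still win because it compares lines one apart rather than several apart.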
After the above process, we should have x (frame size) and y (scanline count) values which can then be used to reconstruct a first pass of the original.
At this point we have an 'average' reconstruction. That is, one assuming a constant frame rate (we can at least assume that the number of lines per frame is constant!). But it's highly likely that the frame rate is variable, not constant, so our reconstruction will show drift and phase errors.
But since we now have a known lines/frame, it should be possible to detect where the difference-square value from frame to frame is fairly poor (certainly not equivalent to the averaged value used to determine the average frame rate). On a localised basis we can determine whether the difference-square value improves if we increase the assumed frame rate (that is, decrease the frame block size) or decrease the frame rate (increase the frame block size). We would expect the actual frame rate to vary only a little from frame to frame.
So I'm thinking that we might be able to walk through the frames (in either direction) from a known 'good frame sequence' -- that is, where we have consecutive frames where the squared-difference is minimal -- and adjust the frame block size as required to make the successive/preceding frames minimise their squared-difference.
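The walk might be sketched like so (untested; it tracks forward only, nudging each frame boundary by up to a few samples either way and keeping whichever offset best matches the previous frame; the `jitter` parameter is my invention and should stay well below the frame length):

```python
import numpy as np

def track_frames(signal, frame_len, jitter, start=0):
    """From a known-good boundary at `start`, walk forward through the
    signal, adjusting each successive frame boundary by up to +/-jitter
    samples to minimise the squared difference against the previous
    frame. Returns the list of chosen frame-start positions."""
    signal = np.asarray(signal, dtype=float)
    boundaries = [start]
    prev = signal[start:start + frame_len]
    pos = start + frame_len
    while pos + frame_len + jitter <= len(signal):
        # Try slightly shorter/longer blocks; keep the offset whose
        # frame best matches the previous one.
        d = min(range(-jitter, jitter + 1),
                key=lambda d: np.sum(
                    (signal[pos + d:pos + d + frame_len] - prev) ** 2))
        pos += d
        boundaries.append(pos)
        prev = signal[pos:pos + frame_len]
        pos += frame_len
    return boundaries
```

Walking backwards from the good sequence would be the mirror image of this, and chaining against the previous *recovered* frame (rather than a fixed template) is what lets it follow a slow drift.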
Well, that was a bit of a ramble; just wanted to get something down in writing. I'll come back tomorrow and think that I really should have gone to bed a few hours ago...
Cheers
A