Post processing of NBTV

Forum for discussion of narrow-bandwidth mechanical television

Moderators: Dave Moll, Andrew Davie, Steve Anderson

Postby Andrew Davie » Fri May 02, 2008 12:10 am

Just thinking while typing;

If I had an NBTV signal recorded with or without sync pulses, of arbitrary resolution and frame rate, with various noise, dropouts, and whatever other problems, how would I recover the images?

My first thought is that there are two basic patterns we're looking for. Firstly, at the highest 'level' we have single frames, consisting of n scanlines each. The characteristic of a frame is that it is, for all intents and purposes, identical to the frame that came before and the frame that comes after, on average. There might be variation in speed over the entire signal, but this becomes unimportant when analysing the entire signal to find the 'frame length'. So, given a whole 'stream' of NBTV signal we should be able to, by simple matching, determine the best 'length' of signal which can be overlapped such that it becomes effectively identical snippets. By identical, I'm talking about something like a least-squares difference.

Algorithm: for a digitised signal stream of n bytes, representing an arbitrary number of frames from our 'unknown' NBTV signal, we start by selecting the smallest candidate frame length. We could start with 100 bytes, it doesn't matter... let's just say x. Divide the ENTIRE signal into x-unit blocks, then compute a sum-of-squares difference between each block and the next. This total is our 'ranking' for the frame size of x.

Repeat the above using x+1, and again with x+2, etc., right up to some arbitrary maximum for the frame block length. At the end of this process we have some value of x for which the difference between successive frames, summed OVER ALL FRAMES of assumed size x, is at a minimum. It doesn't matter if there is warble or noise. It does matter if we have significant frame rate changes (though not if the average frame rate is maintained).
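The block-matching search above can be sketched like this (my illustration, not code from the thread; `best_frame_length` and all values are invented):

```python
# For each candidate frame length x, cut the whole signal into x-sample
# blocks and score the mean squared difference between successive blocks;
# the x that minimises the score is the best guess at the frame length.
import numpy as np

def best_frame_length(signal, x_min, x_max):
    best_x, best_score = x_min, float("inf")
    for x in range(x_min, x_max + 1):
        n_blocks = len(signal) // x
        if n_blocks < 2:
            break
        blocks = signal[:n_blocks * x].reshape(n_blocks, x)
        # mean squared difference between each block and its successor
        score = np.mean((blocks[1:] - blocks[:-1]) ** 2)
        if score < best_score:
            best_x, best_score = x, score
    return best_x
```

On a clean, periodic test signal this recovers the period exactly; on real material the minimum should still stand out despite warble and noise, since only the correct x lines the blocks up.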

OK, so assuming the above gives us a reasonable frame block length, we can now attempt to analyse how many lines per frame. Assuming some meaningful content (i.e. not all one shade/colour, because that's attributable to any number of lines), the characteristic of a frame is that successive scanlines will be approximately similar. That is, white pixels are generally next to white pixels (though not always). But on average, they will be.

So, we take a similar approach. Assume some arbitrary line count, y. Divide a frame block (x bytes) into y subsections, and perform a sum-of-squares difference between vertically adjacent pixels across all scanlines of the frame. That is, (pixel 1 line 1 - pixel 1 line 2)^2 + ... etc. This gives a rating for one frame, given the assumption of y scanlines/frame. Perform this for a large number of frames from the sample, then average, giving a rating for the assumption of y scanlines per frame.

Repeat this for y+1, giving another rating, then y+2, etc. The final value of y chosen is the one with the minimum sum-of-squares value. By performing the calculation over multiple frames we may remove some of the effects of noise and other factors that might make a calculation on an individual-frame basis difficult.
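The line-count search can be sketched the same way (again hypothetical code; the reshape assumes the frame divides into y whole lines, which real signals won't quite do, so the remainder is simply truncated):

```python
# For each candidate line count y, reshape each frame into y lines and
# score how alike vertically adjacent pixels are; the y with the lowest
# average score across all supplied frames wins.
import numpy as np

def best_line_count(frames, y_min, y_max):
    best_y, best_score = y_min, float("inf")
    for y in range(y_min, y_max + 1):
        total = 0.0
        for frame in frames:
            line_len = len(frame) // y
            lines = frame[:y * line_len].reshape(y, line_len)
            # (pixel j, line i) vs (pixel j, line i+1), summed and averaged
            total += np.mean((lines[1:] - lines[:-1]) ** 2)
        if total / len(frames) < best_score:
            best_y, best_score = y, total / len(frames)
    return best_y
```

Only the correct y lines up pixel columns vertically; any other count compares pixels that are horizontally displaced, which scores badly whenever the picture has horizontal detail.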

After the above process, we should have x (frame size) and y (number of scanlines) values which can then be used to reconstruct a first pass of the original.

At this point we have an 'average' reconstruction. That is, assuming constant frame rate (we can assume that the lines per frame is constant!). But it's highly likely that the frame rate is variable, not constant, so our reconstruction will show drift and phase errors.

But since we now have a known lines/frame, it should be possible to detect where the difference-square value from frame to frame is fairly poor (certainly not equivalent to the averaged value used to determine the average frame rate). On a localised basis we can determine whether the difference-square value improves if we increase the assumed frame rate (that is, decrease the frame block size) or decrease the frame rate (increase the frame block size). We would expect the actual frame rate to vary only slowly from frame to frame.

So I'm thinking that we might be able to walk through the frames (in either direction) from a known 'good frame sequence' -- that is, where we have consecutive frames whose squared difference is minimal -- and adjust the frame block size as required so that the successive/preceding frames minimise their squared difference.
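One step of that walk might look like this (hypothetical code; linear interpolation stands in for whatever resampling one would really use, and `best_local_length` is an invented name):

```python
# Given the previous frame, try block sizes a few samples either side of
# nominal and keep the one whose (length-normalised) contents best match it.
import numpy as np

def best_local_length(signal, pos, prev_frame, max_drift=3):
    x0 = len(prev_frame)
    best_len, best_score = x0, float("inf")
    for dx in range(-max_drift, max_drift + 1):
        cand = signal[pos:pos + x0 + dx]
        if len(cand) != x0 + dx:
            continue                      # ran off the end of the signal
        # stretch the candidate to the reference length before comparing
        stretched = np.interp(np.linspace(0.0, len(cand) - 1.0, x0),
                              np.arange(len(cand)), cand)
        score = np.mean((stretched - prev_frame) ** 2)
        if score < best_score:
            best_len, best_score = x0 + dx, score
    return best_len
```

Repeatedly calling this, advancing `pos` by the returned length each time, tracks slow frame-rate wander without ever committing to a single global frame size.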

Well that was a bit of a ramble, just wanted to get something down in writing and I'll come back tomorrow and think that I really should have gone to bed a few hours ago :)

Cheers
A
Andrew Davie
"Gomez!", "Oh Morticia."
 
Posts: 1590
Joined: Wed Jan 24, 2007 4:42 pm
Location: Queensland, Australia

Postby gary » Fri May 02, 2008 2:10 pm

Additional thoughts.

Variation in motor speed should result in frequency modulation of the NBTV signal.

This FM applies constantly across all frequencies in the signal.

FM can be thought of as pitch variation, or non-constant sampling intervals in the digital domain.

If this can be reasonably estimated it can be reversed.

When creating/recording NBTV it is difficult to exclude mains hum altogether. If it is present in an NBTV signal and can be extracted (autocorrelation?) and tracked, it should be possible to determine the FM and thus correct the entire signal, although only to the extent of the stability of the mains frequency.
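A toy version of that extraction step (my sketch, not Gary's method; the sample rate and signal levels are invented, and real hum would be far weaker and want a narrow band-pass first):

```python
# Locate the period of a mains-hum component by finding the strongest
# autocorrelation peak inside a plausible band of periods.
import numpy as np

def dominant_period(signal, fs, f_lo, f_hi):
    """Lag (in samples) of the strongest autocorrelation peak between
    periods 1/f_hi and 1/f_lo."""
    sig = signal - np.mean(signal)
    acf = np.correlate(sig, sig, mode="full")[len(sig) - 1:]
    lo, hi = int(fs / f_hi), int(fs / f_lo)
    return lo + int(np.argmax(acf[lo:hi + 1]))
```

Tracking how this period drifts over successive short windows would give the FM estimate Gary describes.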

Also, if the original signal came from tape shouldn't there be a biasing frequency present? Maybe that can be used.
gary
 

Postby AncientBrit » Fri May 02, 2008 5:41 pm

>Gary
"Line blanking interval"

A gap in the video between successive lines.

Oops! That assumes that there is one.
Electronically derived pix can have them; mechanical pix don't.

Correlation might still work though.

I think any correction method is going to need the intervention of a Mk1 Eyeball, i.e. operator input at appropriate moments in the correction process.

One method we might consider is a form of "in-betweening".

The operator increments/decrements the pix on screen a few samples at a time until a coherent frame is displayed.

This value is marked by the prog.

Nominal line length in samples and lines per frame again need to be manually input at the start of the process and this has to be done by inspection of the image.

Step forward a few frames (where frame = nominal samples per line*lines per frame) and repeat the inc/dec by a few samples to pin another frame corner.

Again this value is stored by the program.

From these two "knowns" could the program make any useful deductions?

Granted the pix on the screen will display line timing variations which a form of DSP will have to correct.

As regards tape bias being present on tape playback, this does not normally make it through the replay chain.

Surprisingly I have seen this earlier in the chain closer to the replay head but at very low level.


Regards,


Graham
AncientBrit
Green padded cells are quite homely.
 
Posts: 858
Joined: Mon Mar 26, 2007 10:15 pm
Location: Billericay, UK

Postby gary » Fri May 02, 2008 8:27 pm

AncientBrit wrote:
A gap in the video between successive lines.

Oops! that assumes that there is one.
Electronically derived pix can have them, mechanical pix don't.


Yes, that is what I thought you meant; as you say, we don't have them (well, at least not in the material I am using).

AncientBrit wrote:
...I think any correction method is going to need the intervention of a Mk1 Eyeball, ie operator input at appropriate moments in the correction process.

One method we might consider is a form of "in-betweening".

The operator increments/decrements the pix on screen a few samples at a time until a coherent frame is displayed....


Great minds think alike; I think you'll find I discussed a very similar method earlier in this thread - or perhaps another. In fact I have had a hacked-up utility to do much of this for some time now (I'll post some pics later on).

Three problems have presented themselves:

1) It requires massive oversampling to get the fine variation needed to obtain a "corrected frame" - I have oversampled by 10 times, but even that is not enough as my pics should demonstrate.

2) The method itself reveals that the picture is continually varying such that even if you perfectly correct one frame, the next can be substantially different.

3) To "eye-ball" a frame is quite difficult because there are generally no clear edges to it.

AncientBrit wrote:From these two "knowns" could the program make any useful deductions?


Maybe, but at this stage it seems to me that I need to know the rate of variation to a much finer degree.


AncientBrit wrote:As regards tape bias being present on tape playback, this does not normally make it through the replay chain.

I was hoping that any kind of lowpass filtering would only reduce this to a very low level and that autocorrelation might be able to extract it, but it is probably a long-shot.

Cheers,

Gary
gary
 

Postby gary » Fri May 02, 2008 8:41 pm

Here's a quick pic of the best result I have obtained so far. I hope it comes out OK, and is not too big.

Apologies to Chris (that's him); it was the first video I had to hand. I hope that's OK...


Note that the line length is 1271. That represents a line length of 127.1 samples (pixels) at the original sample rate.

If I showed the same frames at 1270 or 1272 you would see that they are skewed to a much greater extent.
Attachments
Image1.jpg
Several frames that are close in length
(99.06 KiB) Downloaded 1733 times
gary
 

Postby AncientBrit » Fri May 02, 2008 10:29 pm

>Gary,

That's looking good.

Just another thought (which will probably nose dive from 10,000 feet).

Any chance of applying Fourier analysis to the signal?

There will be a very strong component at the sample rate.

But would there be useful sideband indications related to the frame/line rate of the actual content?

As I say just a thought. I haven't a clue whether you could extract and apply such info.

And to close, has anyone used DSP to apply aperture correction (AK) to an NBTV signal to compensate for sampling loss, or finite spot size in a camera?

I have used (analogue) AK in my disc camera and it produces a marked improvement as Chris indicated.

Regards,



Graham
AncientBrit

Postby Klaas Robers » Fri May 02, 2008 10:53 pm

For the software time base correction of the NBTV signals for CDs 2 and 3 I used an oversampling of a factor of 8: nice interpolation according to sin(x)/x, find the most accurate position of the sync edge, and skip samples or duplicate samples along the line.
Then the signal was simply subsampled by a factor of 8.
In fact this was an oversampling of 16, because CD goes to 20 kHz, where 10 kHz is in principle enough for 32-line NBTV. So position imperfections of 1/16 pixel were introduced, but at semi-random positions. It is impossible to see. A new sync pulse was inserted.
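A rough sketch of that sin(x)/x oversampling step (my code, not Klaas's): zero-stuff by the oversampling factor, then filter with a Hann-windowed sinc so the original samples pass through unchanged while intermediate samples are interpolated.

```python
# Windowed-sinc oversampling: zero-stuff then lowpass at the original
# Nyquist frequency. The kernel is unity at n=0 and zero at every other
# original sample position, so input samples survive exactly.
import numpy as np

def sinc_upsample(x, factor=8, lobes=32):
    up = np.zeros(len(x) * factor)
    up[::factor] = x                      # zero-stuff
    n = np.arange(-lobes * factor, lobes * factor + 1)
    h = np.sinc(n / factor) * np.hanning(len(n))
    return np.convolve(up, h, mode="same")
```

The sync edge is then located on the oversampled signal, samples are skipped or duplicated along the line, and taking every 8th sample subsamples back down.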

The most difficult task was to insert sync in the sync-less signals of Eddie Greenhough's camera-monitor. If anybody wants to repeat this, the original sync-less signals are recorded at the end of CD number 3, track 34.
Klaas Robers
"Gomez!", "Oh Morticia."
 
Posts: 1656
Joined: Wed Jan 24, 2007 8:42 pm
Location: Valkenswaard, the Netherlands

Postby gary » Sat May 03, 2008 12:28 am

AncientBrit wrote:Any chance of applying Fourier analysis to the signal?


Yes, I have been doing that to investigate DC restoration. It showed me that the drooping effect of AC coupling is due to the loss of the low frequencies near DC, not the actual DC component itself. The app allows me to add and remove DC and low frequencies to see the effect on the signal.

The attached images indicate what I have been doing. They are a little difficult to interpret from what you can see here, but the detail is all available to the application. The first trace shows a standard 32-line NBTV signal with sync. The application allows the DC component to be removed to highlight the other components. This one is a 1024-sample FFT, and if you count the frequency bins along the x axis you can place the main peak at 400 Hz. In the second example there is a 48-line syncless signal and the peak component is at 600 Hz. Each frequency bin is (half the sample rate)/512 in width, so it is quite coarse. Easy enough to increase that tho'.

Not quite sure if there is much that can be derived from this for the timebase problem tho' since that is more a temporal than frequency thing. If it showed the frequency modulation it would be useful. Maybe the phase (not shown here) could be useful.
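The bin arithmetic above can be checked numerically (the sample rate here is my assumption; Gary doesn't state his):

```python
# A 1024-point FFT of a 400 Hz line-rate component peaks in the bin
# nearest 400 Hz; each bin spans sample_rate/1024 = (sample_rate/2)/512 Hz.
import numpy as np

fs = 11025.0                              # assumed sample rate
t = np.arange(1024) / fs
sig = np.sin(2 * np.pi * 400.0 * t)       # stand-in for the line-rate peak

spectrum = np.abs(np.fft.rfft(sig))
bin_width = fs / 1024                     # bin spacing in Hz
peak_hz = (np.argmax(spectrum[1:]) + 1) * bin_width   # skip the DC bin
```

At this rate the bins are nearly 11 Hz wide, which is the coarseness Gary mentions; a longer FFT (or resampling down towards the line rate, as he does later) narrows them.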


AncientBrit wrote:And to close, has anyone used DSP to apply aperture correction (AK) to an NBTV signal to compensate for sampling loss, or finite spot size in a camera?


Yes, I was wanting to look into that. I don't know a lot about aperture correction (i.e. I didn't know it was abbreviated to AK). I know Neville Thiele (I think you used his circuit) but haven't had a chance to ask him about it, though I did tell him that his circuit had been used for NBTV - small world.

If you have the parameters I would like to try to implement a software version of it - can't see why not.

Cheers,

Gary
Attachments
Image2.jpg
Image2.jpg (64.27 KiB) Viewed 17408 times
Last edited by gary on Sat May 03, 2008 12:38 am, edited 1 time in total.
gary
 

Postby gary » Sat May 03, 2008 12:35 am

Klaas Robers wrote:For the software time base correction of the NBTV signals for CDs 2 and 3 I used an oversampling of a factor of 8: nice interpolation according to sin(x)/x, find the most accurate position of the sync edge, and skip samples or duplicate samples along the line.
Then the signal was simply subsampled by a factor of 8.
In fact this was an oversampling of 16, because CD goes to 20 kHz, where 10 kHz is in principle enough for 32-line NBTV. So position imperfections of 1/16 pixel were introduced, but at semi-random positions. It is impossible to see. A new sync pulse was inserted.


Amazing coincidence that I had just started to change to 16 times oversampling. I originally used 10 because it was an order of magnitude and so easy to see the relationship of the actual to the ideal, but 16 times has computational advantages - somehow I think I will need to go higher though.

Klaas Robers wrote:The most difficult task was to insert sync in the sync-less signals of Eddie Greenhough's camera-monitor. If anybody wants to repeat this, the original sync-less signals are recorded at the end of CD number 3, track 34.


Thanks for pointing that out, it should be a good benchmark.

Cheers,

Gary
gary
 

Postby AncientBrit » Sat May 03, 2008 1:56 am

>Gary,

Thanks for that.

I'm off for a few days, will pick up the topics again next week,

Cheers,

Graham
AncientBrit

Postby gary » Thu May 15, 2008 5:09 pm

Ok, here's a report on what I have been doing along these lines.

Using the spectrum analyser (shown a few posts back) it was clear that most NBTV material shows a clear component at the line frequency of the video, so I considered that if I tracked that component I could resample each line to a constant length (and hence constant frequency).

I found that it was much simpler to find the required frequency if the input signal was first resampled to a little higher than twice the line rate (i.e. when doing an FFT of fixed length, the accuracy of the obtained spectrum is much higher when the sample rate is lowered towards the required frequency); this was chosen as 1 kHz.

Using this to analyse the video, it is easy to pick out the fundamental line frequency and thus specify a tolerance about it. The programme then searches through the signal a frame at a time, the frame length being determined by the user-specified line count - usually 32 - times the estimated line sample length, itself based on the highest-magnitude component found in the spectrum within the specified tolerance of the initially observed line frequency. In this manner the change in line frequency is tracked reliably unless the line frequency gets lost in the noise, for example when the picture is all black or all white - perhaps autocorrelation would help here.

When the line sample length is seen to differ from the correct value (given by sample rate / (frame rate * line count)), the signal is windowed-sinc interpolated/decimated to the correct value.
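In outline, that resampling step looks like this (much simplified; Gary uses windowed-sinc interpolation, and plain linear interpolation stands in here, with both function names invented):

```python
# Cut the signal into lines of the tracked (varying) lengths and stretch
# each one to the nominal target length, giving a constant line frequency.
import numpy as np

def resample_line(line, target_len):
    pts = np.linspace(0.0, len(line) - 1.0, target_len)
    return np.interp(pts, np.arange(len(line)), line)

def normalise_lines(signal, line_lengths, target_len):
    out, pos = [], 0
    for n in line_lengths:
        out.append(resample_line(signal[pos:pos + n], target_len))
        pos += n
    return np.concatenate(out)
```

After this pass every line is the same number of samples long, so a free-running display (or a synthetic sync inserter) stays locked.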

I attach two examples of the output of this process. The first (toby.avi) is as applied to the original syncless signal of Eddie Greenhough's Toby Jug from track 34 of CD-3, as Klaas mentioned earlier in this thread. The second is a longish video from Chris Long (who might like to explain his camera apparatus in more detail to readers of this thread). This was from a Baird-style flying-spot camera using a bank of thermionic photocells, and, I think, was originally recorded on tape (can you confirm that, Chris?)

In order to compress the video/audio I have converted them to AVI's but can make the NBTV wav files available to interested parties.

I hope the AVI's are not too big, but the original wave file for Chris' video was 78 MBytes, so it is not a bad reduction.

In CHRISLEONIE&MANGAN1984MECHFSS32LINE.avi there are just two points where the system loses track; it picks up again immediately, but the frame is then offset and needs to be knocked back into alignment - you can clearly see and hear this happening. There is a third adjustment because I didn't correct the first one quite perfectly.

In summary, these first results appear quite promising and may be a useful tool in viewing and archiving taped and syncless NBTV material.

Oh, and I didn't spend too much time trying to get the brightness and contrast right.
Attachments
toby.avi
(3.56 MiB) Downloaded 1283 times
CHRISLEONIE&MANGAN1984MECHFSS32LINE.avi
(10.54 MiB) Downloaded 1202 times
Last edited by gary on Thu May 15, 2008 5:29 pm, edited 1 time in total.
gary
 

Postby AncientBrit » Thu May 15, 2008 5:26 pm

Amazing work Gary, well done,

Regards,

Graham
AncientBrit

Postby chris_vk3aml » Thu May 15, 2008 10:53 pm

Thanks for that Gary,

It's the first time I've seen that video (now an .avi) of myself, friend Mangan Ryan and my (then) girlfriend Leonie Stitz since it was recorded in the early 1980s. And nice to be able to view it without having a hand on the line speed pot, tracking the motor speed changes by hand. Now that's VERY impressive.

The video was indeed recorded on audio tape - sound on one channel (the whir of the scanning disc can be heard in the background) and 32-line video with no sync pulses on the other channel. The mild shakes in the vertical direction are due to slight tape flutter from the Revox A77 tape recorder I was then using, at 7.5 inch/sec speed. It sounds like you've speeded the tape up a little to bring it up to 12.5 frames/second. My repetition rates were then a bit slow, so that we could radiate them on 160 metres if the opportunity arose - conserving bandwidth.

This particular video came about because my "Baird" equipment built in the early 1970s used miniature valves and various contrivances not contemporary with the 1930 era. Challenged by an "unbeliever" to show an AUTHENTIC reproduction of Baird's gear, I then set about building scanners, video amps and display gear PRECISELY contemporary with 1930. So I did some searching through library archives, found Tony Bridgewater's September 1930 article on the Long Acre gear and amplifiers, and quite simply built them EXACTLY as specified. I collected battery valves, block capacitors, the necessary wood, paper bakelite for panels and brass terminals for DC supplies.

The result was a flying spot Nipkow scanner camera using a 250 watt incandescent source, projecting a flying light spot onto the person being televised in a darkened room (my bedroom, in fact). A bank of 22 gas-filled caesium photocells bought as a job lot from a closing ham radio warehouse picked up the reflected light, then the signal was aperture corrected in exactly the way Bridgewater specified, using hopelessly microphonic BATTERY VALVES! The result was roughly as you see in that .avi. I attach a picture of part of my bedroom/workshop at the time that tape was made (1984) showing the photocell box just above the fss projector lens on the extreme right of the black and white picture below.

My only mild criticism of the .avi is that the definition is not quite what it could be - I attach some photographic stills taken off my CRT screen from the tape in 1984, and they're noticeably sharper. Some form of hf video pre-emphasis prior to conversion to .avi might be "the go" here.

Anyway Gary, congratulations. As far as the content goes, I'm becoming aware of that old biblical curse "beware, your past will find you out!"

Congratulations!

Chris Long VK3AML.
Attachments
Leonie_32Line_1984.jpg
(7.56 KiB) Downloaded 1554 times
VK3AML_Workshop_1984.jpg
(42.01 KiB) Downloaded 1546 times
chris_vk3aml
 

Restoring sync to sync-less video

Postby kareno » Sat Oct 16, 2010 12:58 am

Putting my DSP hat on here:

I'll bet that the autocorrelation function of an NBTV signal shows a peak in the region of 2.5 ms and another at 80 ms. By locating these peaks in a wide-ish band of delays one could determine the line and frame rates of an unstable or out-of-spec source.
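A quick numerical check of this on a synthetic signal (all parameters assumed: 11025 Hz sample rate, 32 lines at 12.5 fps, so the line period rounds to 28 samples here and the frame period to 896):

```python
# The autocorrelation of a frame-periodic, line-similar signal peaks at
# the line period (~2.5 ms) and again, slightly higher, at the frame
# period (~80 ms), just as suggested above.
import numpy as np

fs = 11025
line_lag = round(fs / 400)        # ~2.5 ms -> 28 samples here
frame_lag = 32 * line_lag         # ~80 ms  -> 896 samples here
rng = np.random.default_rng(4)

base = rng.random(line_lag)       # one line of picture detail
# 32 similar-but-not-identical lines make a frame; repeat the frame
frame = np.concatenate([base + rng.normal(0.0, 0.1, line_lag)
                        for _ in range(32)])
sig = np.tile(frame, 10)
sig = sig - sig.mean()

acf = np.correlate(sig, sig, mode="full")[len(sig) - 1:]
line_peak = 20 + np.argmax(acf[20:41])
frame_peak = 800 + np.argmax(acf[800:1001])
```

The frame-lag peak sits above the other line-lag multiples because both the picture content and the line-to-line variation repeat there, which is what lets the two rates be told apart.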

Oh my, another project to add to the pile :(
kareno
 

Postby Klaas Robers » Sat Oct 16, 2010 8:26 pm

Yes Karen, you are right, but that is just a small part of the problem. To place line sync pulses in a syncless video signal, the phase is extremely important. When the signal is digitised into a wav file, it is very easy to see the line frequency in a wave editor and count the number of samples, especially because in certain parts of the picture adjacent lines contain comparable information. So I did a lot with the human-sight correlation method.

I found out that first watching the syncless signal on a free-running Nipkow monitor is very informative. Then you see what you may expect in the waveform. A scene bright at one edge and dark at the other makes life easier; that is not visible in a frequency spectrum. However, in many old recordings vignetting of the optics puts the brightest parts always in the center of the line and frame. I have observed that too in color television cameras in the past.

The most difficult case is a panning or tilting camera. This looks just like loss of sync, so it is difficult to re-find the sync moments. My ultimate sync-finding program was adapted very much to the picture contents and characteristics of the disc camera of Eddy Greenhough. I was happy that I could restore the whole slowly moving "shot" in one run, and it was not needed to divide it into parts.

When I recorded it on CD-R at the Convention I wasn't aware that there were no sync pulses. Grant Dixon told me afterwards that Eddy had worried about that, but Grant told him that I would know what to do, or at least that I would notice it.

Gary, as far as I remember, for the toby jug I also did a run of gamma correction on the signal, as Eddie's camera/monitor, being a closed system, was simply linear. That was not done on the original signal, of course.
Klaas Robers
