Just something I've been thinking about --- we're playing a video signal from a CD, and the pulses embedded in the signal are used to synchronise the video such that the Nipkow disc rotates at 750 rpm. We get a very flickery picture at 12.5 Hz, which is roughly half of what is needed to give a 'flicker-free' image.
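To put some rough numbers on that (the 32 lines per frame and the 44.1 kHz CD sample rate below are my assumptions, so adjust if your setup differs), a quick Python back-of-envelope:

    RPM = 750
    FPS = RPM / 60.0                       # 12.5 frames (disc revolutions) per second
    FRAME_MS = 1000.0 / FPS                # 80 ms per frame
    LINES = 32                             # assumed club standard
    LINE_MS = FRAME_MS / LINES             # 2.5 ms per line
    FS = 44100                             # CD sample rate
    SAMPLES_PER_LINE = FS / (FPS * LINES)  # 110.25 samples per line
    print(FPS, FRAME_MS, LINE_MS, SAMPLES_PER_LINE)   # 12.5 80.0 2.5 110.25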
So I was thinking -- what would be the consequences of modifying the input waveform (i.e. preprocessing it with some software) to increase the frame rate? If the waveform were 'scrunched', for example, I would assume that up to a point the 'standard' club circuits would cope. They'd (at a guess) be happy working at 15 Hz. Maybe 18 Hz... what's the limit?
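For clarity, 'scrunching' here means something as crude as the sketch below (just a sketch, assuming the recording is a mono numpy array at the CD rate): resample the whole waveform so it plays back faster. It speeds the frame pulses and the picture content up together, which is exactly the snag described next.

    import numpy as np

    def scrunch(signal, factor):
        # Time-compress the waveform by `factor`: 1.2 would give 15 Hz frames,
        # 1.44 would give 18 Hz, 2.0 would give 25 Hz.  Linear interpolation
        # is crude, but fine for a thought experiment.
        n_out = int(len(signal) / factor)
        src_positions = np.arange(n_out) * factor
        return np.interp(src_positions, np.arange(len(signal)), signal)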
Of course, just 'scrunching' the waveform would also increase the video speed. One would need to write some software that effectively transposes the 12.5 fps signal to 25 fps (i.e. keeps the playback speed the same but doubles the frame rate).
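One possible shape for that software, and it is only a sketch (I'm assuming the club standard of 32 lines at 12.5 fps, a 44.1 kHz mono recording, and I'm ignoring sync pulses entirely, which would have to be regenerated): split each 32-line frame into its even and odd lines and play those out as two consecutive 16-line frames. The overall duration is unchanged, so the picture runs at the same speed, but the frame rate doubles and the vertical resolution halves.

    import numpy as np

    FS = 44100                    # CD sample rate (assumed)
    FPS_IN, LINES_IN = 12.5, 32   # assumed club standard
    SPF = FS / FPS_IN             # 3528.0 samples per input frame
    SPL = SPF / LINES_IN          # 110.25 samples per line (not an integer)

    def get_line(signal, start):
        # Pull one line, resampled to a round 110 samples, starting at a
        # fractional sample position within the waveform.
        n = int(round(SPL))
        idx = start + np.arange(n) * (SPL / n)
        return np.interp(idx, np.arange(len(signal)), signal)

    def transpose_to_25fps(signal):
        # Emit two 16-line frames for every 32-line input frame.
        out = []
        for f in range(int(len(signal) // SPF)):
            lines = [get_line(signal, f * SPF + k * SPL) for k in range(LINES_IN)]
            out.append(np.concatenate(lines[0::2]))   # even lines, roughly 40 ms
            out.append(np.concatenate(lines[1::2]))   # odd lines, roughly 40 ms
        return np.concatenate(out)

Whether even-then-odd is the right split, or whether each decimated frame should simply be repeated, I don't know; either way the playback speed is preserved while the disc has to spin twice as fast.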
I do suspect that one of the limitations will be the synchronisation with the timing holes on the Nipkow disc. I recall reading that some sort of timer was used, which kept getting reset by each hole pulse *except* at the missing hole, and the resulting timeout was used to generate the frame pulse. Of course, if the disc runs faster and the next hole comes along before the timer expires, we miss that pulse. But leaving that problem aside...
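To check my own understanding of that scheme, here is the logic as I picture it (the numbers are purely illustrative, not taken from any real club circuit): every hole pulse restarts a timer, and only the wider gap at the missing hole lets the timer run long enough to fire the frame pulse.

    def frame_pulses(pulse_times, timeout):
        # Times (in seconds) at which the timer expires, i.e. the frame pulses.
        # The timeout must sit between the normal hole spacing and the double
        # spacing left by the missing hole.
        frames = []
        for prev, nxt in zip(pulse_times, pulse_times[1:]):
            if nxt - prev > timeout:       # timer ran out before being reset
                frames.append(prev + timeout)
        return frames

    # At 12.5 rps with 32 hole positions (one missing): holes every 2.5 ms,
    # a 5 ms gap once per revolution, and a 3.5 ms timeout catches it.
    pulses = [i * 0.0025 for i in range(32) if i != 31] + [0.080]
    print(frame_pulses(pulses, 0.0035))    # one frame pulse, at about 78.5 ms
    # Spin the disc twice as fast and the gap shrinks to 2.5 ms, which never
    # exceeds that same 3.5 ms timeout: no frame pulse at all, hence the worry.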
I am also aware that we would be significantly reducing our vertical resolution. But the vertical resolution looks so 'good' on my monitor that I'm wondering how the picture would look at half the resolution but double the frame rate. I would assume that brightness would not suffer: although we're seeing each hole whizz by in half the time (so half the light per pass), we are also seeing each hole twice as often, so the overall effect is the same brightness.
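A quick sanity check on the brightness argument, in arbitrary light units (again assuming 32 holes):

    HOLES = 32                           # assumed
    flux_now  = 1.0 * (12.5 * HOLES)     # light per pass times passes per second
    flux_fast = 0.5 * (25.0 * HOLES)     # half the dwell time, twice the passes
    print(flux_now, flux_fast)           # 400.0 400.0, the same average light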
So my question is -- abandoning the ideal of the club-standard 12.5 Hz, just how much better would images look at higher frame rates and lower vertical resolution? Am I missing something in my assumptions?