Here's the current video pipeline, as I understand it...
I have a 16-bit sample from a WAV file (at, say, 22050 Hz).
I have to convert this into an 8-bit value representing brightness of a "nixel" (a Nipkow pixel - I just coined that, so (TM))
So first I downshift (in the current case, by 6 bits) - effectively the same as dividing by 64. Strictly, a full-scale 16-bit value shifted right by 6 would still span 0-1023, but the videos I am testing happen to stay within 0-255 after the shift.
Then I lookup the gamma-corrected value and send that to the LED array.
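The downshift-then-lookup step might look something like this sketch. The gamma exponent (2.2) and the way the table is built are my assumptions - the post only says a gamma-corrected lookup exists:

```python
GAMMA = 2.2  # assumed exponent; the actual table may differ
SHIFT = 6    # divide by 64; happens to fit my test videos into 0-255

# Pre-computed gamma lookup table: linear 0-255 in, corrected 0-255 out.
gamma_lut = [round(255 * (i / 255) ** GAMMA) for i in range(256)]

def sample_to_nixel(sample_16bit):
    """Convert a 16-bit WAV sample to an 8-bit gamma-corrected nixel value."""
    linear = sample_16bit >> SHIFT
    linear = max(0, min(255, linear))  # guard against louder source material
    return gamma_lut[linear]
```

The clamp is there because the shift alone only guarantees 0-255 for material that stays under 1/4 of full scale.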
Now to get brightness and contrast...
I believe that brightness is just an offset added to the 0-255 value, with the result capped to the 0-255 range.
So, I have a signed brightness (-128 to 127) which is added to the 0-255 nixel value; the result is capped, and that becomes the new value.
I'm unsure if the brightness should be applied to the gamma-corrected value, or before gamma correction.
I'm pretty sure the analog NBTV solution is to pre-apply (i.e. before gamma correction). But I'm inclined to think applying it after gamma correction would be better.
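The offset-and-cap step itself is tiny; a sketch, with the function name mine:

```python
def apply_brightness(value, brightness):
    """Add a signed brightness offset (-128..127) to a 0-255 value and cap."""
    return max(0, min(255, value + brightness))
```

Whether `value` here is the raw 0-255 level or the gamma-corrected one is exactly the pre-apply vs post-apply question above - the function is the same either way, only its position in the pipeline changes.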
So I've been thinking about contrast and how that works.
I now think that it's a multiplier of the 0-255 value - effectively stretching or shrinking the spacing between values.
So, let's say we had nixels with values from 0 to 100 and our contrast was *2 - we would now have values from 0 to 200.
Again, the resultant value would be capped to 0-255 range.
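As a sketch of that multiply-and-cap idea (function name and the use of a float factor are my choices):

```python
def apply_contrast(value, contrast):
    """Scale a 0-255 value by a contrast factor and cap to 0-255."""
    return max(0, min(255, round(value * contrast)))
```

One thing worth noting: a straight multiplier like this also scales the black level's distance from zero, so a common variant pivots the scaling around mid-gray (128) instead - but the stretch-from-zero version above is what's described here.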
The pipeline could apply brightness, contrast, gamma in any order and I'm not entirely sure which would be advisable.
Really quite easy to test, I suppose.
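For testing, here's one plausible ordering as a single pipeline - contrast, then brightness, then gamma last. The ordering, gamma exponent, and names are all assumptions; the point is that each stage is a one-liner, so permuting them to compare results is cheap:

```python
GAMMA = 2.2  # assumed exponent
gamma_lut = [round(255 * (i / 255) ** GAMMA) for i in range(256)]

def clamp(v):
    """Cap a value to the 0-255 range."""
    return max(0, min(255, v))

def pipeline(sample_16bit, brightness=0, contrast=1.0, shift=6):
    """One candidate ordering: downshift, contrast, brightness, then gamma."""
    linear = clamp(sample_16bit >> shift)
    linear = clamp(round(linear * contrast))  # contrast first...
    linear = clamp(linear + brightness)       # ...then the brightness offset
    return gamma_lut[linear]                  # gamma applied last
```

Swapping the pre-apply hypothesis in is just a matter of moving the `gamma_lut` lookup earlier.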