I've been doing some thinking and experimentation on bit depth for digitally stored NBTV signals. For example, Klass's and Vic's colour encoding method uses values from 00h to 6Fh (112 decimal levels) for the luminance channel, I believe, which seems fine. Even 8 bits for the luminance (00h to FFh, 256 levels) might seem like overkill.
The first picture below is 480x640 with an 8-bit depth; the second is 4-bit, and contouring is visible on her cheek; the final one is 2-bit. Even though it's really coarse, I would still know who it was even if I had never seen the picture before, but it's not very nice. I used Tiff files, so there are no compression artifacts.
No dithering has been used in these pictures; it's straight truncation.
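For anyone who wants to try this themselves, here's a minimal sketch of straight truncation (this is my own illustration, not the tool actually used for the pictures): keep the top n bits of each 8-bit luminance sample and drop the rest, with no rounding and no dither.

```python
def truncate_to_bits(value, bits):
    """Truncate an 8-bit luminance value (0-255) to the given bit depth,
    then shift back up to the 0-255 range so the levels are visible."""
    shift = 8 - bits
    level = value >> shift      # drop the low-order bits (straight truncation)
    return level << shift       # rescale; no rounding, no dithering

# At 4 bits there are only 16 levels, so nearby inputs collapse together:
print(truncate_to_bits(200, 4))  # 192
print(truncate_to_bits(207, 4))  # 192  (same band as 200)
print(truncate_to_bits(208, 4))  # 208  (next band starts here)
```

It's exactly these flat bands of identical values that show up as contouring on smooth areas like the cheek.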
The last two have been converted to the NBTV format of 32x48: the first with a bit depth of 8 bits, the second with just 4 bits. You'll need some software to zoom in.
The interesting thing is that there is not much difference. I'm not suggesting we use 4-bit, don't worry! But with the limited resolution we have, it's almost impossible to create contouring.
Where am I going with this? I'm not sure; I thought I'd just mention it and see what others have to say. What I have noticed is that six bits (64 decimal levels) is where I start to notice the discrete levels on CRT displays, but only just. At five bits the levels are clearly discernible.
Steve A.
I was hoping to upload these as .tif files, but for some reason I couldn't, and .bmp isn't allowed, so I've had to resort to .gif. The 16-colour 640 image is much better in .tif format for some reason.