I don't feel the need to defend what some implementation with CHAR_BIT of 16 or 32 actually does, but I do notice that if sizeof(int) == 1, then (unsigned char)EOF == UCHAR_MAX (assuming EOF is -1, as it usually is). Unless sizeof(int) > 1, there is an unavoidable problem distinguishing EOF from some valid character written by fputc, because fgetc cannot return every unsigned char value and EOF as distinct int values. Just changing the example to write (UCHAR_MAX - 1) instead at least avoids this possible 'excuse' for the behavior.

Good luck with the standards committee.
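
To make the collision concrete, here is a minimal sketch of my own (not the original example): it writes UCHAR_MAX and reads it back. On the usual sizeof(int) > 1 implementations the first test never fires; on a hypothetical sizeof(int) == 1 implementation with EOF == -1, the value fgetc returns for that character is indistinguishable from EOF.

#include <stdio.h>
#include <limits.h>

int main(void)
{
    FILE *f = fopen("demo.bin", "wb+");
    if (f == NULL)
        return 1;

    /* Write the problematic value: on a sizeof(int) == 1
       implementation with EOF == -1, this character converts
       back to an int equal to EOF. */
    fputc(UCHAR_MAX, f);
    rewind(f);

    int c = fgetc(f);
    if (c == EOF)
        puts("UCHAR_MAX read back as EOF -- ambiguous");
    else
        printf("read back %d, distinct from EOF here\n", c);

    /* Writing UCHAR_MAX - 1 sidesteps the collision: with the
       usual wraparound conversion it reads back as -2 when
       sizeof(int) == 1, which is still not EOF. */
    rewind(f);
    fputc(UCHAR_MAX - 1, f);
    rewind(f);
    c = fgetc(f);
    printf("UCHAR_MAX - 1 reads back as %d\n", c);

    fclose(f);
    remove("demo.bin");
    return 0;
}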