"In general, a character can be represented in 1 byte or 2 bytes. Let's say 1-byte character is ANSI character - all English characters are represented through this encoding. And let's say a 2-byte character is Unicode, which can represent ALL languages in the world."<p>No. A character can be three or four bytes. I think he meant ASCII, not ANSI. And no, two byte characters are not "Unicode". I feel like this article might do a disservice to folks who aren't totally clear about Unicode before theyread it. I would strongly recommend reading Joel Spolsky's "The Absolute Minimum Every Software Developer Absolutely, Positively Must Know About Unicode and Character Sets (No Excuses!)" and being totally clear on that before trying to read this.
This is really simple (I haven't written Windows code in more than 7 years, but I still remember it):

CHAR - standard C character (one byte)

WCHAR - two-byte Unicode character

TCHAR - either CHAR or WCHAR, depending on your compiler options (hint: all Windows system functions have both versions to support ASCII or Unicode, and this is an easy way to write code once)

LPXXX - "long pointer" to XXX ("long" comes from the old times, just ignore it - this is a pointer)

LPCXXX - "long pointer" to a constant string (in C you can't just do "const LPXXX", since that would make the pointer itself constant, so the "const" keyword has to go "inside" the definition)