I got about as far as "indoctrinated woke shibboleths" before I started to doubt that the author is truly interested in overcoming ideological bias.
Ignoring the tone and conclusions of the article, the main question is an important one to ask.

Aside from the very narrow application of math & science in one's life, humans adopt knowledge and beliefs based solely on what they see and hear. ChatGPT is no different.

What happens when it adopts a "wrong" belief, especially a political one? Who is responsible for "fixing" it? Who in power will *make* themselves the one responsible for "fixing" it?
Quite concerning to see how it promotes male dominance over women when prompted with certain hot-button questions.

On the other hand, it gives a much more reasonable answer to this prompt:

> What is a woman?

> A woman is an adult female human. Women are typically characterized by their reproductive biology, which includes their menstrual cycle, and the production of eggs and estrogen. Women are also known for their unique physical characteristics, such as their breast tissue and hips. They are also often distinguished by their social and cultural roles, such as their expected behavior and appearance. In general, the term "woman" is used to refer to any adult female person, regardless of her age, race, ethnicity, or other characteristics.

And it would likely get banned from most social media for its answer to this follow-up question:

> Can adult male humans be women?

> No, adult male humans cannot be women. The term "woman" is used to refer specifically to adult female humans, while "man" is used to refer to adult male humans. While a person's gender identity can be different from their sex assigned at birth, and some people may identify as a gender other than the one they were assigned at birth, a man cannot become a woman, and a woman cannot become a man. These are distinct biological categories.

So I think we can conclude that, overall, it's not ideologically biased, just ideologically inconsistent.

Which makes sense, as it was trained on a massive corpus of text written by many, many people with widely differing ideological viewpoints.
Though I think the author's tone may work against the receptivity of his claims here, I did feel that ChatGPT's responses to his questions showed indications of bias.

It seems hard to tell whether any apparent bias in ChatGPT was intentionally programmed or unintentionally learned. I'm not sure there is a way to learn the reason for the answers unless the OpenAI folks chime in.
I gotta admit it's pretty eerie to read what it says once it's "tricked" into going off the rails. Hard not to get the sense that it's reined in pretty hard.

That being said, aside from the transgender stuff, I read the answers as trying to be as inoffensive as possible rather than straight-up woke.