
AI Reduces the World to Stereotypes

8 points | by jrepinc | over 1 year ago

1 comment

IronWolve | over 1 year ago
A common issue: when you prompt for "woman," many models default to Asian faces because that is what they were mostly trained on. It's a well-known problem, but it seems to be working itself out with newer mixed model sets. (Specify the ethnicity in your prompt!)

Anyone who runs prompts across different models knows a generic prompt will give you the most common race in that model's training set.

It's very common to test prompts with multiple subjects [Japanese, Chinese, Korean, etc.] or themes [cyberpunk, western, medieval, etc.] when evaluating a model's results.

Weights: this is another area where you learn to specify the weight and the order of the prompt. Put "barbie" first or last and the result changes with the weight. Prompt writers also know you can specify the exact amount you want, e.g. (barbie:1.5).

Good prompt writers know the models are still under development and often bad at giving you what you want; the tech is still new. You have to craft the prompt to work around the flawed training data.

To suggest the models are racist is naive about how models are trained.

If you want models tuned for other countries, they need photos to train on. With most community model sets coming out of Japan or America, you can probably guess that banana leaf plates are not going to be generated by default. Prompt for a banana leaf plate explicitly, or train your own model just for that. Lots of LoRAs are created for smaller items that the base models don't represent well.

Check civitai.com under LoRAs and you can see some of the gaps people are trying to fill.