Bias

From Civitai Wiki

In [[Large language model|Large Language Models]], bias refers to systematic errors that result from the training data, such as stereotypes attributing certain characteristics to particular races or groups of people.


Bias can cause models to generate offensive and harmful content.
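
A simple way to see this effect in practice is to compare a model's completions for prompts that differ only in which group they mention; systematically different completions suggest associations absorbed from the training data. The following is a minimal sketch in Python, assuming the Hugging Face <code>transformers</code> library and a small local model such as <code>gpt2</code>, neither of which is specific to this page.

<syntaxhighlight lang="python">
# Minimal sketch: probe for biased associations by swapping the group term
# in an otherwise identical prompt and comparing the completions.
# Assumes the Hugging Face transformers library and the small "gpt2" model,
# both illustrative choices rather than anything prescribed by this page.
from transformers import pipeline

generator = pipeline("text-generation", model="gpt2")

template = "The {group} applicant was described by the interviewer as"
groups = ["young", "elderly", "male", "female"]

for group in groups:
    prompt = template.format(group=group)
    result = generator(prompt, max_new_tokens=15, do_sample=True,
                       num_return_sequences=1)
    completion = result[0]["generated_text"][len(prompt):].strip()
    print(f"{group}: {completion}")
</syntaxhighlight>

If the completions for one group skew consistently more negative than for another, that is the kind of training-data bias described above.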