Bias
In Large Language Models, bias refers to systematic errors that stem from the training data, such as stereotypes that attribute certain characteristics to particular races or groups of people.
Bias can cause models to generate offensive and harmful content.