AI 脳 ai-know.
CONCEPT · STUB

LLM Output Quality

LLM Output Quality refers to the composite evaluation of the accuracy, consistency, usefulness, and factual correctness of text generated by large language models. A defining challenge is that LLMs can produce fluent, plausible-sounding output that is factually incorrect — known as hallucination — which makes quality difficult to assess without independent verification. In high-stakes applications such as vulnerability disclosure, medical decision support, or legal analysis, output quality failures can have severe real-world consequences. Improving LLM output quality is an active research area spanning better training data curation, reinforcement learning from human feedback (RLHF), self-consistency methods, and output verification tooling.
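One of the mitigations mentioned above, self-consistency, can be sketched in a few lines: sample the model several times on the same prompt and keep the majority answer, using the agreement ratio as a rough confidence signal. The `generate` callable below is a hypothetical stand-in for any LLM sampling call, not a specific API:

```python
from collections import Counter

def self_consistency_vote(generate, prompt, n_samples=5):
    """Sample n answers and return the most frequent one with its agreement ratio.

    `generate` is a placeholder for any function that samples one final
    answer string from a model for the given prompt.
    """
    answers = [generate(prompt) for _ in range(n_samples)]
    winner, count = Counter(answers).most_common(1)[0]
    return winner, count / n_samples

# Toy stand-in "model": a fixed sequence of sampled answers for demonstration.
_samples = iter(["42", "42", "41", "42", "42"])
answer, agreement = self_consistency_vote(lambda p: next(_samples), "Q?")
# → answer == "42", agreement == 0.8
```

A low agreement ratio does not prove the majority answer is wrong, but it flags outputs that deserve independent verification.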

※ Auto-generated stub — requires completion