Large language models are powerful but remain prone to hallucination when queried about recent or emerging knowledge. The currently dominant strategies for combating hallucination rely heavily on web search, which tends to inherit errors and biases from the retrieved sources. We describe a reputation-based knowledge validation system that mitigates these risks. At its core is a golden dataset of verifiable, unbiased question-answer pairs that serves as an objective reference. We combine several proprietary and open-source models with our own semantic web searcher and the web search extensions currently available in major large language models to retrieve corroborating content. Each source is matched against the golden set and the model outputs to produce a reputation score that rewards semantic agreement and penalizes inconsistency. To make this validation useful to end users, we provide a confidence index that aggregates the reputation of the cited sources weighted by their semantic alignment with the generated answer. This index yields a clear reliability estimate, e.g., 70% confidence, improving trust and usability. The system balances established high-reputation websites against new sources, providing both stability and adaptability.
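The following is a minimal sketch of how such reputation updates and a confidence index might fit together; all thresholds, weights, the decay rate, and the source names are illustrative assumptions, not the parameters of the system described above, and the semantic similarities are assumed to be computed externally (e.g., as cosine similarity of embeddings).

```python
from dataclasses import dataclass


@dataclass
class Source:
    name: str
    reputation: float = 0.5  # new sources start from a neutral prior


def update_reputation(source: Source, golden_similarity: float,
                      reward_threshold: float = 0.8,
                      penalty_threshold: float = 0.4,
                      decay: float = 0.9) -> None:
    """Reward agreement with the golden set, penalize inconsistency.

    `golden_similarity` is a precomputed semantic similarity in [0, 1]
    between the source's content and the matching golden QA pair.
    """
    if golden_similarity >= reward_threshold:
        target = 1.0  # consistent with the golden reference: rewarded
    elif golden_similarity <= penalty_threshold:
        target = 0.0  # inconsistent with the reference: penalized
    else:
        target = source.reputation  # ambiguous: leave reputation as-is
    # Exponential moving average: established reputations stay stable,
    # while accumulating evidence still lets new sources rise or fall.
    source.reputation = decay * source.reputation + (1 - decay) * target


def confidence_index(cited: list[tuple[Source, float]]) -> float:
    """Aggregate cited sources into a 0-100 confidence estimate.

    `cited` pairs each source with its semantic alignment to the
    generated answer; alignment weights the source's reputation.
    """
    total_weight = sum(alignment for _, alignment in cited)
    if total_weight == 0:
        return 0.0
    score = sum(src.reputation * alignment for src, alignment in cited)
    return 100.0 * score / total_weight


wiki = Source("encyclopedia.example")
update_reputation(wiki, golden_similarity=0.92)  # rewarded
blog = Source("fresh-blog.example")
update_reputation(blog, golden_similarity=0.30)  # penalized

print(f"{confidence_index([(wiki, 0.9), (blog, 0.4)]):.0f}% confidence")
```

The moving-average update is one plausible way to realize the stability/adaptability balance mentioned above: a high-reputation site is not dethroned by a single disagreement, while a new source can earn trust through repeated agreement with the golden set.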