
LLM Guard: Open-Source Toolkit for Securing Large Language Models

LLM Guard provides an extensive set of evaluators for both the inputs and outputs of LLMs, offering prompt and response sanitization, harmful-language detection, and data-leakage detection.
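
A minimal sketch of how these input and output evaluators can be chained around an LLM call, using the scanner classes and scan_prompt/scan_output helpers shown in the LLM Guard README; the specific scanners chosen here and the placeholder call_llm function are illustrative assumptions, not part of this article.

```python
# Sketch: sanitize a prompt, call a model, then screen the response.
# Scanner names follow the LLM Guard README; call_llm() is a hypothetical stand-in.
from llm_guard import scan_prompt, scan_output
from llm_guard.input_scanners import Anonymize, PromptInjection, Toxicity
from llm_guard.output_scanners import Deanonymize, Sensitive
from llm_guard.vault import Vault

vault = Vault()  # holds original values replaced during anonymization

# Input-side evaluators: mask PII, flag harmful language, detect prompt injection.
input_scanners = [Anonymize(vault), Toxicity(), PromptInjection()]
# Output-side evaluators: restore masked placeholders, catch leaked sensitive data.
output_scanners = [Deanonymize(vault), Sensitive()]


def call_llm(prompt: str) -> str:
    """Hypothetical stand-in for a real LLM client call."""
    return "The parties agreed to the stated terms."


prompt = "Summarize the contract John Doe signed with ACME Corp."

sanitized_prompt, input_valid, input_scores = scan_prompt(input_scanners, prompt)
if not all(input_valid.values()):
    raise ValueError(f"Prompt rejected by input scanners: {input_scores}")

response = call_llm(sanitized_prompt)

sanitized_response, output_valid, output_scores = scan_output(
    output_scanners, sanitized_prompt, response
)
if not all(output_valid.values()):
    raise ValueError(f"Response rejected by output scanners: {output_scores}")

print(sanitized_response)
```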
