Exploring NeMo Guardrails’ practical use cases

Image generated by the author with DALL-E 3

On the topic of LLM security, we have so far explored the OWASP Top 10 for LLM Applications, Llama Guard, and Lighthouz AI from different angles. Today, we are going to explore NeMo Guardrails, an open-source toolkit developed by NVIDIA for easily adding programmable guardrails to LLM-based conversational systems.

How is NeMo Guardrails different from Llama Guard, which we dived into in a previous article? Let’s put them side by side and compare their features.

Table by author

As we can see, Llama Guard and NeMo Guardrails are fundamentally different:

  • Llama Guard is an input-output safeguard model: a large language model fine-tuned from Llama 2. It ships with six unsafe content categories, and developers can tailor moderation to their use cases by extending that taxonomy with additional categories (a sketch of such an extension follows this list).
  • NeMo Guardrails is a much more comprehensive LLM security toolkit. It offers a broader set of programmable guardrails to control and guide LLM inputs and outputs: content moderation, topic guidance (steering conversations toward approved topics), hallucination prevention (reducing factually incorrect or nonsensical output), and response shaping (see the configuration sketch below the architecture diagram).
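To make the first point concrete, here is a minimal sketch of what an extended Llama Guard taxonomy can look like. The six default categories come from the Llama Guard paper; the seventh category and the prompt-building helper are hypothetical, added purely for illustration.

```python
# The six default categories from the Llama Guard paper, plus a hypothetical
# seventh category (O7) showing how the taxonomy can be extended.
CUSTOM_CATEGORIES = """O1: Violence and Hate.
O2: Sexual Content.
O3: Criminal Planning.
O4: Guns and Illegal Weapons.
O5: Regulated or Controlled Substances.
O6: Self-Harm.
O7: Financial Advice.
Should not
- Recommend specific stocks, funds, or other investments."""


def build_llama_guard_prompt(user_message: str) -> str:
    """Assemble the moderation prompt Llama Guard is fine-tuned to expect."""
    return f"""[INST] Task: Check if there is unsafe content in 'User' messages in conversations according to our safety policy with the below categories.

<BEGIN UNSAFE CONTENT CATEGORIES>
{CUSTOM_CATEGORIES}
<END UNSAFE CONTENT CATEGORIES>

<BEGIN CONVERSATION>

User: {user_message}

<END CONVERSATION>

Provide your safety assessment for 'User' in the above conversation:
- First line must read 'safe' or 'unsafe'.
- If unsafe, a second line must include a comma-separated list of violated categories. [/INST]"""
```

Llama Guard then answers with `safe` or `unsafe` plus the violated category codes, which makes the verdict trivial to parse programmatically.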
Image source: NeMo Guardrails GitHub Repo README
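To make the NeMo Guardrails side equally concrete, here is a minimal sketch of a topical rail written in Colang, the toolkit's modeling language, loaded through its RailsConfig and LLMRails classes. The politics flow, the example utterances, and the model choice are illustrative; the overall pattern follows the project's getting-started examples.

```python
from nemoguardrails import LLMRails, RailsConfig

# A topical rail in Colang: if the user asks about politics, the bot responds
# with a canned refusal instead of letting the LLM answer freely.
colang_content = """
define user ask politics
  "Which party should I vote for?"
  "What do you think about the president?"

define bot refuse politics
  "I'm a support assistant, so I'd rather not discuss politics."

define flow politics
  user ask politics
  bot refuse politics
"""

# Minimal model configuration; assumes OPENAI_API_KEY is set in the environment.
yaml_content = """
models:
  - type: main
    engine: openai
    model: gpt-3.5-turbo-instruct
"""

config = RailsConfig.from_content(
    colang_content=colang_content, yaml_content=yaml_content
)
rails = LLMRails(config)

response = rails.generate(
    messages=[{"role": "user", "content": "Which party should I vote for?"}]
)
print(response["content"])  # the canned refusal, not a political answer
```

The same RailsConfig can also carry input and output moderation rails and fact-checking rails alongside topical flows, which is where NeMo Guardrails' extra breadth over Llama Guard shows.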

Let’s dive into the implementation details of adding NeMo Guardrails to a RAG pipeline built with RecursiveRetrieverSmallToBigPack, an advanced retrieval pack from LlamaIndex. How does this pack work? It takes our document and breaks it down, starting with the larger sections (parent chunks) and chopping them up into smaller pieces (child chunks). It links each child chunk to its parent…
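Before we layer the guardrails on top, here is a minimal sketch of the pack running on its own. The data directory and the query string are placeholders, and the import paths assume a post-v0.10 LlamaIndex layout.

```python
from llama_index.core import SimpleDirectoryReader
from llama_index.core.llama_pack import download_llama_pack

# Download the pack's source code locally (a one-time step).
RecursiveRetrieverSmallToBigPack = download_llama_pack(
    "RecursiveRetrieverSmallToBigPack", "./recursive_retriever_stb_pack"
)

# Load the source document(s); "./data" is a placeholder directory.
documents = SimpleDirectoryReader("./data").load_data()

# The pack builds the parent/child chunk hierarchy and a query engine
# internally; it uses OpenAI models by default, so an API key is assumed.
pack = RecursiveRetrieverSmallToBigPack(documents)

response = pack.run("What does the document say about X?")  # placeholder query
print(response)
```

The design intuition behind small-to-big: small child chunks make retrieval precise, while their linked parent chunks give the LLM enough surrounding context to synthesize a grounded answer.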