
SoylentNews is people

posted by hubie on Tuesday May 09, @06:01AM   Printer-friendly
from the black-boxes-around-black-boxes dept.

AI chatbots often make headlines for their strange behavior. Nvidia's new software could fix that:

Nvidia, the tech giant responsible for inventing the first GPU -- now a crucial piece of technology for generative AI models -- unveiled new software on Tuesday that could solve a big problem with AI chatbots.

The software, NeMo Guardrails, is supposed to ensure that smart applications, such as AI chatbots, powered by large language models (LLMs) are "accurate, appropriate, on topic and secure," according to Nvidia.

AI developers can use the open-source software to set up three types of boundaries for AI models: topical, safety, and security guardrails.

[...] The safety guardrails are an attempt to tackle the issue of misinformation and hallucinations.

When employed, these guardrails ensure that AI applications respond with accurate and appropriate information. For example, the software can enforce bans on inappropriate language and require that responses cite credible sources.
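The guardrail pattern described above -- intercepting a prompt before it reaches the model and filtering the response before it reaches the user -- can be sketched in a few lines. This is not Nvidia's implementation; the topic and word lists and function names below are hypothetical, purely to illustrate the topical and safety rails the article mentions:

```python
import re

# Hypothetical rail definitions -- NOT Nvidia's code, just the concept.
BLOCKED_TOPICS = {"politics", "medical advice"}  # topical rail (assumed list)
BANNED_WORDS = {"darn"}                          # safety rail (assumed list)

def topical_rail(prompt: str) -> bool:
    """Return True if the prompt stays on allowed topics."""
    lowered = prompt.lower()
    return not any(topic in lowered for topic in BLOCKED_TOPICS)

def safety_rail(response: str) -> str:
    """Mask banned words in the model's response."""
    for word in BANNED_WORDS:
        response = re.sub(word, "*" * len(word), response, flags=re.IGNORECASE)
    return response

def guarded_chat(prompt: str, model) -> str:
    """Run a prompt through the rails, calling the model only if allowed."""
    if not topical_rail(prompt):
        return "Sorry, I can't help with that topic."
    return safety_rail(model(prompt))

# Usage with a stand-in "model" that just echoes the prompt:
echo_model = lambda p: f"You said: {p}"
print(guarded_chat("tell me about politics", echo_model))  # refused by the topical rail
print(guarded_chat("say darn", echo_model))                # banned word masked
```

Note that the topical rail here refuses the prompt before the model is ever called, while the safety rail post-processes whatever the model returns; real guardrail systems typically use an LLM or classifier for these checks rather than keyword matching.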

[...] Nvidia claims that virtually all software developers will be able to use NeMo Guardrails, since it is simple to use, works with a broad range of LLM-enabled applications, and works with the tools that enterprise app developers use, such as LangChain.

The company will be incorporating NeMo Guardrails into its Nvidia NeMo framework, most of which is already available as open-source code on GitHub.
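For the curious, the NeMo Guardrails project documents a small modeling language, Colang, for declaring these boundaries. The fragment below is a sketch based on that documented syntax (exact keywords and phrasing may differ by version); it defines a topical rail that deflects questions about politics:

```
define user ask about politics
  "what do you think about the election?"
  "which party should I vote for?"

define bot refuse politics
  "I'm sorry, I can't discuss political topics."

define flow politics
  user ask about politics
  bot refuse politics
```

A developer pairs such flow definitions with a YAML configuration naming the underlying LLM, and the runtime matches incoming messages against the declared user intents before the model answers.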


Original Submission

This discussion was created by hubie (1068) for logged-in users only.
  • (Score: 3, Touché) by Anonymous Coward on Tuesday May 09, @06:26AM

    by Anonymous Coward on Tuesday May 09, @06:26AM (#1305463)

Yet they can't seem to fix their GPU memory leaks. What's up with that?

  • (Score: 0, Funny) by Anonymous Coward on Tuesday May 09, @06:42AM (1 child)

    by Anonymous Coward on Tuesday May 09, @06:42AM (#1305467)

    It's the only way to stop WW3.

  • (Score: 3, Touché) by exa on Tuesday May 09, @09:16AM

    by exa (9931) on Tuesday May 09, @09:16AM (#1305482)

    We have paid for safety and applied safety. All is OK now. It's probably you who's hallucinating.

  • (Score: 2) by looorg on Tuesday May 09, @10:07AM

    by looorg (578) on Tuesday May 09, @10:07AM (#1305485)

So it's a censoring filter or leash that goes on top of everything? After all, it can't stop people from entering the input question, so the question then is: does it still perform the query, or does it stop it before it goes on and wastes resources on a question it can't or won't answer? Will we know, or be told, that the answer was crippled?

  • (Score: 2) by hendrikboom on Wednesday May 10, @07:36PM

    by hendrikboom (1125) on Wednesday May 10, @07:36PM (#1305772) Homepage Journal

    Do they really think they've found the answer to AI-generated crap?
Or are they just peddling an inadequate tool to sell more GPUs?
