BRICS Tether

AI21 Labs has introduced an anti-hallucination capability for GPT-style chatbots.

AI21 Labs has introduced “Contextual Answers,” a question-answering engine for large language models (LLMs). The engine lets users upload their own document libraries and restricts the model’s outputs to that information. The launch of AI products like ChatGPT has energized the AI industry, but concerns about trustworthiness have kept many businesses from adopting the technology.

Research suggests that employees spend nearly half of their workdays searching for information, an opportunity well suited to chatbots that can perform search functions. Most chatbots, however, are not designed for enterprise use. To bridge this gap, AI21 developed Contextual Answers, which lets users supply their own data and document libraries. According to AI21, this allows users to steer the model’s answers without retraining it, addressing barriers to adoption such as cost, complexity, and a lack of specialization in organizational data.
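AI21 has not published implementation details, but the general idea of grounding a model’s answer in a user-supplied library can be sketched in a few lines. The toy retriever below scores documents by simple keyword overlap and prepends the best match to the prompt; all names and the scoring scheme here are hypothetical, not AI21’s actual API.

```python
# Toy sketch of grounding answers in a user-supplied document library.
# The relevance score here is a crude word-overlap metric, used only
# to illustrate the retrieval step; a real system would use embeddings.

def score(query: str, doc: str) -> float:
    """Fraction of query words that also appear in the document."""
    q_words = set(query.lower().split())
    d_words = set(doc.lower().split())
    return len(q_words & d_words) / len(q_words) if q_words else 0.0

def retrieve_context(query: str, library: list[str]) -> str:
    """Return the most relevant document to prepend to the model prompt."""
    return max(library, key=lambda doc: score(query, doc))

library = [
    "Our refund policy allows returns within 30 days of purchase.",
    "Support is available Monday through Friday, 9am to 5pm.",
]
context = retrieve_context("what is the refund policy", library)
prompt = (
    f"Answer using only this context:\n{context}\n\n"
    "Question: what is the refund policy"
)
```

Because the model is instructed to answer only from the retrieved context, the user’s documents steer the output without any retraining of the underlying model.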

One challenge in developing useful LLMs is teaching them to express a lack of confidence. Today, when a user queries a chatbot, it typically produces a response even when it lacks sufficient information. Rather than giving a low-confidence answer such as “I don’t know,” LLMs tend to generate text with no factual basis, a failure mode researchers call “hallucination.” AI21 claims that Contextual Answers mitigates this problem by outputting only information grounded in user-provided documentation, or outputting nothing at all.
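The “answer or say nothing” behavior described above amounts to an abstention rule: if no uploaded document is relevant enough to the question, refuse rather than generate. A minimal sketch of that rule, again using a hypothetical word-overlap score and threshold rather than AI21’s actual mechanism:

```python
# Toy sketch of abstention: refuse to answer when no document in the
# user's library clears a relevance threshold, instead of hallucinating.

def overlap(query: str, doc: str) -> float:
    """Fraction of query words found in the document (toy relevance score)."""
    q = set(query.lower().split())
    return len(q & set(doc.lower().split())) / len(q) if q else 0.0

def answer(query: str, library: list[str], threshold: float = 0.4) -> str:
    """Answer from the best-matching document, or abstain below threshold."""
    best = max(library, key=lambda d: overlap(query, d))
    if overlap(query, best) < threshold:
        return "Answer not in documents."  # abstain rather than guess
    return f"Based on your documents: {best}"

docs = ["The quarterly report was filed on March 3."]
```

With this rule, a question the library can answer returns a grounded response, while an out-of-scope question produces an explicit refusal instead of fabricated text.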

In sectors where accuracy outweighs automation, such as finance and law, generative pretrained transformer (GPT) systems have yielded mixed results. Experts continue to advise caution when using them in finance because of their propensity to hallucinate or conflate information. In the legal sector, a lawyer was recently fined and sanctioned after relying on outputs generated by ChatGPT in a case.

By incorporating relevant data and intervening before the system can generate non-factual information, AI21’s Contextual Answers appears to offer a solution to the hallucination problem. That could open the door to broader adoption, particularly in fintech, where traditional financial institutions have been hesitant to embrace GPT technology; the cryptocurrency and blockchain communities, too, have had limited success with chatbots.

Overall, AI21 Labs’ Contextual Answers could help close the trustworthiness gap that has held back LLM adoption. By uploading personalized data libraries, businesses gain a way to keep AI outputs accurate and relevant, with significant implications for sectors such as finance and law and a clearer path to widespread adoption of AI-powered chatbots.
