
      Kore.ai Technical Blog

      One-stop for everything related to AI-first
      experience automation

      Chain Of Natural Language Inference (CoNLI)

      Hallucination is categorised into subcategories of Context-Free Hallucination, Ungrounded Hallucination &...


      Small & Medium-Sized Enterprises Enter The Conversational AI Arena

      Small and medium-sized enterprises (SMEs) should increasingly embrace digital innovation...


      Large Language Model Hallucination Mitigation Techniques

      This recently released study is a comprehensive survey of 32+ mitigation techniques to address hallucination.

      ...


      Validating Low-Confidence LLM Generation

      The basic approach has a hallucination detection phase and a mitigation phase.


      Random Chain-Of-Thought For LLMs & Distilling Self-Evaluation Capability

      Here I discuss the five emerging architectural principles for LLM implementations & how curation & enrichment of...


      Prompt Pipelines

      LLM-based applications can take the form of autonomous agents, prompt chaining, or prompt pipelines. These approaches...


      Active Prompting with Chain-of-Thought for Large Language Models

      By using AI-accelerated human annotation, this framework removes uncertainty and introduces reliability via a...


      Teaching LLMs To Say, “I don’t know”

      Instead of stating that they do not know, LLMs hallucinate. Hallucination can best be described as highly plausible,...


      Improving Text Embeddings with LLM Generated Synthetic Data

      Text embeddings play a pivotal role in retrieving semantically similar text for RAG...


      LLM Performance Over Time & LLM Task Contamination

      A recent study revealed that LLMs perform significantly better on datasets released before their training data creation...

