
      Kore.ai Technical Blog

      One-stop for everything related to AI-first
      experience automation

      Meta Taxonomy Of Large Language Model Correction & Refinement

      Introduction

This study presents an insightful and comprehensive taxonomy for improving LLM output. The taxonomy...

      Considering Large Language Model Reasoning Step Length

      When using Chain of Thought, what is the optimal number of steps to use?

      Chain Of Natural Language Inference (CoNLI)

      Hallucination is categorised into subcategories of Context-Free Hallucination, Ungrounded Hallucination &...

      Small & Medium-Sized Enterprises Enter The Conversational AI Arena

      Introduction

When it comes to digital innovation, small and medium-sized enterprises (SMEs) are increasingly embracing...

      Large Language Model Hallucination Mitigation Techniques

This recently released study is a comprehensive survey of 32+ mitigation techniques to address hallucination...

      Validating Low-Confidence LLM Generation

      The basic approach has a hallucination detection phase and a mitigation phase.

      Random Chain-Of-Thought For LLMs & Distilling Self-Evaluation Capability

      Here I discuss the five emerging architectural principles for LLM implementations & how curation & enrichment of...

      Prompt Pipelines

      LLM-based applications can take the form of autonomous agents, prompt chaining or prompt pipelines. These approaches...

      Active Prompting with Chain-of-Thought for Large Language Models

By using AI-accelerated human annotation, this framework reduces uncertainty and introduces reliability via a...

      Teaching LLMs To Say, “I don’t know”

Instead of stating that they do not know, LLMs hallucinate. Hallucination can best be described as highly plausible,...