      Kore.ai Technical Blog

      One-stop for everything related to AI-first experience automation

      A Study Comparing RAG & Fine-Tuning For Knowledge Base Use-Cases

      The selection of technology should be driven primarily by the requirements and goals of a particular use-case or...

      RAT — Retrieval Augmented Thoughts

      Let me start with a few general observations…

      There is a tension between achieving efficiency within...

      A Short History Of RAG

      One of the most popular themes currently around Large Language Models is the idea of Retrieval Augmented Generation...

      Chain-of-Instructions (CoI) Fine-Tuning

      This approach draws inspiration from Chain-of-Thought (CoT) prompting, which generates step-by-step rationales from...

      Performing Multiple LLM Calls & Voting On The Best Result Are Subject To Scaling Laws

      More LLM calls enhance performance on easy queries but diminish it on hard ones.
      So what scaling laws can be...
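      The call-and-vote pattern the post examines can be sketched in a few lines. This is a minimal illustration, not the setup from the underlying research; call_llm is a hypothetical stand-in for whatever model client you use.

```python
from collections import Counter

def call_llm(prompt: str) -> str:
    """Hypothetical stand-in for a single LLM call; replace with your own client."""
    raise NotImplementedError

def vote(prompt: str, n_calls: int = 5) -> str:
    """Issue n_calls independent LLM calls and return the majority answer.

    More calls tend to help on easy queries, where the majority converges on
    the right answer, but can hurt on hard ones, where wrong answers dominate
    the vote -- the effect the post examines.
    """
    answers = [call_llm(prompt) for _ in range(n_calls)]
    winner, _ = Counter(answers).most_common(1)[0]
    return winner
```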

      Exploring the Purpose, Power & Potential of Small Language Models (SLMs)

      A number of very capable Small Language Models (SLMs) have been open-sourced recently. In this article you will find...

      Self-Reflective Retrieval-Augmented Generation (SELF-RAG)

      The SELF-RAG framework trains a single arbitrary language model to adaptively retrieve passages on-demand. To generate...
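      At a high level, the on-demand retrieval step can be pictured as the loop below. This is a simplified, hypothetical sketch rather than the SELF-RAG implementation: in the actual framework the retrieve/no-retrieve decision comes from reflection tokens emitted by the trained model, and candidate generations are scored with critique tokens; model_wants_retrieval, retrieve_passages, and generate here are placeholder helpers.

```python
def model_wants_retrieval(query: str) -> bool:
    """Hypothetical: in SELF-RAG this decision is a reflection token emitted
    by the trained model, not a separate classifier."""
    raise NotImplementedError

def retrieve_passages(query: str, k: int = 5) -> list[str]:
    """Hypothetical retriever returning the top-k passages for the query."""
    raise NotImplementedError

def generate(query: str, passages: list[str] | None = None) -> str:
    """Hypothetical language-model call, optionally grounded on passages."""
    raise NotImplementedError

def answer_on_demand(query: str) -> str:
    """Simplified on-demand retrieval: retrieve only when the model asks for it."""
    if model_wants_retrieval(query):
        return generate(query, retrieve_passages(query))
    return generate(query)
```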

      Prompt-RAG

      Vector Embedding Free Retrieval-Augmented Generation

      While Prompt-RAG does come with its limitations, it can...

      Please Stop Saying Long Context Windows Will Replace RAG

      And I’m curious whether anyone has innovative approaches to using long context windows efficiently.

      Long Context...

      LLMs Training SLMs

      Microsoft Research shows how LLMs can be used to train SLMs without any RLHF…

      The primary focus in training...
