
      Kore.ai Technical Blog

      One-stop for everything related to AI-first
      experience automation

      LLMs Training SLMs

      Microsoft Research shows how they used LLMs to train SLMs without any RLHF…


      Large Impact: The Rise of Small Language Models

      The growth of products and services related to Large Language Models (LLMs) has been remarkable. And it’s evident that...


      RAG, Data Privacy, Attack Methods & Safe-Prompts

A study explored RAG security vulnerabilities and ways in which private data can be accessed via retrieval datasets...


      MultiHop-RAG

This direction in RAG architecture establishes a wider context via a process of orchestration and chains over...


      Time-Aware Adaptive RAG (TA-ARE)


      With the advent of Large Language Models (LLMs)...


      Proxy Fine-Tuning LLMs

Proxy fine-tuning achieves the results of directly tuning an LLM, but by accessing only its prediction…


      Demonstrate, Search, Predict (DSP) for LLMs

This study from Stanford, which is just over a year old, makes for interesting reading and illustrates how far we have...


      T-RAG = RAG + Fine-Tuning + Entity Detection

      The T-RAG approach is premised on combining RAG architecture with an open-source fine-tuned LLM and an entities tree...


      Beyond Chain-of-Thought LLM Reasoning

      This approach can be implemented on a prompt level and does not require any dedicated frameworks or pre-processing.


      Comparing Human, LLM & LLM-RAG Responses

A fairly recent study, focused on healthcare & preoperative medicine, compared expert human feedback with LLM...

