RAG Chat

Get AI-generated responses grounded in your knowledge base content. The chat endpoint retrieves relevant context and generates a response using an LLM.

Quick Method

```javascript
const result = await jabrod.rag.chat({
  kbId: 'kb_xxx',
  message: 'Summarize the key points',
  model: 'gpt-4o-mini'
});

console.log(result.message);
console.log(result.sources);
```

Builder Method

For finer-grained control, use the fluent builder instead:

```javascript
const result = await jabrod.rag
  .chatBuilder()
  .withMessage('Summarize the key points')
  .withKnowledgeBase('kb_xxx')
  .withModel('gpt-4o-mini')
  .withSystemPrompt('You are a helpful assistant.')
  .withTopK(5)
  .execute();
```

Builder Methods

| Method | Description |
| --- | --- |
| `.withMessage(msg)` | Set the user message (required) |
| `.withKnowledgeBase(kbId)` | Set the knowledge base ID (required) |
| `.withModel(model)` | Set the LLM model |
| `.withSystemPrompt(prompt)` | Set a custom system prompt |
| `.withTopK(n)` | Number of context chunks to retrieve (default: 5) |
| `.execute()` | Execute the chat request |
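To make the builder's behavior concrete, here is a minimal sketch of how a fluent builder with these methods could be implemented. This is illustrative only: `ChatBuilder` below is not the SDK's real class, and the actual implementation may differ; it assumes each `with*` method stores an option and returns `this`, with `execute()` validating the two required fields.

```javascript
// Illustrative sketch, NOT the SDK's implementation.
// Each with* method records an option and returns `this` for chaining.
class ChatBuilder {
  constructor(sendFn) {
    this.options = { topK: 5 }; // default topK per the table above
    this._send = sendFn;        // injected function that performs the request
  }
  withMessage(message) { this.options.message = message; return this; }
  withKnowledgeBase(kbId) { this.options.kbId = kbId; return this; }
  withModel(model) { this.options.model = model; return this; }
  withSystemPrompt(prompt) { this.options.systemPrompt = prompt; return this; }
  withTopK(n) { this.options.topK = n; return this; }
  async execute() {
    // message and kbId are required per the table above
    if (!this.options.message || !this.options.kbId) {
      throw new Error('message and kbId are required');
    }
    return this._send(this.options);
  }
}
```

The design choice here is standard for fluent builders: every setter mutates an internal options object and returns the builder, so calls chain in any order and validation happens once, in `execute()`.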

Available Models

| Model | Provider | Speed | Quality |
| --- | --- | --- | --- |
| `gpt-4o-mini` | OpenAI | Fast | Good |
| `gpt-4o` | OpenAI | Medium | Excellent |
| `claude-3-haiku` | Anthropic | Fast | Good |
| `claude-3-sonnet` | Anthropic | Medium | Excellent |
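If you want to pick a model programmatically from the trade-offs in the table above, a small hypothetical helper (not part of the SDK) could look like this; the `pickModel` function and its options are assumptions for illustration:

```javascript
// Hypothetical helper, not part of the SDK: encodes the table above.
const MODELS = [
  { name: 'gpt-4o-mini',     provider: 'OpenAI',    speed: 'Fast',   quality: 'Good' },
  { name: 'gpt-4o',          provider: 'OpenAI',    speed: 'Medium', quality: 'Excellent' },
  { name: 'claude-3-haiku',  provider: 'Anthropic', speed: 'Fast',   quality: 'Good' },
  { name: 'claude-3-sonnet', provider: 'Anthropic', speed: 'Medium', quality: 'Excellent' },
];

// Pick a model name, optionally pinned to a provider, preferring
// either 'speed' (default) or 'quality'.
function pickModel({ provider, prefer = 'speed' } = {}) {
  const pool = provider ? MODELS.filter(m => m.provider === provider) : MODELS;
  const best = prefer === 'quality'
    ? pool.find(m => m.quality === 'Excellent')
    : pool.find(m => m.speed === 'Fast');
  return (best ?? pool[0]).name;
}
```

You could then pass the result straight to the chat call, e.g. `model: pickModel({ prefer: 'quality' })`.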