More on Knowledgebot Settings & Management

Knowledge Bot Settings

In the upper left corner of the Knowledge Bot, you'll find the gear icon, which opens the bot settings. Here, you can customize and enable various features to tailor the bot to your needs.

  1. Capabilities & Skills: Adjust search methods and activate advanced features

  2. LLM Models: Choose the language model that powers your bot's responses, balancing performance, accuracy, and cost

  3. Database: Select the default database for all data rooms

  4. Agents: Set up or activate customizable prompts for more efficient use

  5. Automations: Set up workflows for more efficient prompting

  6. Action Settings: Add quick-access buttons to the Knowledge Bot navigation and enable advanced features like Intent Agent and System Agent for enhanced search and automation

Note: If you don't see the bot settings, you may not have access to them. Contact the bot's admin to request access.


Large Language Models (LLMs)

Blockbrain allows users to select from a variety of Large Language Models (LLMs) based on their specific needs. Each model has unique strengths in terms of response quality, speed, cost efficiency, and specialized capabilities. Other factors to consider are context window size, performance, and hosting location.

Below is a comprehensive guide to help you choose the best LLM for your workflows:

| Model | Best For | Unique Selling Point (USP) |
| --- | --- | --- |
| Claude 3.5 Sonnet (Smartest, US) | Complex problem-solving, advanced AI agents, software engineering | Best reasoning & problem-solving Claude |
| Claude 3.5 Sonnet (Recommended, EU) | Same as Smartest, but EU-hosted for compliance-focused users | Best EU-hosted Claude model for enterprise |
| Claude 3.5 Sonnet (Creative, EU) | Creative writing, brainstorming, content generation | Optimized for storytelling & marketing |
| Claude 3.5 Haiku (Efficient, US) | Fast responses, chatbots, lightweight AI tasks | Fastest and cheapest Claude model |
| GPT-4 Omni (Logical, EU) | Text structuring, logic-heavy tasks, multimodal AI | Best for structured, multimodal tasks |
| Mistral Large (Coding, EU) | Advanced AI-assisted coding, software development | State-of-the-art coding capabilities |
| Mistral Nemo (Coding, EU) | Developers, AI-assisted programming, automation | Lightweight but powerful coding model |
| Gemini 1.5 Pro (Large-context, EU) | Large-scale AI processing, multimodal analysis, research | 1M context window for deep analysis |
| Gemini 1.5 Flash (Fast, EU) | High-volume AI tasks, fast processing at scale | Optimized for speed & cost-efficiency |
| Llama 3.2 90B (US) | General-purpose AI, knowledge-based reasoning | Latest Llama model with improved performance |
| Llama 3.1 405B (US) | Enterprise-scale AI, knowledge retrieval | Flagship Meta LLM with broad capabilities |
| Llama 3.1 70B (US) | Standard AI workloads, NLP & automation | Balanced performance vs. compute power |
| Llama 3.1 8B (US) | Lightweight AI tasks, cost-effective inference | Best Llama model for smaller AI projects |
| GPT-4 Omni (Structured, US) | Multimodal input, knowledge organization, AI structuring | Best GPT model for complex logic tasks |
| GPT-4 Turbo (Legacy, US) | General AI tasks, balanced intelligence & speed | More affordable GPT-4 variant |
| GPT-4 Vision (US) | AI-assisted image understanding, multimodal tasks | Best for visual processing in GPT models |
| GPT-4o Mini (Efficient, US) | Affordable AI workloads, cost-effective intelligence | Optimized for budget-conscious users |
| GPT-3.5 Turbo (Legacy, EU) | Simple text tasks, lightweight AI applications | Most affordable GPT model |
| GPT-4 Turbo (Legacy, EU) | Advanced reasoning, enterprise-grade AI | Reliable GPT-4 model for structured tasks |
| Claude 3 Opus (Creative, US) | Creative content, high-level AI assistance | Most powerful Claude for creative work |
| Claude 3 Haiku (Efficient, EU) | Fast AI interactions, real-time responsiveness | Speed-optimized Claude model |
| Claude 3 Sonnet (Balanced, US) | Business applications, enterprise AI use | Great balance of speed & intelligence |
| Gemini 1.0 Pro (Legacy, EU) | General AI use cases, multimodal processing | Balanced Gemini model for various tasks |
| Mistral Codestral (Coding, EU) | Code completion, AI-powered development tools | Cutting-edge model for coding workflows |
| Gemma 2 (Google, US) | Small AI tasks, lightweight AI workloads | Google's compact AI model for efficiency |

Tips on How to Pick an LLM

  1. Take the context window into account: the amount of text (in tokens or characters) that an LLM can process at once while maintaining coherence and context

    • Short context window (e.g., 16K tokens): the model can only consider a limited portion of text at a time, making it best suited for shorter prompts and direct answers.

    • Large context window (e.g., 1M tokens): the model can process longer documents, complex discussions, and in-depth analysis without losing previous context.

  2. Choose based on your region: for better accuracy and compliance, it's recommended to select an LLM hosted in your region

  3. Choose the default: if you're unsure which model to pick, use the default LLMs, which offer a balanced approach, ensuring high-quality responses while maintaining cost efficiency

    1. Claude 3.5 Sonnet v2: Well-rounded for reasoning, accuracy, and general tasks

    2. Azure GPT-4 Omni: Ideal for structured responses, logical reasoning, and complex queries
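The context-window guidance above can be sketched in code. This is a minimal illustration, not Blockbrain's internals: the window sizes are the example figures from this page, and token counts use the rough "about 4 characters per token" heuristic rather than a real tokenizer.

```python
# Illustrative sketch: check whether a prompt fits a model's context window.
# Window sizes and the chars-per-token heuristic are assumptions for the example.

CONTEXT_WINDOWS = {                 # sizes in tokens, from the examples above
    "short-context-model": 16_000,  # e.g., a 16K-token window
    "large-context-model": 1_000_000,  # e.g., a 1M-token window
}

def estimate_tokens(text: str) -> int:
    """Very rough estimate: ~4 characters per token for English text."""
    return max(1, len(text) // 4)

def fits_context(model: str, prompt: str, reserved_for_answer: int = 1_000) -> bool:
    """True if the prompt plus a reserved answer budget fits the model's window."""
    window = CONTEXT_WINDOWS[model]
    return estimate_tokens(prompt) + reserved_for_answer <= window

long_doc = "word " * 20_000          # ~100k characters, roughly 25k tokens
print(fits_context("short-context-model", long_doc))  # a 16K window is too small
print(fits_context("large-context-model", long_doc))  # a 1M window fits easily
```

In practice, a long document that overflows a short window must be split or summarized, while a large-context model can take it whole.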

FAQs

  1. How does the LLM affect the accuracy of AI-generated responses?

  • A more advanced LLM generally provides better accuracy and reasoning, but performance also depends on:

    • The size of the context window (larger windows retain more information).

    • The quality of the input prompt (clearer prompts lead to better responses).

    • The selected embedding model (affects how well data is retrieved).


Model Modifiers

Model modifiers allow you to fine-tune your Knowledge Bot’s behavior by adjusting key parameters that influence how it processes and generates responses. These settings help balance creativity, precision, and relevance based on the nature of your task.

Why Use Model Modifiers?

By adjusting model modifiers, you can:

  • Ensure responses align with specific business objectives

  • Optimize results for creative, technical, or research-based tasks

  • Improve efficiency and consistency in AI-generated outputs

Tips on Model Modifier Settings

It's best to stay close to the default settings when adjusting Model Modifiers to maintain balanced AI performance. For example, Creative Freedom should not be set too high, as excessive creativity may lead to unpredictable or overly abstract responses. Similarly, Search Range should remain within 5 to 8 to ensure the AI retrieves relevant information without unnecessary noise. Adjust settings gradually to fine-tune the AI's behavior while maintaining accuracy and consistency.
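To make these knobs concrete, here is a hypothetical sketch of how modifier names like these often map onto common LLM sampling and retrieval parameters. The mappings, defaults, and the `ModelModifiers` class are illustrative assumptions, not Blockbrain's actual implementation; only the "keep Search Range within 5 to 8" rule comes from this page.

```python
from dataclasses import dataclass

@dataclass
class ModelModifiers:
    # Hypothetical mapping of modifier names to typical LLM parameters:
    creative_freedom: float = 0.3   # roughly analogous to sampling temperature
    vocabulary_range: float = 0.9   # roughly analogous to nucleus sampling (top_p)
    topic_variety: float = 0.0      # roughly analogous to a presence penalty
    word_variety: float = 0.0       # roughly analogous to a frequency penalty
    search_range: int = 6           # number of retrieved chunks; keep within 5-8

    def validate(self) -> None:
        # Enforce the guidance above: extreme values give unpredictable output.
        if not 5 <= self.search_range <= 8:
            raise ValueError("Search Range should stay within 5 to 8")

# A technical-report profile: low creativity, tight vocabulary, default retrieval.
technical_report = ModelModifiers(creative_freedom=0.0, vocabulary_range=0.5)
technical_report.validate()  # passes: settings stay within the suggested range
```

The point of the sketch is the shape of the trade-off: lower values push toward deterministic, repetitive output, higher values toward varied but less predictable output.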

Use Case Templates for Model Modifiers

To get the best results from AI-generated responses, it's important to fine-tune model modifiers based on your specific use case. These settings act as a starting point, allowing you to tailor the AI's behavior to meet your needs.

General Use: Balanced settings for everyday tasks.

  • Creative Freedom: Low → Allows for engaging but still logical responses.

  • Vocabulary Range: Medium → Ensures diverse yet relevant wording.

  • Topic Variety: Medium → Encourages AI to introduce new ideas while maintaining coherence.

  • Word Variety: Medium → Keeps wording fresh without sacrificing clarity.

  • Search Range: Medium → Provides a balance between precision and breadth of information.

Sales & Company Analysis: Slight creativity with a strong focus on structured insights.

  • Creative Freedom: Medium → Keeps responses logical while allowing for slight adaptability.

  • Vocabulary Range: Medium → Uses varied vocabulary for engaging business communication.

  • Topic Variety: Medium → Ensures coverage of related business topics without excessive divergence.

  • Word Variety: Medium → Encourages compelling, clear business writing.

  • Search Range: High → Retrieves a broad set of insights to support decision-making.

Technical Analysis & Reports: Prioritizes accuracy and consistency over creativity.

  • Creative Freedom: Lowest → Ensures predictable, fact-based responses.

  • Vocabulary Range: Low → Uses precise technical language with minimal variation.

  • Topic Variety: Low → Keeps the discussion on a single, focused subject.

  • Word Variety: Low → Ensures terminological consistency across technical documentation.

  • Search Range: Medium → Pulls reliable data while minimizing irrelevant information.

Data-Driven Insights: Uses structured retrieval to extract key information.

  • Creative Freedom: Lowest → Keeps AI responses structured and factual.

  • Vocabulary Range: Medium → Uses varied language to articulate different insights clearly.

  • Topic Variety: Medium → Covers related concepts while staying focused.

  • Word Variety: Medium → Balances consistency with fresh phrasing.

  • Search Range: High → Ensures that AI scans a broader dataset for useful insights.

Creative Writing: Maximizes AI's creativity for expressive, imaginative output.

  • Creative Freedom: High → Encourages original, engaging, and sometimes unexpected responses.

  • Vocabulary Range: High → Expands word choice for a more colorful and engaging tone.

  • Topic Variety: High → Allows AI to introduce fresh concepts and ideas.

  • Word Variety: High → Enhances writing flow and prevents repetition.

  • Search Range: Low → Prioritizes relevance over broad, factual accuracy.

Note: While these recommendations provide a strong foundation, it's best to test different settings based on your specific workflow. Avoid maxing out values; going too high or too low can lead to unexpected or ineffective outputs. Use the tooltip descriptions in the settings panel to understand the limits of each modifier and experiment within the suggested range for balanced, high-quality results.

FAQs

  1. How do I know if I have optimized my Model Modifiers correctly?

    • Test your settings by running AI queries and checking if responses meet your expectations. If responses are too rigid or repetitive, increase Creative Freedom and Word Variety. If they're too broad or inconsistent, lower Topic Variety and Vocabulary Range.

  2. Should I max out any of the Model Modifiers?

    • No, maxing out values (e.g., setting Creative Freedom to 10) can lead to unpredictable or inaccurate outputs. It's best to stay within the recommended range provided in the tooltips inside the settings panel.


Embedding Models

When creating a database in Blockbrain, you are prompted to choose an embedding model. These models convert text (such as documents, files, or data) into numerical representations called embeddings. This allows the system to search, compare, and retrieve relevant content based on meaning, not just keywords.
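Conceptually, what an embedding model enables can be shown in a few lines. This is a toy sketch: the 3-dimensional vectors below are made-up values (real embedding models output hundreds or thousands of dimensions), but the comparison step, cosine similarity, is the standard way meaning-based retrieval works.

```python
import math

def cosine_similarity(a: list[float], b: list[float]) -> float:
    """Cosine similarity: 1.0 for identical direction, near 0 for unrelated."""
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / norm

# Toy embeddings: in reality these come from the chosen embedding model.
toy_embeddings = {
    "How do I reset my password?":     [0.9, 0.1, 0.0],
    "Steps to recover account access": [0.8, 0.2, 0.1],
    "Quarterly sales figures":         [0.0, 0.1, 0.9],
}

query = toy_embeddings["How do I reset my password?"]
for text, vec in toy_embeddings.items():
    print(f"{cosine_similarity(query, vec):.2f}  {text}")
# Texts with similar meaning score close to 1.0 even when the wording differs.
```

This is why the password question retrieves the account-recovery document but not the sales report, even though the two matching texts share almost no keywords.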

| Model | Best For | Unique Selling Point (USP) |
| --- | --- | --- |
| Text Embedding 3 Large (EU) | Complex, large-scale text embedding tasks | Best for high-performance, deep text understanding |
| Text Embedding 3 Large (US) | Same as EU-hosted version, but US-hosted | Newest OpenAI embedding model for large-scale tasks |
| Text Embedding Ada 002 (US) | Basic text embedding with high efficiency | Most optimized for low-cost embeddings |
| Text Embedding Ada 002 (EU) | General text embeddings, suitable for diverse applications | EU-hosted version for compliance-sensitive tasks |
| Text Embedding 3 Small (US) | Resource-sensitive tasks with complex embeddings | Best balance between efficiency and performance |
| BGE Embedding (EU) | Flexible embedding for various ML applications | Ideal for self-hosted, customizable deployments |
| English Embedding 4 (EU) | Embedding tasks for English-language content | Best embedding model for pure English text |
| Multilingual Embedding 2 (EU) | Embedding for multilingual content processing | Best for handling mixed-language embeddings |

Why Embedding Models Matter

Embedding models power semantic search, which helps your knowledge bot:

  • Understand context and meaning across large sets of documents

  • Retrieve more accurate and relevant answers

  • Match user queries with similar content, even if phrased differently

Choosing the Right Model

When picking a model, consider:

  • Scale of your data: Larger models handle complex text better, but cost more

  • Hosting requirements: Choose EU- or US-hosted depending on compliance needs

  • Languages: Use multilingual models if your data spans multiple languages

Embedding Model Recommendations

  • Complexity & Scale:

    • For large-scale or complex tasks → Text Embedding 3 Large

  • Efficiency & Cost:

    • For general-purpose tasks with low latency and cost → Text Embedding Ada 002

  • Language Support:

    • English-only → English Embedding 4

    • Multilingual → Multilingual Embedding 2

  • Hosting Requirements:

    • EU-hosted models for GDPR-sensitive data

    • US-hosted models for US-based infrastructure
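The recommendation bullets above can be folded into a simple decision function. This is a hypothetical helper for illustration: the model names match this page, but the priority order (language support first, then scale, then cost) is an assumption about how to combine the bullets, not an official rule.

```python
def recommend_embedding_model(large_scale: bool = False,
                              multilingual: bool = False,
                              english_only: bool = False,
                              eu_hosting: bool = True) -> str:
    """Illustrative mapping of the recommendation bullets to a model name."""
    if multilingual:
        return "Multilingual Embedding 2 (EU)"
    if english_only:
        return "English Embedding 4 (EU)"
    if large_scale:
        # Pick the hosting region that matches your compliance needs.
        return ("Text Embedding 3 Large (EU)" if eu_hosting
                else "Text Embedding 3 Large (US)")
    # Default: general-purpose tasks with low latency and cost.
    return ("Text Embedding Ada 002 (EU)" if eu_hosting
            else "Text Embedding Ada 002 (US)")

print(recommend_embedding_model(large_scale=True, eu_hosting=True))
# → Text Embedding 3 Large (EU)
```

For GDPR-sensitive data, keep `eu_hosting=True`; the multilingual and English-only models listed on this page are EU-hosted only.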

FAQs

  1. What is the difference between small and large embedding models?

    • Larger models (e.g., Text Embedding 3 Large) capture more context and nuances, making them better for complex queries.

    • Smaller models (e.g., Text Embedding 3 Small) prioritize efficiency and are better for lightweight applications.
