Integrating Ollama with LangChain for Django AI Analyst
This guide provides step-by-step instructions for integrating Ollama with LangChain in your Django AI Analyst application, with specific focus on Arabic language support.
Prerequisites
- Ollama installed on your system
- An Ollama model with Arabic support (Jais-13B is recommended)
- Django project with the AI Analyst application
Installation Steps
1. Install Required Python Packages
pip install langchain langchain-community
2. Configure Django Settings
Add the following to your Django settings.py file:
# Ollama and LangChain settings
OLLAMA_BASE_URL = "http://localhost:11434" # Default Ollama API URL
OLLAMA_MODEL = "jais:13b" # Or your preferred model
OLLAMA_TIMEOUT = 120 # Seconds
3. Create a LangChain Utility Module
Create a new file ai_analyst/langchain_utils.py:
from langchain_community.llms import Ollama
from langchain.chains import LLMChain
from langchain.prompts import PromptTemplate
from django.conf import settings
import logging

logger = logging.getLogger(__name__)


def get_ollama_llm():
    """
    Initialize and return an Ollama LLM instance configured for Arabic support.
    """
    try:
        # Get settings from Django settings or use defaults
        base_url = getattr(settings, 'OLLAMA_BASE_URL', 'http://localhost:11434')
        model = getattr(settings, 'OLLAMA_MODEL', 'jais:13b')
        timeout = getattr(settings, 'OLLAMA_TIMEOUT', 120)

        # Ollama accepts sampling options as direct keyword arguments;
        # these values are a reasonable starting point for Arabic generation.
        return Ollama(
            base_url=base_url,
            model=model,
            timeout=timeout,
            temperature=0.7,
            top_p=0.9,
            top_k=40,
            num_ctx=2048,  # Context window size
        )
    except Exception as e:
        logger.error(f"Error initializing Ollama LLM: {str(e)}")
        return None

def create_prompt_analyzer_chain(language='ar'):
    """
    Create an LLM chain for analyzing prompts in Arabic or English.
    """
    llm = get_ollama_llm()
    if not llm:
        return None

    # Define the prompt template based on language. Literal braces in the
    # JSON examples are escaped as {{ }} so that PromptTemplate does not
    # treat them as input variables.
    if language == 'ar':
        template = """
قم بتحليل الاستعلام التالي وتحديد نوع التحليل المطلوب ونماذج البيانات المستهدفة وأي معلمات استعلام.
الاستعلام: {prompt}
قم بتقديم إجابتك بتنسيق JSON كما يلي:
{{
    "analysis_type": "count" أو "relationship" أو "performance" أو "statistics" أو "general",
    "target_models": ["ModelName1", "ModelName2"],
    "query_params": {{"field1": "value1", "field2": "value2"}}
}}
"""
    else:
        template = """
Analyze the following prompt and determine the type of analysis required, target data models, and any query parameters.
Prompt: {prompt}
Provide your answer in JSON format as follows:
{{
    "analysis_type": "count" or "relationship" or "performance" or "statistics" or "general",
    "target_models": ["ModelName1", "ModelName2"],
    "query_params": {{"field1": "value1", "field2": "value2"}}
}}
"""

    # Create the prompt template
    prompt_template = PromptTemplate(
        input_variables=["prompt"],
        template=template
    )

    # Create and return the LLM chain
    return LLMChain(llm=llm, prompt=prompt_template)
4. Update Your View to Use LangChain
Modify your ModelAnalystView class to use the LangChain utilities:
from .langchain_utils import create_prompt_analyzer_chain
import json
import logging
import re

logger = logging.getLogger(__name__)

class ModelAnalystView(View):
    # ... existing code ...

    def __init__(self, *args, **kwargs):
        super().__init__(*args, **kwargs)
        # Initialize chains on demand to avoid startup issues
        self.prompt_analyzer_chains = {}

    def _get_prompt_analyzer_chain(self, language='ar'):
        """
        Get or create a prompt analyzer chain for the specified language.
        """
        if language not in self.prompt_analyzer_chains:
            self.prompt_analyzer_chains[language] = create_prompt_analyzer_chain(language)
        return self.prompt_analyzer_chains[language]

    def _analyze_prompt_with_llm(self, prompt, language='ar'):
        """
        Use LangChain and Ollama to analyze the prompt.
        """
        try:
            # Get the appropriate chain for the language
            chain = self._get_prompt_analyzer_chain(language)
            if not chain:
                # Fall back to rule-based analysis if chain creation failed
                return self._analyze_prompt_rule_based(prompt, language)

            # Run the chain
            result = chain.run(prompt=prompt)

            # Find JSON content within the response, in case the LLM adds extra text
            json_match = re.search(r'({.*})', result, re.DOTALL)
            if json_match:
                return json.loads(json_match.group(1))

            # Fall back to rule-based analysis
            return self._analyze_prompt_rule_based(prompt, language)
        except Exception as e:
            logger.error(f"Error in LLM prompt analysis: {str(e)}")
            # Fall back to rule-based analysis
            return self._analyze_prompt_rule_based(prompt, language)

    def _analyze_prompt_rule_based(self, prompt, language='ar'):
        """
        Rule-based fallback for prompt analysis.
        """
        analysis_type, target_models, query_params = self._analyze_prompt(prompt, language)
        return {
            "analysis_type": analysis_type,
            "target_models": target_models,
            "query_params": query_params,
        }

    def _process_prompt(self, prompt, user, dealer_id, language='ar'):
        """
        Process the natural language prompt and generate insights.
        """
        # ... existing code ...

        # Use the LLM for prompt analysis
        analysis_result = self._analyze_prompt_with_llm(prompt, language)
        analysis_type = analysis_result.get('analysis_type', 'general')
        target_models = analysis_result.get('target_models', [])
        query_params = analysis_result.get('query_params', {})

        # ... rest of the method ...
Testing the Integration
Create a test script to verify the Ollama and LangChain integration:
# test_ollama.py
import os
import django

# Set up the Django environment
os.environ.setdefault('DJANGO_SETTINGS_MODULE', 'your_project.settings')
django.setup()

from ai_analyst.langchain_utils import get_ollama_llm, create_prompt_analyzer_chain


def test_ollama_connection():
    """Test basic Ollama connection and response."""
    llm = get_ollama_llm()
    if not llm:
        print("Failed to initialize Ollama LLM")
        return

    # Test with an Arabic prompt
    arabic_prompt = "مرحبا، كيف حالك؟"
    print(f"Testing Arabic prompt: {arabic_prompt}")
    try:
        response = llm.invoke(arabic_prompt)
        print(f"Response: {response}")
        print("Ollama connection successful!")
    except Exception as e:
        print(f"Error: {str(e)}")


def test_prompt_analysis():
    """Test the prompt analyzer chain."""
    chain = create_prompt_analyzer_chain('ar')
    if not chain:
        print("Failed to create prompt analyzer chain")
        return

    # Test with an Arabic analysis prompt
    analysis_prompt = "كم عدد السيارات التي لدينا؟"
    print(f"Testing analysis prompt: {analysis_prompt}")
    try:
        result = chain.run(prompt=analysis_prompt)
        print(f"Analysis result: {result}")
    except Exception as e:
        print(f"Error: {str(e)}")


if __name__ == "__main__":
    print("Testing Ollama and LangChain integration...")
    test_ollama_connection()
    print("\n---\n")
    test_prompt_analysis()
Run the test script:
python test_ollama.py
Troubleshooting
Common Issues and Solutions
- Ollama Connection Error
  - Ensure Ollama is running: ollama serve
  - Check that the model is downloaded: ollama list
  - Verify the base URL in settings
- Model Not Found
  - Download the model: ollama pull jais:13b
  - Check the model name spelling in settings
- Timeout Errors
  - Increase the timeout setting for complex queries
  - Consider using a smaller model if your hardware is limited
- Poor Arabic Analysis
  - Ensure you're using an Arabic-capable model such as Jais-13B
  - Check that your prompts are properly formatted in Arabic
  - Adjust temperature and other parameters for better results
- JSON Parsing Errors
  - Improve the prompt template to emphasize strict JSON formatting
  - Implement more robust JSON extraction from LLM responses (see the sketch below)
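As a starting point for the last item, here is a minimal sketch of a more defensive extractor (the extract_json helper is illustrative, not part of LangChain): it tries the whole response first, then trims the outermost {...} span from the right until something parses.

    import json

    def extract_json(text):
        """Return the first parseable JSON object found in an LLM response, or None."""
        # Try the whole response first; models often return clean JSON.
        try:
            return json.loads(text)
        except ValueError:
            pass
        # Otherwise take the widest {...} span and trim trailing junk until it parses.
        start = text.find('{')
        end = text.rfind('}')
        while start != -1 and end > start:
            try:
                return json.loads(text[start:end + 1])
            except ValueError:
                end = text.rfind('}', start, end)  # drop trailing text and retry
        return None  # caller can fall back to rule-based analysis

_analyze_prompt_with_llm could call this instead of the single regex and fall back to rule-based analysis whenever it returns None.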
Performance Optimization
For production use, consider these optimizations:
- Caching LLM Responses
  - Implement Redis or another caching system for LLM responses
  - Cache common analysis patterns to reduce repeated LLM calls (see the sketch below)
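One approach is to put Django's built-in cache framework in front of the chain (point CACHES at Redis, for example via django-redis, in production). A minimal sketch; the wrapper name and TTL are assumptions:

    import hashlib
    from django.core.cache import cache

    def run_chain_cached(chain, prompt, language='ar', ttl=3600):
        """Run the analyzer chain with a cache in front of it."""
        # Hash the prompt so the key stays short and safe for any cache backend.
        digest = hashlib.sha256(prompt.encode('utf-8')).hexdigest()
        key = f"ai_analyst:analysis:{language}:{digest}"
        result = cache.get(key)
        if result is None:
            result = chain.run(prompt=prompt)
            cache.set(key, result, ttl)
        return result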
- Batch Processing
  - For bulk analysis, use batch processing to reduce per-request overhead (see the sketch below)
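With the LLMChain used above, LLMChain.apply maps the chain over a list of inputs in one call. A minimal sketch; the helper name is illustrative:

    def analyze_prompts_batch(chain, prompts):
        """Analyze many prompts through one chain call instead of a Python loop."""
        inputs = [{"prompt": p} for p in prompts]
        # apply() returns a list of output dicts, one per input prompt.
        return chain.apply(inputs)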
- Model Quantization
  - If performance is slow, consider using a quantized version of the model
  - Example: ollama pull jais:13b-q4_0 for a 4-bit quantized version
- Asynchronous Processing
  - For long-running analyses, implement asynchronous processing with Celery (see the sketch below)
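A minimal Celery sketch; the task and module names are illustrative, and it assumes Celery is already configured for the project:

    # ai_analyst/tasks.py
    from celery import shared_task
    from .langchain_utils import create_prompt_analyzer_chain

    @shared_task
    def analyze_prompt_async(prompt, language='ar'):
        """Run the LLM analysis outside the request/response cycle."""
        chain = create_prompt_analyzer_chain(language)
        if not chain:
            return None
        return chain.run(prompt=prompt)

The view would then enqueue work with analyze_prompt_async.delay(prompt) and deliver the result later, for example via polling or a websocket.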
Advanced Usage: Fine-tuning for Domain-Specific Analysis
For improved performance on your specific domain:
- Create a dataset of example prompts and expected analyses
- Use Ollama's fine-tuning capabilities to adapt the model
- Update your application to use the fine-tuned model
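Even without training, a Modelfile can specialize the base model with a system prompt and parameters (externally fine-tuned weights or LoRA adapters can also be imported this way). A minimal sketch; the jais-analyst name is illustrative:

    # Modelfile
    FROM jais:13b
    PARAMETER temperature 0.5
    SYSTEM """You are a data analyst for a Django application. Always answer with valid JSON."""

Build it with ollama create jais-analyst -f Modelfile, then set OLLAMA_MODEL = "jais-analyst" in settings.py.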
Conclusion
This integration enables your Django AI Analyst to leverage Ollama's powerful language models through LangChain, with specific optimizations for Arabic language support. The fallback to rule-based analysis ensures robustness, while the LLM-based approach provides more natural language understanding capabilities.