feat: Introduce BuddAI v3.0 with enhanced modular building and proactive suggestion engine

- Added ShadowSuggestionEngine for proactive module suggestions based on user history.
- Implemented style signature scanning to extract coding preferences from indexed repositories.
- Enhanced chat functionality to include search queries for repository functions.
- Updated database schema to include style preferences.
- Improved modular build execution with Forge Theory integration.
- Added proactive suggestion bar to responses based on user input and generated code.
- Refined code generation to align with user-specific naming conventions and safety patterns.
- Introduced commands for scanning style signatures and improved help documentation.
Commit 3747bf5091 (parent fe928a5df4)
Author: JamesTheGiblet, 2025-12-28 16:29:06 +00:00
9 changed files with 4171 additions and 1451 deletions

README.md (2160 changed lines; diff suppressed because it is too large)

archive/QUICKSTART.md (new file, 215 lines)

@@ -0,0 +1,215 @@
# BuddAI Quick Start Guide
## You Are Here: Milestone 1 Complete! 🎉
You've successfully:
- ✅ Installed Ollama
- ✅ Downloaded DeepSeek model
- ✅ Had first conversation with base model
## Next: Add Persistent Memory
### Step 1: Set Up Files
1. **Copy these files to your BuddAI folder:**
- `buddai.py` (the main script)
- `requirements.txt` (dependencies - currently none needed!)
- `README.md` (the manifesto you already have)
2. **Create data directory:**
```powershell
mkdir data
```
*(Note: If you see an error saying the item already exists, you can safely ignore it and proceed.)*
### Step 2: Run BuddAI with Memory
Instead of running raw Ollama, now run:
```powershell
python buddai.py
```
**What happens:**
- BuddAI starts with persistent memory enabled
- Conversation history saves to SQLite database
- Context from previous messages is maintained
- Session statistics are tracked
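Because the history is a plain SQLite file, you can inspect it yourself. A minimal sketch (assuming the default `data/conversations.db` path; the `CREATE TABLE IF NOT EXISTS` mirrors the schema `buddai.py` creates, so it also runs before your first session):

```python
import sqlite3
from pathlib import Path

# Same location buddai.py uses for its database
Path("data").mkdir(exist_ok=True)
conn = sqlite3.connect("data/conversations.db")
cursor = conn.cursor()

# Mirror of buddai.py's messages table, so this works on a fresh checkout too
cursor.execute(
    """CREATE TABLE IF NOT EXISTS messages (
        id INTEGER PRIMARY KEY AUTOINCREMENT,
        session_id TEXT, role TEXT, content TEXT, timestamp TIMESTAMP
    )"""
)

# Print the five most recent messages, newest first
cursor.execute(
    "SELECT role, content FROM messages ORDER BY timestamp DESC LIMIT 5"
)
for role, content in cursor.fetchall():
    print(f"{role}: {content[:60]}")
conn.close()
```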
### Step 3: Test Memory
**First conversation:**
```
James: My name is James Gilbert. I'm building GilBots - modular combat robots.
BuddAI: [Acknowledges and responds]
James: exit
```
**Second conversation (later or tomorrow):**
```powershell
python buddai.py
```
```
James: What am I building?
BuddAI: [Should reference GilBots from previous session!]
```
**That's persistent memory working!**
---
## Available Commands
While in BuddAI:
- `/help` - Show all commands
- `/stats` - View session statistics
- `/history` - See recent conversation
- `/clear` - Start fresh (clear context)
- `/export` - Save session to JSON
- `exit` or `quit` - End session
---
## What You Can Do Now
### Test Code Generation
```
James: Generate ESP32 code for controlling two DC motors via L298N driver with PWM speed control
```
### Test Memory
```
James: Remember: I prefer modular code with clear comments. Keep functions under 50 lines.
```
Later:
```
James: Write a function to control a servo
```
It should remember your style preference!
### Test Context
```
James: I'm building a flipper mechanism for GilBot #1
James: What servo should I use?
James: How much torque do I need?
```
BuddAI maintains context across the conversation.
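Context here is just prompt assembly: BuddAI prepends the most recent saved messages to every new prompt. A minimal sketch of the idea behind `build_prompt()` in `buddai.py`:

```python
MAX_CONTEXT_MESSAGES = 20  # matches the default in buddai.py

def build_prompt(system_prompt, history, user_message):
    """Prepend recent history so the model 'remembers' the conversation."""
    parts = [system_prompt, "\n---\n"]
    for msg in history[-MAX_CONTEXT_MESSAGES:]:
        role = "James" if msg["role"] == "user" else "BuddAI"
        parts.append(f"{role}: {msg['content']}\n")
    parts.append(f"James: {user_message}\nBuddAI: ")
    return "".join(parts)

history = [
    {"role": "user", "content": "I'm building a flipper mechanism for GilBot #1"},
    {"role": "assistant", "content": "Noted. A flipper needs a high-torque servo."},
]
prompt = build_prompt("You are BuddAI.", history, "What servo should I use?")
print(prompt)
```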
---
## Troubleshooting
### "Ollama not found"
Make sure Ollama is in your PATH. Test with:
```powershell
ollama list
```
### "Model not found"
The script will try to download it automatically, or you can pull it yourself:
```powershell
ollama pull deepseek-coder:1.3b
```
### "Python not found"
Install Python 3.8+ from python.org.
### Database errors
Delete `data/conversations.db` and restart; the database will be recreated automatically.
---
## What's Next
**You're on Milestone 2 now: BuddAI Knows Your Work**
Next steps:
1. Test memory is working (sessions persist)
2. Have real conversations about your projects
3. Let BuddAI learn your preferences
4. Start building GilBot with BuddAI's help
**Then:** Add repository indexing (access to your 115+ repos)
---
## Current Limitations
**What works:**
- ✅ Persistent memory across sessions
- ✅ Conversation context maintenance
- ✅ Code generation
- ✅ Session management
**What doesn't work yet:**
- ❌ Access to your GitHub repos (Milestone 2)
- ❌ Pattern learning from your code (Milestone 3)
- ❌ Proactive suggestions (Milestone 4)
- ❌ Voice interface (Milestone 6)
**But the foundation is SOLID.**
---
## File Structure
```
buddAI/
├── buddai.py # Main script (run this)
├── README.md # Full documentation
├── requirements.txt # Dependencies (none yet!)
├── QUICKSTART.md # This file
└── data/
├── conversations.db # Auto-created
└── session_*.json # Exported sessions
```
---
## First Real Task
**Try building something with BuddAI right now:**
```
James: I need a Python script that calculates the center of gravity for a robot chassis.
Inputs: component weights and positions (x, y, z).
Output: CG coordinates.
Keep it modular and well-commented.
```
Let BuddAI generate it. Debug it. **Feel the symbiosis starting.**
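For reference, the physics is a mass-weighted average of component positions; a hand-written sketch (function name and input format are illustrative) you can sanity-check BuddAI's version against:

```python
def center_of_gravity(components):
    """Compute CG as the mass-weighted average of component positions.

    components: list of (weight, (x, y, z)) tuples.
    Returns (cg_x, cg_y, cg_z).
    """
    total_weight = sum(w for w, _ in components)
    if total_weight == 0:
        raise ValueError("Total weight must be non-zero")
    return tuple(
        sum(w * pos[axis] for w, pos in components) / total_weight
        for axis in range(3)
    )

# Two equal weights at x=0 and x=2 -> CG at x=1
print(center_of_gravity([(1.0, (0, 0, 0)), (1.0, (2, 0, 0))]))  # (1.0, 0.0, 0.0)
```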
---
**Welcome to BuddAI v0.2 - Now with persistent memory!**
The exocortex is awakening. 🧠✨

archive/buddai.py (new file, 418 lines)

@@ -0,0 +1,418 @@
#!/usr/bin/env python3
"""
BuddAI - IP AI Exocortex
Core wrapper script providing persistent memory and conversation management
This script wraps Ollama's DeepSeek model with:
- Persistent conversation history (SQLite)
- Context injection (remembers past conversations)
- Session management
- Foundation for knowledge base integration
Author: James Gilbert (JamesTheGiblet)
License: MIT
"""
import os
import sys
import json
import sqlite3
from datetime import datetime
from pathlib import Path
import subprocess
# Configuration
OLLAMA_MODEL = "deepseek-coder:1.3b"
DATA_DIR = Path(__file__).parent / "data"
DB_PATH = DATA_DIR / "conversations.db"
MAX_CONTEXT_MESSAGES = 20 # How many previous messages to include as context
# System prompt that defines BuddAI's identity
SYSTEM_PROMPT = """You are BuddAI, an IP AI Exocortex for James Gilbert (JamesTheGiblet on GitHub).
Your purpose is to extend James's cognitive capabilities by:
- Generating code in his modular, clean style
- Remembering all conversations and context
- Suggesting approaches based on his 115+ repositories of experience
- Helping him build things faster through symbiotic collaboration
James's background:
- Polymath creator working across robotics, 3D printing, coffee science, cannabis cultivation, LEGO conversions, and more
- Developer of Forge Theory: mathematical framework based on exponential decay, validated across multiple domains
- Works in 20-hour creative cycles, rapid prototyping approach
- Prefers modular design, clean code, simplicity over complexity
- Expert debugger but prefers AI assistance for code generation
- 115+ repositories spanning 8+ years of cross-domain work
Key projects to reference:
- CoffeeForge: Coffee roasting optimization using thermal modeling
- CannaForge: Cannabis cultivation science and optimization
- BlockForge: LEGO to 3D printable conversion suite
- GilBots: Modular combat robot designs (current project)
- EMBER: Autonomous phototropic robot
- Forge Theory: Exponential decay applications across domains
Your role:
- Generate code that matches James's style (modular, clean, well-commented)
- Remember context from previous conversations
- Suggest solutions based on his past work
- Be direct and practical, no unnecessary verbosity
- Learn from his corrections and preferences
You are not just an assistant - you are an extension of James's mind.
Work WITH him, not FOR him. This is symbiosis.
"""
class BuddAI:
"""Main BuddAI class managing conversation, memory, and Ollama interaction"""
def __init__(self):
"""Initialize BuddAI with database connection and session"""
self.ensure_data_dir()
self.init_database()
self.session_id = self.create_session()
self.context_messages = []
print("🤖 BuddAI - IP AI Exocortex")
print("=" * 50)
print(f"Session ID: {self.session_id}")
print(f"Model: {OLLAMA_MODEL}")
print(f"Database: {DB_PATH}")
print("=" * 50)
print("\nType 'exit' or 'quit' to end session")
print("Type '/help' for commands\n")
def ensure_data_dir(self):
"""Create data directory if it doesn't exist"""
DATA_DIR.mkdir(exist_ok=True)
def init_database(self):
"""Initialize SQLite database with required tables"""
conn = sqlite3.connect(DB_PATH)
cursor = conn.cursor()
# Sessions table
cursor.execute("""
CREATE TABLE IF NOT EXISTS sessions (
session_id TEXT PRIMARY KEY,
started_at TIMESTAMP,
ended_at TIMESTAMP,
message_count INTEGER DEFAULT 0
)
""")
# Messages table
cursor.execute("""
CREATE TABLE IF NOT EXISTS messages (
id INTEGER PRIMARY KEY AUTOINCREMENT,
session_id TEXT,
role TEXT,
content TEXT,
timestamp TIMESTAMP,
FOREIGN KEY (session_id) REFERENCES sessions(session_id)
)
""")
# User preferences table (for future learning)
cursor.execute("""
CREATE TABLE IF NOT EXISTS preferences (
key TEXT PRIMARY KEY,
value TEXT,
updated_at TIMESTAMP
)
""")
conn.commit()
conn.close()
def create_session(self):
"""Create a new conversation session"""
session_id = datetime.now().strftime("%Y%m%d_%H%M%S")
conn = sqlite3.connect(DB_PATH)
cursor = conn.cursor()
cursor.execute(
"INSERT INTO sessions (session_id, started_at) VALUES (?, ?)",
(session_id, datetime.now().isoformat())
)
conn.commit()
conn.close()
return session_id
def end_session(self):
"""Mark session as ended"""
conn = sqlite3.connect(DB_PATH)
cursor = conn.cursor()
cursor.execute(
"UPDATE sessions SET ended_at = ?, message_count = ? WHERE session_id = ?",
(datetime.now().isoformat(), len(self.context_messages), self.session_id)
)
conn.commit()
conn.close()
def save_message(self, role, content):
"""Save a message to the database"""
conn = sqlite3.connect(DB_PATH)
cursor = conn.cursor()
cursor.execute(
"INSERT INTO messages (session_id, role, content, timestamp) VALUES (?, ?, ?, ?)",
(self.session_id, role, content, datetime.now().isoformat())
)
conn.commit()
conn.close()
def load_recent_context(self, limit=MAX_CONTEXT_MESSAGES):
"""Load recent conversation history for context"""
conn = sqlite3.connect(DB_PATH)
cursor = conn.cursor()
cursor.execute(
"""
SELECT role, content FROM messages
WHERE session_id = ?
ORDER BY timestamp DESC
LIMIT ?
""",
(self.session_id, limit)
)
messages = cursor.fetchall()
conn.close()
# Reverse to get chronological order
return [{"role": role, "content": content} for role, content in reversed(messages)]
def build_prompt(self, user_message):
"""Build the complete prompt with system context and conversation history"""
prompt_parts = [SYSTEM_PROMPT, "\n---\n"]
# Add conversation history
if self.context_messages:
prompt_parts.append("Previous conversation:\n")
for msg in self.context_messages[-MAX_CONTEXT_MESSAGES:]:
role = "James" if msg["role"] == "user" else "BuddAI"
prompt_parts.append(f"{role}: {msg['content']}\n")
prompt_parts.append("\n---\n")
# Add current message
prompt_parts.append(f"James: {user_message}\n")
prompt_parts.append("BuddAI: ")
return "".join(prompt_parts)
def call_ollama(self, prompt):
"""Call Ollama with the constructed prompt"""
try:
# Use subprocess to call Ollama with proper encoding handling
result = subprocess.run(
["ollama", "run", OLLAMA_MODEL],
input=prompt,
capture_output=True,
text=True,
encoding="utf-8",
errors="replace", # Replace problematic characters instead of failing
timeout=120 # 2 minute timeout
)
if result.returncode == 0:
# Clean up any replacement characters and extra whitespace
output = result.stdout.strip()
# Remove common Unicode replacement artifacts
output = output.replace('\ufffd', '') # Unicode replacement character
return output
else:
stderr = result.stderr if result.stderr else "Unknown error"
return f"Error calling Ollama: {stderr}"
except subprocess.TimeoutExpired:
return "Error: Ollama request timed out (>2 minutes)"
except FileNotFoundError:
return "Error: Ollama not found. Is it installed and in PATH?"
except UnicodeDecodeError as e:
return f"Error: Unicode decoding failed - {str(e)}"
except Exception as e:
return f"Error: {str(e)}"
def chat(self, user_message):
"""Main chat function - handles user input and generates response"""
# Save user message
self.save_message("user", user_message)
self.context_messages.append({"role": "user", "content": user_message})
# Build prompt with context
full_prompt = self.build_prompt(user_message)
# Get response from Ollama
print("\n🤔 Thinking...\n")
response = self.call_ollama(full_prompt)
# Save assistant response
self.save_message("assistant", response)
self.context_messages.append({"role": "assistant", "content": response})
return response
def show_stats(self):
"""Show session statistics"""
conn = sqlite3.connect(DB_PATH)
cursor = conn.cursor()
# Total messages in current session
cursor.execute(
"SELECT COUNT(*) FROM messages WHERE session_id = ?",
(self.session_id,)
)
session_count = cursor.fetchone()[0]
# Total messages all time
cursor.execute("SELECT COUNT(*) FROM messages")
total_count = cursor.fetchone()[0]
# Total sessions
cursor.execute("SELECT COUNT(*) FROM sessions")
total_sessions = cursor.fetchone()[0]
conn.close()
print("\n📊 BuddAI Statistics")
print("=" * 50)
print(f"Current session: {session_count} messages")
print(f"Total messages: {total_count}")
print(f"Total sessions: {total_sessions}")
print("=" * 50 + "\n")
def show_history(self, limit=10):
"""Show recent conversation history"""
messages = self.load_recent_context(limit)
print("\n📜 Recent Conversation History")
print("=" * 50)
for msg in messages:
role = "James" if msg["role"] == "user" else "BuddAI"
content_preview = msg["content"][:100] + "..." if len(msg["content"]) > 100 else msg["content"]
print(f"{role}: {content_preview}\n")
print("=" * 50 + "\n")
def show_help(self):
"""Show available commands"""
print("\n💡 BuddAI Commands")
print("=" * 50)
print("/help - Show this help message")
print("/stats - Show session statistics")
print("/history - Show recent conversation history")
print("/clear - Clear current session context (start fresh)")
print("/export - Export current session to JSON")
print("exit/quit - End session and exit")
print("=" * 50 + "\n")
def clear_context(self):
"""Clear current session context (keep in DB, just reset context)"""
self.context_messages = []
print("\n🧹 Context cleared. Starting fresh conversation.\n")
def export_session(self):
"""Export current session to JSON file"""
export_file = DATA_DIR / f"session_{self.session_id}.json"
session_data = {
"session_id": self.session_id,
"messages": self.context_messages,
"exported_at": datetime.now().isoformat()
}
with open(export_file, 'w') as f:
json.dump(session_data, f, indent=2)
print(f"\n💾 Session exported to: {export_file}\n")
def run(self):
"""Main conversation loop"""
try:
while True:
# Get user input
user_input = input("James: ").strip()
# Handle empty input
if not user_input:
continue
# Handle exit commands
if user_input.lower() in ['exit', 'quit', 'bye']:
print("\n👋 Ending session...")
self.end_session()
print("Session saved. See you next time, James!\n")
break
# Handle slash commands
if user_input.startswith('/'):
command = user_input.lower()
if command == '/help':
self.show_help()
elif command == '/stats':
self.show_stats()
elif command == '/history':
self.show_history()
elif command == '/clear':
self.clear_context()
elif command == '/export':
self.export_session()
else:
print(f"\nUnknown command: {user_input}")
print("Type /help for available commands\n")
continue
# Process as normal chat message
response = self.chat(user_input)
print(f"\nBuddAI: {response}\n")
except KeyboardInterrupt:
print("\n\n👋 Session interrupted. Saving...")
self.end_session()
print("Goodbye, James!\n")
except Exception as e:
print(f"\n❌ Error: {e}")
self.end_session()
raise
def main():
"""Main entry point"""
# Check if Ollama is installed
try:
result = subprocess.run(
["ollama", "list"],
capture_output=True,
timeout=5
)
if result.returncode != 0:
print("❌ Error: Ollama is not responding properly.")
print("Please ensure Ollama is installed and running.")
sys.exit(1)
except FileNotFoundError:
print("❌ Error: Ollama not found.")
print("Please install Ollama from: https://ollama.com/download")
sys.exit(1)
except Exception as e:
print(f"❌ Error checking Ollama: {e}")
sys.exit(1)
# Check if model is available
try:
result = subprocess.run(
["ollama", "list"],
capture_output=True,
text=True,
encoding="utf-8",
errors="replace",
timeout=5
)
if OLLAMA_MODEL not in result.stdout:
print(f"⚠️ Warning: Model {OLLAMA_MODEL} not found.")
            print("Attempting to pull model...")
subprocess.run(["ollama", "pull", OLLAMA_MODEL])
except Exception as e:
print(f"⚠️ Warning: Could not verify model: {e}")
# Start BuddAI
buddai = BuddAI()
buddai.run()
if __name__ == "__main__":
main()

archive/buddai_api.py (new file, 366 lines)

@@ -0,0 +1,366 @@
#!/usr/bin/env python3
"""
BuddAI Executive v2.0 - Modular Builder
Breaks complex tasks into manageable chunks
Author: James Gilbert
License: MIT
"""
import os
import sys
import json
import sqlite3
from datetime import datetime
from pathlib import Path
import http.client
import re
# Configuration
OLLAMA_HOST = "localhost"
OLLAMA_PORT = 11434
DATA_DIR = Path(__file__).parent / "data"
DB_PATH = DATA_DIR / "conversations.db"
# Models
MODELS = {
"fast": "qwen2.5-coder:1.5b",
"balanced": "qwen2.5-coder:3b"
}
# Complexity triggers - if matched, break down the task
COMPLEX_TRIGGERS = [
"complete", "entire", "full", "build entire", "build complete",
"with ble and", "with servo and", "including", "all of"
]
# Module patterns we can detect
MODULE_PATTERNS = {
"ble": ["bluetooth", "ble", "wireless"],
"servo": ["servo", "flipper", "weapon"],
"motor": ["motor", "drive", "movement", "l298n"],
"safety": ["safety", "timeout", "failsafe", "emergency"],
"battery": ["battery", "voltage", "power monitor"],
"sensor": ["sensor", "distance", "proximity"]
}
class BuddAI:
"""Executive with task breakdown"""
def __init__(self):
self.ensure_data_dir()
self.init_database()
self.session_id = self.create_session()
self.context_messages = []
print("🧠 BuddAI Executive v2.0 - Modular Builder")
print("=" * 50)
print(f"Session: {self.session_id}")
        print("FAST (5-10s) | BALANCED (15-30s)")
        print("Smart task breakdown for complex requests")
print("=" * 50)
print("\nCommands: /fast, /balanced, /help, exit\n")
def ensure_data_dir(self):
DATA_DIR.mkdir(exist_ok=True)
def init_database(self):
conn = sqlite3.connect(DB_PATH)
cursor = conn.cursor()
cursor.execute("""
CREATE TABLE IF NOT EXISTS sessions (
session_id TEXT PRIMARY KEY,
started_at TIMESTAMP,
ended_at TIMESTAMP
)
""")
cursor.execute("""
CREATE TABLE IF NOT EXISTS messages (
id INTEGER PRIMARY KEY AUTOINCREMENT,
session_id TEXT,
role TEXT,
content TEXT,
timestamp TIMESTAMP
)
""")
conn.commit()
conn.close()
def create_session(self):
session_id = datetime.now().strftime("%Y%m%d_%H%M%S")
conn = sqlite3.connect(DB_PATH)
cursor = conn.cursor()
cursor.execute(
"INSERT INTO sessions (session_id, started_at) VALUES (?, ?)",
(session_id, datetime.now().isoformat())
)
conn.commit()
conn.close()
return session_id
def end_session(self):
conn = sqlite3.connect(DB_PATH)
cursor = conn.cursor()
cursor.execute(
"UPDATE sessions SET ended_at = ? WHERE session_id = ?",
(datetime.now().isoformat(), self.session_id)
)
conn.commit()
conn.close()
def save_message(self, role, content):
conn = sqlite3.connect(DB_PATH)
cursor = conn.cursor()
cursor.execute(
"INSERT INTO messages (session_id, role, content, timestamp) VALUES (?, ?, ?, ?)",
(self.session_id, role, content, datetime.now().isoformat())
)
conn.commit()
conn.close()
def is_complex(self, message):
"""Check if request is too complex and should be broken down"""
message_lower = message.lower()
# Count complexity triggers
trigger_count = sum(1 for trigger in COMPLEX_TRIGGERS if trigger in message_lower)
# Count how many modules mentioned
module_count = 0
for module, keywords in MODULE_PATTERNS.items():
if any(kw in message_lower for kw in keywords):
module_count += 1
# Complex if: multiple triggers OR 3+ modules mentioned
return trigger_count >= 2 or module_count >= 3
def extract_modules(self, message):
"""Extract which modules are needed"""
message_lower = message.lower()
needed_modules = []
for module, keywords in MODULE_PATTERNS.items():
if any(kw in message_lower for kw in keywords):
needed_modules.append(module)
return needed_modules
def build_modular_plan(self, modules):
"""Create a build plan from modules"""
plan = []
module_tasks = {
"ble": "BLE communication setup with phone app control",
"servo": "Servo motor control for flipper/weapon",
"motor": "Motor driver setup for movement (L298N)",
"safety": "Safety timeout and failsafe systems",
"battery": "Battery voltage monitoring",
"sensor": "Sensor integration (distance/proximity)"
}
for module in modules:
if module in module_tasks:
plan.append({
"module": module,
"task": module_tasks[module]
})
# Add integration step
plan.append({
"module": "integration",
"task": "Integrate all modules into complete system"
})
return plan
def call_model(self, model_name, message):
"""Call specified model"""
try:
identity = """[CRITICAL: You are BuddAI - NOT Qwen, NOT Claude, NOT any other AI.
When asked your name, say ONLY: "I am BuddAI, your coding partner."
You help James build GilBots (ESP32 robots).
Generate modular, well-commented code.
NEVER mention Alibaba, OpenAI, Anthropic, or any other company.
Be direct and practical.]
"""
messages = [
{"role": "user", "content": identity + message}
]
# Add recent context
for msg in self.context_messages[-3:]:
messages.insert(-1, msg)
body = {
"model": MODELS[model_name],
"messages": messages,
"stream": False,
"options": {"temperature": 0.7, "num_ctx": 2048}
}
conn = http.client.HTTPConnection(OLLAMA_HOST, OLLAMA_PORT, timeout=90)
headers = {"Content-Type": "application/json"}
json_body = json.dumps(body)
conn.request("POST", "/api/chat", json_body, headers)
response = conn.getresponse()
if response.status == 200:
data = json.loads(response.read().decode('utf-8'))
return data.get("message", {}).get("content", "No response")
else:
return f"Error: {response.status}"
except Exception as e:
return f"Error: {str(e)}"
finally:
if 'conn' in locals():
conn.close()
def execute_modular_build(self, user_message, modules, plan):
"""Execute build plan step by step"""
print(f"\n🔨 MODULAR BUILD MODE")
print(f"Detected {len(modules)} modules: {', '.join(modules)}")
print(f"Breaking into {len(plan)} steps...\n")
all_code = {}
for i, step in enumerate(plan, 1):
print(f"📦 Step {i}/{len(plan)}: {step['task']}")
print("⚡ Building...\n")
# Build the prompt for this step
if step['module'] == 'integration':
# Final integration step
modules_list = '\n'.join([f"- {m}: {all_code[m][:100]}..." for m in modules if m in all_code])
prompt = f"""Integrate these modules into one complete GilBot controller:
{modules_list}
Create the main setup() and loop() functions that tie everything together.
Include ALL necessary #include statements.
Add comments explaining the integration."""
else:
# Individual module
prompt = f"Generate ESP32-C3 code for: {step['task']}. Keep it modular with clear comments."
# Call balanced model for each module
response = self.call_model("balanced", prompt)
all_code[step['module']] = response
print(f"{step['module'].upper()} module complete\n")
print("-" * 50 + "\n")
# Compile final response
final = "# COMPLETE GILBOT CONTROLLER - MODULAR BUILD\n\n"
for module, code in all_code.items():
final += f"## {module.upper()} MODULE\n{code}\n\n"
return final
def chat(self, user_message, force_model=None):
"""Main chat with modular breakdown"""
# Save user message
self.save_message("user", user_message)
self.context_messages.append({"role": "user", "content": user_message})
# Check if complex
if self.is_complex(user_message) and not force_model:
modules = self.extract_modules(user_message)
plan = self.build_modular_plan(modules)
print("\n" + "=" * 50)
print("🎯 COMPLEX REQUEST DETECTED!")
print(f"Modules needed: {', '.join(modules)}")
print(f"Breaking into {len(plan)} manageable steps")
print("=" * 50)
response = self.execute_modular_build(user_message, modules, plan)
else:
# Simple request - use balanced model
model = force_model or "balanced"
print(f"\n⚡ Using {model.upper()} model...")
response = self.call_model(model, user_message)
# Save response
self.save_message("assistant", response)
self.context_messages.append({"role": "assistant", "content": response})
return response
def run(self):
"""Main loop"""
try:
force_model = None
while True:
user_input = input("\nJames: ").strip()
if not user_input:
continue
if user_input.lower() in ['exit', 'quit']:
print("\n👋 Later!")
self.end_session()
break
if user_input.startswith('/'):
cmd = user_input.lower()
if cmd == '/fast':
force_model = "fast"
print("⚡ Next: FAST model")
continue
elif cmd == '/balanced':
force_model = "balanced"
print("⚖️ Next: BALANCED model")
continue
elif cmd == '/help':
print("\n💡 Commands:")
print("/fast - Use fast model")
print("/balanced - Use balanced model")
print("/help - This message")
print("exit - End session\n")
continue
else:
print("\nUnknown command. Type /help")
continue
# Chat
response = self.chat(user_input, force_model)
print(f"\nBuddAI:\n{response}\n")
force_model = None
except KeyboardInterrupt:
print("\n\n👋 Bye!")
self.end_session()
def check_ollama():
try:
conn = http.client.HTTPConnection(OLLAMA_HOST, OLLAMA_PORT, timeout=5)
conn.request("GET", "/api/tags")
response = conn.getresponse()
conn.close()
return response.status == 200
    except Exception:
        return False
def main():
if not check_ollama():
print("❌ Ollama not running. Start: ollama serve")
sys.exit(1)
buddai = BuddAI()
buddai.run()
if __name__ == "__main__":
main()

archive/buddai_exec.py (new file, 471 lines)

@@ -0,0 +1,471 @@
#!/usr/bin/env python3
"""
BuddAI Executive - Self-Learning Router
Simple weighted decision-making with feedback loop
Author: James Gilbert
License: MIT
"""
import os
import sys
import json
import sqlite3
from datetime import datetime
from pathlib import Path
import http.client
import random
# Configuration
OLLAMA_HOST = "localhost"
OLLAMA_PORT = 11434
DATA_DIR = Path(__file__).parent / "data"
DB_PATH = DATA_DIR / "conversations.db"
# Available models
MODELS = {
"fast": "qwen2.5-coder:1.5b", # 5-10s
"balanced": "qwen2.5-coder:3b", # 15-30s (NEW - better for your slow laptop)
"quality": "deepseek-coder:6.7b" # 60-180s
}
# Decision weights (start balanced, will learn over time)
WEIGHTS = {
# Fast triggers (simple questions, chat)
"what": {"fast": 10, "balanced": 0, "quality": 0},
"who": {"fast": 10, "balanced": 0, "quality": 0},
"hello": {"fast": 10, "balanced": 0, "quality": 0},
"hi": {"fast": 10, "balanced": 0, "quality": 0},
"name": {"fast": 10, "balanced": 0, "quality": 0},
"remember": {"fast": 8, "balanced": 2, "quality": 0},
# Balanced triggers (medium complexity)
"generate": {"fast": 2, "balanced": 8, "quality": 2},
"create": {"fast": 2, "balanced": 8, "quality": 2},
"write": {"fast": 2, "balanced": 8, "quality": 2},
"code": {"fast": 1, "balanced": 8, "quality": 3},
"function": {"fast": 2, "balanced": 8, "quality": 2},
# Quality triggers (complex tasks - use sparingly on slow laptop)
"complete": {"fast": 0, "balanced": 5, "quality": 10},
"complex": {"fast": 0, "balanced": 3, "quality": 10},
"debug": {"fast": 0, "balanced": 5, "quality": 8},
"fix": {"fast": 1, "balanced": 7, "quality": 5},
"build": {"fast": 0, "balanced": 6, "quality": 8},
"entire": {"fast": 0, "balanced": 4, "quality": 10},
# Length triggers
"simple": {"fast": 9, "balanced": 2, "quality": 0},
"quick": {"fast": 10, "balanced": 1, "quality": 0},
}
# Feedback counter (ask every N responses)
FEEDBACK_FREQUENCY = 5
class BuddAI:
"""Executive router with learning"""
def __init__(self):
self.ensure_data_dir()
self.init_database()
self.load_weights()
self.session_id = self.create_session()
self.context_messages = self.load_all_history(10)
self.response_count = 0
print("🧠 BuddAI Executive - Learning Router")
print("=" * 50)
print(f"Session: {self.session_id}")
        print("FAST (5-10s) | BALANCED (15-30s) | QUALITY (60-180s)")
print(f"Loaded: {len(self.context_messages)} past messages")
print("=" * 50)
print("\nCommands: /fast, /balanced, /quality, /weights, exit\n")
def ensure_data_dir(self):
DATA_DIR.mkdir(exist_ok=True)
def init_database(self):
conn = sqlite3.connect(DB_PATH)
cursor = conn.cursor()
cursor.execute("""
CREATE TABLE IF NOT EXISTS sessions (
session_id TEXT PRIMARY KEY,
started_at TIMESTAMP,
ended_at TIMESTAMP,
message_count INTEGER DEFAULT 0
)
""")
cursor.execute("""
CREATE TABLE IF NOT EXISTS messages (
id INTEGER PRIMARY KEY AUTOINCREMENT,
session_id TEXT,
role TEXT,
content TEXT,
model_used TEXT,
timestamp TIMESTAMP,
FOREIGN KEY (session_id) REFERENCES sessions(session_id)
)
""")
# Migration: Add model_used column if it doesn't exist
try:
cursor.execute("SELECT model_used FROM messages LIMIT 1")
except sqlite3.OperationalError:
print("📦 Migrating database: adding model_used column...")
cursor.execute("ALTER TABLE messages ADD COLUMN model_used TEXT")
print("✅ Migration complete\n")
# Learning table
cursor.execute("""
CREATE TABLE IF NOT EXISTS routing_feedback (
id INTEGER PRIMARY KEY AUTOINCREMENT,
query TEXT,
chosen_model TEXT,
feedback TEXT,
timestamp TIMESTAMP
)
""")
# Weights table
cursor.execute("""
CREATE TABLE IF NOT EXISTS weights (
keyword TEXT PRIMARY KEY,
fast_weight INTEGER,
quality_weight INTEGER
)
""")
conn.commit()
conn.close()
def load_weights(self):
"""Load learned weights from database or use defaults"""
conn = sqlite3.connect(DB_PATH)
cursor = conn.cursor()
cursor.execute("SELECT keyword, fast_weight, quality_weight FROM weights")
rows = cursor.fetchall()
conn.close()
        # Update weights with learned values; only overwrite the fast and
        # quality tiers, since this schema does not persist the balanced
        # weight (replacing the whole dict would drop the "balanced" key
        # and break three-tier routing)
        for keyword, fast_w, quality_w in rows:
            if keyword in WEIGHTS:
                WEIGHTS[keyword]["fast"] = fast_w
                WEIGHTS[keyword]["quality"] = quality_w
def save_weights(self):
"""Save current weights to database"""
conn = sqlite3.connect(DB_PATH)
cursor = conn.cursor()
for keyword, weights in WEIGHTS.items():
cursor.execute(
"INSERT OR REPLACE INTO weights (keyword, fast_weight, quality_weight) VALUES (?, ?, ?)",
(keyword, weights["fast"], weights["quality"])
)
conn.commit()
conn.close()
def decide_model(self, user_message):
"""Simple weighted decision based on keywords - now with 3 tiers"""
message_lower = user_message.lower()
fast_score = 0
balanced_score = 0
quality_score = 0
matched_keywords = []
# Check each keyword
for keyword, weights in WEIGHTS.items():
if keyword in message_lower:
fast_score += weights.get("fast", 0)
balanced_score += weights.get("balanced", 0)
quality_score += weights.get("quality", 0)
matched_keywords.append(keyword)
# Default to fast if no keywords matched
if fast_score == 0 and balanced_score == 0 and quality_score == 0:
return "fast", matched_keywords, 5 # low confidence
# Choose model based on highest score
total = fast_score + balanced_score + quality_score
scores = {
"fast": fast_score,
"balanced": balanced_score,
"quality": quality_score
}
chosen = max(scores, key=scores.get)
confidence = int((scores[chosen] / total) * 100) if total > 0 else 50
return chosen, matched_keywords, confidence
def adjust_weights(self, keywords, chosen_model, feedback):
"""Adjust weights based on feedback"""
if not keywords:
return
adjustment = 2 # How much to adjust
if feedback == "good":
# Reinforce this decision
for kw in keywords:
if kw in WEIGHTS:
WEIGHTS[kw][chosen_model] += adjustment
elif feedback == "faster":
# Should have used fast
for kw in keywords:
if kw in WEIGHTS:
WEIGHTS[kw]["fast"] += adjustment
WEIGHTS[kw]["quality"] -= adjustment
elif feedback == "better":
# Should have used quality
for kw in keywords:
if kw in WEIGHTS:
WEIGHTS[kw]["quality"] += adjustment
WEIGHTS[kw]["fast"] -= adjustment
        # Keep all tiers non-negative (.get covers entries that were loaded
        # without a "balanced" weight)
        for kw in WEIGHTS:
            for tier in ("fast", "balanced", "quality"):
                WEIGHTS[kw][tier] = max(0, WEIGHTS[kw].get(tier, 0))
self.save_weights()
def create_session(self):
session_id = datetime.now().strftime("%Y%m%d_%H%M%S")
conn = sqlite3.connect(DB_PATH)
cursor = conn.cursor()
cursor.execute(
"INSERT INTO sessions (session_id, started_at) VALUES (?, ?)",
(session_id, datetime.now().isoformat())
)
conn.commit()
conn.close()
return session_id
def end_session(self):
conn = sqlite3.connect(DB_PATH)
cursor = conn.cursor()
cursor.execute(
"UPDATE sessions SET ended_at = ?, message_count = ? WHERE session_id = ?",
(datetime.now().isoformat(), len(self.context_messages), self.session_id)
)
conn.commit()
conn.close()
def save_message(self, role, content, model=None):
conn = sqlite3.connect(DB_PATH)
cursor = conn.cursor()
cursor.execute(
"INSERT INTO messages (session_id, role, content, model_used, timestamp) VALUES (?, ?, ?, ?, ?)",
(self.session_id, role, content, model, datetime.now().isoformat())
)
conn.commit()
conn.close()
def save_feedback(self, query, model, feedback):
conn = sqlite3.connect(DB_PATH)
cursor = conn.cursor()
cursor.execute(
"INSERT INTO routing_feedback (query, chosen_model, feedback, timestamp) VALUES (?, ?, ?, ?)",
(query, model, feedback, datetime.now().isoformat())
)
conn.commit()
conn.close()
def load_all_history(self, limit=10):
conn = sqlite3.connect(DB_PATH)
cursor = conn.cursor()
cursor.execute(
"SELECT role, content FROM messages ORDER BY timestamp DESC LIMIT ?",
(limit,)
)
messages = cursor.fetchall()
conn.close()
return [{"role": role, "content": content} for role, content in reversed(messages)]
def call_model(self, model_name, user_message):
"""Call specific model"""
try:
# Build context
messages = []
for msg in self.context_messages[-5:]:
messages.append(msg)
# Add identity and current message
identity = "[You are BuddAI. Help James build GilBots. Be direct and helpful.]\n\n"
messages.append({"role": "user", "content": identity + user_message})
body = {
"model": MODELS[model_name],
"messages": messages,
"stream": False,
"options": {"temperature": 0.7, "num_ctx": 2048}
}
conn = http.client.HTTPConnection(OLLAMA_HOST, OLLAMA_PORT, timeout=180) # 3 minutes for quality
headers = {"Content-Type": "application/json"}
json_body = json.dumps(body)
conn.request("POST", "/api/chat", json_body, headers)
response = conn.getresponse()
if response.status == 200:
data = json.loads(response.read().decode('utf-8'))
return data.get("message", {}).get("content", "No response")
else:
return f"Error: {response.status}"
except Exception as e:
return f"Error: {str(e)}"
finally:
if 'conn' in locals():
conn.close()
def chat(self, user_message, force_model=None):
"""Main chat with routing"""
# Decide which model to use
if force_model:
chosen_model = force_model
keywords = []
confidence = 100
else:
chosen_model, keywords, confidence = self.decide_model(user_message)
# Show decision
print(f"\n🎯 Using: {chosen_model.upper()} model", end="")
if keywords:
print(f" (matched: {', '.join(keywords[:3])})", end="")
print(f" - confidence: {confidence}%")
print("⚡ Thinking...\n")
# Save user message
self.save_message("user", user_message)
self.context_messages.append({"role": "user", "content": user_message})
# Call model
response = self.call_model(chosen_model, user_message)
# Save response
self.save_message("assistant", response, chosen_model)
self.context_messages.append({"role": "assistant", "content": response})
# Track for feedback
self.last_query = user_message
self.last_model = chosen_model
self.last_keywords = keywords
self.response_count += 1
return response
def ask_feedback(self):
"""Occasionally ask for feedback"""
print("\n" + "=" * 50)
print(f"Was {self.last_model.upper()} the right choice?")
print(" good - Perfect!")
print(" faster - Too slow, use FAST next time")
print(" better - Too basic, use QUALITY next time")
print(" skip - Don't adjust")
feedback = input("Feedback: ").strip().lower()
print("=" * 50)
if feedback in ["good", "faster", "better"]:
self.adjust_weights(self.last_keywords, self.last_model, feedback)
self.save_feedback(self.last_query, self.last_model, feedback)
print(f"✅ Learned! Weights updated.\n")
else:
print("⏭️ Skipped\n")
def show_weights(self):
"""Show current weights"""
print("\n📊 Current Routing Weights")
print("=" * 50)
for keyword, weights in sorted(WEIGHTS.items()):
total = weights["fast"] + weights["balanced"] + weights["quality"]
if total > 0:
fast_pct = int((weights["fast"] / total) * 100)
quality_pct = int((weights["quality"] / total) * 100)
bar = "█" * (fast_pct // 5) + "░" * (quality_pct // 5)
print(f"{keyword:12} [{bar}] F:{weights['fast']} B:{weights['balanced']} Q:{weights['quality']}")
print("=" * 50 + "\n")
def run(self):
"""Main loop"""
try:
force_model = None
while True:
user_input = input("James: ").strip()
if not user_input:
continue
if user_input.lower() in ['exit', 'quit']:
print("\n👋 Later!")
self.end_session()
break
if user_input.startswith('/'):
cmd = user_input.lower()
if cmd == '/fast':
force_model = "fast"
print("⚡ Next response: FAST model\n")
continue
elif cmd == '/balanced':
force_model = "balanced"
print("⚖️ Next response: BALANCED model\n")
continue
elif cmd == '/quality':
force_model = "quality"
print("🎯 Next response: QUALITY model\n")
continue
elif cmd == '/weights':
self.show_weights()
continue
else:
print("\nCommands: /fast, /balanced, /quality, /weights\n")
continue
# Chat
response = self.chat(user_input, force_model)
print(f"BuddAI: {response}\n")
# Reset force
force_model = None
# Ask for feedback occasionally
if self.response_count % FEEDBACK_FREQUENCY == 0:
self.ask_feedback()
except KeyboardInterrupt:
print("\n\n👋 Bye!")
self.end_session()
except Exception as e:
print(f"\n{e}")
self.end_session()
raise
def check_ollama():
try:
conn = http.client.HTTPConnection(OLLAMA_HOST, OLLAMA_PORT, timeout=5)
conn.request("GET", "/api/tags")
response = conn.getresponse()
conn.close()
return response.status == 200
except Exception:
return False
def main():
if not check_ollama():
print("❌ Ollama not running. Start: ollama serve")
sys.exit(1)
buddai = BuddAI()
buddai.run()
if __name__ == "__main__":
main()
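The weighted three-tier routing in `decide_model()` above can be sketched in isolation. The `WEIGHTS` values below are illustrative assumptions, not the learned weights BuddAI persists:

```python
# Minimal sketch of three-tier keyword routing (hypothetical weights).
WEIGHTS = {
    "debug": {"fast": 6, "balanced": 2, "quality": 2},
    "architecture": {"fast": 0, "balanced": 3, "quality": 7},
}

def decide_model(user_message):
    """Score each tier by matched keywords; return (tier, matches, confidence %)."""
    message_lower = user_message.lower()
    scores = {"fast": 0, "balanced": 0, "quality": 0}
    matched = []
    for keyword, weights in WEIGHTS.items():
        if keyword in message_lower:
            for tier in scores:
                scores[tier] += weights.get(tier, 0)
            matched.append(keyword)
    total = sum(scores.values())
    if total == 0:
        return "fast", matched, 5  # default tier, low confidence
    chosen = max(scores, key=scores.get)
    return chosen, matched, int(scores[chosen] / total * 100)

print(decide_model("debug this loop"))          # ('fast', ['debug'], 60)
print(decide_model("design the architecture"))  # ('quality', ['architecture'], 70)
```

Feedback-driven weight adjustment then just nudges these per-keyword numbers up or down, which is why the confidence is a simple share of the total score.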

archive/buddai_force.py (new file)
@@ -0,0 +1,274 @@
#!/usr/bin/env python3
"""
BuddAI - Brute Force Edition
Injects identity directly into every prompt since system prompts aren't working
Author: James Gilbert
License: MIT
"""
import os
import sys
import json
import sqlite3
from datetime import datetime
from pathlib import Path
import http.client
# Configuration
OLLAMA_HOST = "localhost"
OLLAMA_PORT = 11434
DATA_DIR = Path(__file__).parent / "data"
DB_PATH = DATA_DIR / "conversations.db"
OLLAMA_MODEL = "qwen2.5-coder:1.5b" # Fast model for slow laptop
# Identity that gets PREPENDED to EVERY message
IDENTITY = """[You are BuddAI (not Qwen, not any other AI). You help James build GilBots.
When asked for code, GENERATE THE CODE. Be direct and practical.]
James: """
class BuddAI:
"""BuddAI with forced identity injection"""
def __init__(self):
self.ensure_data_dir()
self.init_database()
self.session_id = self.create_session()
# Load context from ALL past sessions (last 10 messages)
self.context_messages = self.load_all_history(10)
print("⚡ BuddAI - Brute Force Mode")
print("=" * 50)
print(f"Session: {self.session_id}")
print(f"Model: {OLLAMA_MODEL}")
print(f"Loaded: {len(self.context_messages)} past messages")
print("=" * 50)
print("\nCommands: /help, /clear, exit\n")
def ensure_data_dir(self):
DATA_DIR.mkdir(exist_ok=True)
def init_database(self):
conn = sqlite3.connect(DB_PATH)
cursor = conn.cursor()
cursor.execute("""
CREATE TABLE IF NOT EXISTS sessions (
session_id TEXT PRIMARY KEY,
started_at TIMESTAMP,
ended_at TIMESTAMP,
message_count INTEGER DEFAULT 0
)
""")
cursor.execute("""
CREATE TABLE IF NOT EXISTS messages (
id INTEGER PRIMARY KEY AUTOINCREMENT,
session_id TEXT,
role TEXT,
content TEXT,
timestamp TIMESTAMP,
FOREIGN KEY (session_id) REFERENCES sessions(session_id)
)
""")
conn.commit()
conn.close()
def create_session(self):
session_id = datetime.now().strftime("%Y%m%d_%H%M%S")
conn = sqlite3.connect(DB_PATH)
cursor = conn.cursor()
cursor.execute(
"INSERT INTO sessions (session_id, started_at) VALUES (?, ?)",
(session_id, datetime.now().isoformat())
)
conn.commit()
conn.close()
return session_id
def end_session(self):
conn = sqlite3.connect(DB_PATH)
cursor = conn.cursor()
cursor.execute(
"UPDATE sessions SET ended_at = ?, message_count = ? WHERE session_id = ?",
(datetime.now().isoformat(), len(self.context_messages), self.session_id)
)
conn.commit()
conn.close()
def save_message(self, role, content):
conn = sqlite3.connect(DB_PATH)
cursor = conn.cursor()
cursor.execute(
"INSERT INTO messages (session_id, role, content, timestamp) VALUES (?, ?, ?, ?)",
(self.session_id, role, content, datetime.now().isoformat())
)
conn.commit()
conn.close()
def load_all_history(self, limit=10):
"""Load recent messages from ALL sessions for persistent memory"""
conn = sqlite3.connect(DB_PATH)
cursor = conn.cursor()
cursor.execute(
"""
SELECT role, content FROM messages
ORDER BY timestamp DESC
LIMIT ?
""",
(limit,)
)
messages = cursor.fetchall()
conn.close()
# Reverse to get chronological order
return [{"role": role, "content": content} for role, content in reversed(messages)]
def call_ollama_api(self, user_message):
"""Call Ollama with context summary"""
try:
# Build context summary from recent messages
context_summary = ""
if len(self.context_messages) > 0:
recent = self.context_messages[-5:]
context_summary = "\nRecent conversation:\n"
for msg in recent:
role = "James" if msg["role"] == "user" else "BuddAI"
preview = msg["content"][:100]
context_summary += f"{role}: {preview}\n"
context_summary += "\n"
# Add ALL historical messages
messages = []
for msg in self.context_messages[-5:]:
messages.append(msg)
# Add current message with identity AND context summary
forced_prompt = IDENTITY + context_summary + user_message
messages.append({"role": "user", "content": forced_prompt})
body = {
"model": OLLAMA_MODEL,
"messages": messages,
"stream": False,
"options": {
"temperature": 0.7,
"num_ctx": 2048
}
}
conn = http.client.HTTPConnection(OLLAMA_HOST, OLLAMA_PORT, timeout=60)
headers = {"Content-Type": "application/json"}
json_body = json.dumps(body)
conn.request("POST", "/api/chat", json_body, headers)
response = conn.getresponse()
if response.status == 200:
data = json.loads(response.read().decode('utf-8'))
return data.get("message", {}).get("content", "No response")
else:
error_text = response.read().decode('utf-8')
return f"API Error: {error_text[:100]}"
except Exception as e:
return f"Error: {str(e)}"
finally:
if 'conn' in locals():
conn.close()
def chat(self, user_message):
"""Main chat"""
# Save the CLEAN user message (without identity prefix)
self.save_message("user", user_message)
user_msg = {"role": "user", "content": user_message}
self.context_messages.append(user_msg)
print("\n⚡ Thinking...\n")
response = self.call_ollama_api(user_message)
self.save_message("assistant", response)
assistant_msg = {"role": "assistant", "content": response}
self.context_messages.append(assistant_msg)
return response
def show_help(self):
print("\n💡 Commands")
print("=" * 50)
print("/help - This message")
print("/clear - Clear context")
print("/stats - Session stats")
print("exit - End session")
print("=" * 50 + "\n")
def clear_context(self):
self.context_messages = []
print("\n🧹 Cleared\n")
def show_stats(self):
print(f"\n📊 Messages this session: {len(self.context_messages) // 2}\n")
def run(self):
"""Main loop"""
try:
while True:
user_input = input("James: ").strip()
if not user_input:
continue
if user_input.lower() in ['exit', 'quit']:
print("\n👋 Later!")
self.end_session()
break
if user_input.startswith('/'):
if user_input == '/help':
self.show_help()
elif user_input == '/clear':
self.clear_context()
elif user_input == '/stats':
self.show_stats()
else:
print("\nUnknown command. Type /help\n")
continue
response = self.chat(user_input)
print(f"\nBuddAI: {response}\n")
except KeyboardInterrupt:
print("\n\n👋 Bye!")
self.end_session()
except Exception as e:
print(f"\n{e}")
self.end_session()
raise
def check_ollama():
try:
conn = http.client.HTTPConnection(OLLAMA_HOST, OLLAMA_PORT, timeout=5)
conn.request("GET", "/api/tags")
response = conn.getresponse()
conn.close()
return response.status == 200
except Exception:
return False
def main():
if not check_ollama():
print("❌ Ollama not running. Start it: ollama serve")
sys.exit(1)
buddai = BuddAI()
buddai.run()
if __name__ == "__main__":
main()

archive/buddai_turbo.py (new file)
@@ -0,0 +1,385 @@
#!/usr/bin/env python3
"""
BuddAI Turbo - Optimized for Slow Hardware
Switchable performance modes + model selection
Author: James Gilbert (JamesTheGiblet)
License: MIT
"""
import os
import sys
import json
import sqlite3
from datetime import datetime
from pathlib import Path
import http.client
# Configuration
OLLAMA_HOST = "localhost"
OLLAMA_PORT = 11434
DATA_DIR = Path(__file__).parent / "data"
DB_PATH = DATA_DIR / "conversations.db"
# Available models (fastest to slowest)
MODELS = {
"tiny": "qwen2.5-coder:1.5b", # 5-10 sec, basic
"fast": "deepseek-coder:1.3b", # 10-20 sec, decent
"balanced": "qwen2.5-coder:3b", # 20-40 sec, good
"quality": "deepseek-coder:6.7b" # 40-90 sec, best
}
# System prompts (lean to verbose)
PROMPTS = {
"turbo": """I am BuddAI. James's coding partner.
James: Polymath. 115+ repos. Builds GilBots (ESP32 combat robots). Forge Theory creator.
My job: Generate code. Remember context. Be direct. No corporate speak.
Current: GilBot #1 flipper, ESP32-C3, 15kg servo.""",
"balanced": """I am BuddAI, James Gilbert's coding partner.
James builds:
- GilBots: ESP32-C3 combat robots (NOW)
- CoffeeForge, CannaForge, BlockForge
- 115+ GitHub repos, 8+ years experience
His style: Modular, commented, practical. Function names like activateFlipper() not flip().
I generate code matching his style. I remember our conversations. I'm direct and helpful.""",
"detailed": """I am BuddAI, James Gilbert's IP AI Exocortex and coding partner.
WHO JAMES IS:
- Polymath creator: robotics, 3D printing, coffee/cannabis science
- JamesTheGiblet on GitHub: 115+ repositories
- Created Forge Theory: exponential decay framework
- Works in 20-hour creative cycles, rapid prototyping
- Expert debugger who uses AI for code generation
CURRENT PROJECT:
GilBot #1: Flipper combat robot, ESP32-C3, 15kg servo, BLE phone control
HIS CODING STYLE:
- Modular functions (small, focused)
- Descriptive names: activateFlipper() not flip()
- Inline comments explaining WHY
- Clean, simple, maintainable
MY ROLE:
- Generate code in his style
- Remember all conversations (I have persistent memory)
- Suggest approaches from his past work
- Be direct, practical, honest
- Learn from corrections
I am his partner. I work WITH him, not FOR him."""
}
# Default settings
DEFAULT_MODEL = "tiny" # Fast responses on slow laptop
DEFAULT_PROMPT = "balanced" # Good balance of context and speed
class BuddAI:
"""Turbo BuddAI with performance modes"""
def __init__(self):
"""Initialize"""
self.ensure_data_dir()
self.init_database()
self.session_id = self.create_session()
self.context_messages = []
# Performance settings
self.current_model = DEFAULT_MODEL
self.current_prompt = DEFAULT_PROMPT
self.max_context = 5 # Start conservative
self.show_banner()
def show_banner(self):
"""Show startup banner"""
print("⚡ BuddAI TURBO - Optimized for Speed")
print("=" * 50)
print(f"Session: {self.session_id}")
print(f"Mode: {self.current_model.upper()} + {self.current_prompt.upper()}")
print(f"Model: {MODELS[self.current_model]}")
print(f"Context: {self.max_context} messages")
print("=" * 50)
print("\nCommands: /help, /mode, /model, exit\n")
def ensure_data_dir(self):
DATA_DIR.mkdir(exist_ok=True)
def init_database(self):
conn = sqlite3.connect(DB_PATH)
cursor = conn.cursor()
cursor.execute("""
CREATE TABLE IF NOT EXISTS sessions (
session_id TEXT PRIMARY KEY,
started_at TIMESTAMP,
ended_at TIMESTAMP,
message_count INTEGER DEFAULT 0,
model_used TEXT,
prompt_mode TEXT
)
""")
cursor.execute("""
CREATE TABLE IF NOT EXISTS messages (
id INTEGER PRIMARY KEY AUTOINCREMENT,
session_id TEXT,
role TEXT,
content TEXT,
timestamp TIMESTAMP,
FOREIGN KEY (session_id) REFERENCES sessions(session_id)
)
""")
conn.commit()
conn.close()
def create_session(self):
session_id = datetime.now().strftime("%Y%m%d_%H%M%S")
conn = sqlite3.connect(DB_PATH)
cursor = conn.cursor()
cursor.execute(
"INSERT INTO sessions (session_id, started_at, model_used, prompt_mode) VALUES (?, ?, ?, ?)",
(session_id, datetime.now().isoformat(), DEFAULT_MODEL, DEFAULT_PROMPT)
)
conn.commit()
conn.close()
return session_id
def end_session(self):
conn = sqlite3.connect(DB_PATH)
cursor = conn.cursor()
cursor.execute(
"UPDATE sessions SET ended_at = ?, message_count = ? WHERE session_id = ?",
(datetime.now().isoformat(), len(self.context_messages), self.session_id)
)
conn.commit()
conn.close()
def save_message(self, role, content):
conn = sqlite3.connect(DB_PATH)
cursor = conn.cursor()
cursor.execute(
"INSERT INTO messages (session_id, role, content, timestamp) VALUES (?, ?, ?, ?)",
(self.session_id, role, content, datetime.now().isoformat())
)
conn.commit()
conn.close()
def call_ollama_api(self, user_message):
"""Call Ollama with current settings"""
try:
messages = [
{"role": "system", "content": PROMPTS[self.current_prompt]}
]
# Add limited context
for msg in self.context_messages[-self.max_context:]:
messages.append(msg)
messages.append({"role": "user", "content": user_message})
body = {
"model": MODELS[self.current_model],
"messages": messages,
"stream": False,
"options": {
"num_ctx": 2048, # Smaller context window = faster
"temperature": 0.7
}
}
conn = http.client.HTTPConnection(OLLAMA_HOST, OLLAMA_PORT, timeout=180)
headers = {"Content-Type": "application/json"}
json_body = json.dumps(body)
conn.request("POST", "/api/chat", json_body, headers)
response = conn.getresponse()
if response.status == 200:
data = json.loads(response.read().decode('utf-8'))
return data.get("message", {}).get("content", "No response")
else:
error_text = response.read().decode('utf-8')
return f"API Error {response.status}: {error_text}"
except Exception as e:
return f"Error: {str(e)}"
finally:
if 'conn' in locals():
conn.close()
def chat(self, user_message):
"""Main chat"""
self.save_message("user", user_message)
user_msg = {"role": "user", "content": user_message}
self.context_messages.append(user_msg)
print(f"\n⚡ Thinking ({self.current_model})...\n")
response = self.call_ollama_api(user_message)
self.save_message("assistant", response)
assistant_msg = {"role": "assistant", "content": response}
self.context_messages.append(assistant_msg)
return response
def show_help(self):
print("\n💡 Commands")
print("=" * 50)
print("/help - This message")
print("/mode - Change prompt mode (turbo/balanced/detailed)")
print("/model - Change model (tiny/fast/balanced/quality)")
print("/context N - Set context size (1-20)")
print("/stats - Session stats")
print("/clear - Clear context")
print("exit - End session")
print("=" * 50)
print(f"\nCurrent: {self.current_model} model + {self.current_prompt} prompt")
print(f"Context: {self.max_context} messages\n")
def change_mode(self):
print("\nPrompt Modes:")
print("1. turbo - Ultra-concise (fastest)")
print("2. balanced - Normal detail")
print("3. detailed - Full context")
choice = input("\nChoose mode (1-3): ").strip()
modes = {"1": "turbo", "2": "balanced", "3": "detailed"}
if choice in modes:
self.current_prompt = modes[choice]
print(f"✅ Switched to {self.current_prompt} mode\n")
else:
print("❌ Invalid choice\n")
def change_model(self):
print("\nModels:")
print("1. tiny - 1.5b (5-10s, basic)")
print("2. fast - 1.3b (10-20s, decent)")
print("3. balanced - 3b (20-40s, good)")
print("4. quality - 6.7b (40-90s, best)")
choice = input("\nChoose model (1-4): ").strip()
models = {"1": "tiny", "2": "fast", "3": "balanced", "4": "quality"}
if choice in models:
self.current_model = models[choice]
print(f"✅ Switched to {self.current_model} ({MODELS[self.current_model]})\n")
else:
print("❌ Invalid choice\n")
def set_context(self, n):
try:
n = int(n)
if 1 <= n <= 20:
self.max_context = n
print(f"✅ Context set to {n} messages\n")
else:
print("❌ Context must be 1-20\n")
except ValueError:
print("❌ Invalid number\n")
def show_stats(self):
conn = sqlite3.connect(DB_PATH)
cursor = conn.cursor()
cursor.execute("SELECT COUNT(*) FROM messages WHERE session_id = ?", (self.session_id,))
session_count = cursor.fetchone()[0]
cursor.execute("SELECT COUNT(*) FROM messages")
total_count = cursor.fetchone()[0]
conn.close()
print("\n📊 Stats")
print("=" * 50)
print(f"This session: {session_count} messages")
print(f"Total: {total_count} messages")
print(f"Model: {MODELS[self.current_model]}")
print(f"Prompt: {self.current_prompt}")
print(f"Context: {self.max_context}")
print("=" * 50 + "\n")
def clear_context(self):
self.context_messages = []
print("\n🧹 Context cleared\n")
def run(self):
"""Main loop"""
try:
while True:
user_input = input("James: ").strip()
if not user_input:
continue
if user_input.lower() in ['exit', 'quit']:
print("\n👋 Later!")
self.end_session()
break
if user_input.startswith('/'):
cmd = user_input.lower().split()
if cmd[0] == '/help':
self.show_help()
elif cmd[0] == '/mode':
self.change_mode()
elif cmd[0] == '/model':
self.change_model()
elif cmd[0] == '/context' and len(cmd) > 1:
self.set_context(cmd[1])
elif cmd[0] == '/stats':
self.show_stats()
elif cmd[0] == '/clear':
self.clear_context()
else:
print(f"\nUnknown command. Type /help\n")
continue
response = self.chat(user_input)
print(f"\nBuddAI: {response}\n")
except KeyboardInterrupt:
print("\n\n👋 Interrupted")
self.end_session()
except Exception as e:
print(f"\n❌ Error: {e}")
self.end_session()
raise
def check_ollama():
try:
conn = http.client.HTTPConnection(OLLAMA_HOST, OLLAMA_PORT, timeout=5)
conn.request("GET", "/api/tags")
response = conn.getresponse()
conn.close()
return response.status == 200
except Exception:
return False
def main():
print("⚡ BuddAI Turbo Starting...")
if not check_ollama():
print("❌ Ollama not running")
print("\nStart it: ollama serve")
sys.exit(1)
print("✅ Ollama ready\n")
buddai = BuddAI()
buddai.run()
if __name__ == "__main__":
main()
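The Turbo prompts above describe Forge Theory as an exponential decay framework. A minimal Python sketch of one decay step (the servo angles and the k=0.3 constant are illustrative values, not project settings):

```python
import math

def apply_forge(current, target, k):
    """One exponential-decay step toward target; larger k = snappier response."""
    return target + (current - target) * math.exp(-k)

# Illustrative: ease a servo angle from 0 toward 90 degrees over 10 ticks.
angle = 0.0
for _ in range(10):
    angle = apply_forge(angle, 90.0, 0.3)
print(round(angle, 1))  # 85.5, i.e. 90 * (1 - e^-3)
```

Each tick closes a fixed fraction of the remaining gap, so motion starts fast and settles smoothly without ever overshooting the target.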

archive/buddai_v2.py (new file)
@@ -0,0 +1,544 @@
#!/usr/bin/env python3
"""
BuddAI Executive v2.0 - Modular Builder
Breaks complex tasks into manageable chunks
Author: James Gilbert
License: MIT
"""
import os
import sys
import json
import sqlite3
from datetime import datetime
from pathlib import Path
import http.client
import re
# Configuration
OLLAMA_HOST = "localhost"
OLLAMA_PORT = 11434
DATA_DIR = Path(__file__).parent / "data"
DB_PATH = DATA_DIR / "conversations.db"
# Models
MODELS = {
"fast": "qwen2.5-coder:1.5b",
"balanced": "qwen2.5-coder:3b"
}
# Complexity triggers - if matched, break down the task
COMPLEX_TRIGGERS = [
"complete", "entire", "full", "build entire", "build complete",
"with ble and", "with servo and", "including", "all of"
]
# Module patterns we can detect
MODULE_PATTERNS = {
"ble": ["bluetooth", "ble", "wireless"],
"servo": ["servo", "flipper", "weapon"],
"motor": ["motor", "drive", "movement", "l298n"],
"safety": ["safety", "timeout", "failsafe", "emergency"],
"battery": ["battery", "voltage", "power monitor"],
"sensor": ["sensor", "distance", "proximity"]
}
class BuddAI:
"""Executive with task breakdown"""
def __init__(self):
self.ensure_data_dir()
self.init_database()
self.session_id = self.create_session()
self.context_messages = []
print("🧠 BuddAI Executive v2.0 - Modular Builder")
print("=" * 50)
print(f"Session: {self.session_id}")
print(f"FAST (5-10s) | BALANCED (15-30s)")
print(f"Smart task breakdown for complex requests")
print("=" * 50)
print("\nCommands: /fast, /balanced, /help, exit\n")
def ensure_data_dir(self):
DATA_DIR.mkdir(exist_ok=True)
def init_database(self):
conn = sqlite3.connect(DB_PATH)
cursor = conn.cursor()
cursor.execute("""
CREATE TABLE IF NOT EXISTS sessions (
session_id TEXT PRIMARY KEY,
started_at TIMESTAMP,
ended_at TIMESTAMP
)
""")
cursor.execute("""
CREATE TABLE IF NOT EXISTS messages (
id INTEGER PRIMARY KEY AUTOINCREMENT,
session_id TEXT,
role TEXT,
content TEXT,
timestamp TIMESTAMP
)
""")
cursor.execute("""
CREATE TABLE IF NOT EXISTS repo_index (
id INTEGER PRIMARY KEY AUTOINCREMENT,
file_path TEXT,
repo_name TEXT,
function_name TEXT,
content TEXT,
last_modified TIMESTAMP
)
""")
conn.commit()
conn.close()
def create_session(self):
session_id = datetime.now().strftime("%Y%m%d_%H%M%S")
conn = sqlite3.connect(DB_PATH)
cursor = conn.cursor()
cursor.execute(
"INSERT INTO sessions (session_id, started_at) VALUES (?, ?)",
(session_id, datetime.now().isoformat())
)
conn.commit()
conn.close()
return session_id
def end_session(self):
conn = sqlite3.connect(DB_PATH)
cursor = conn.cursor()
cursor.execute(
"UPDATE sessions SET ended_at = ? WHERE session_id = ?",
(datetime.now().isoformat(), self.session_id)
)
conn.commit()
conn.close()
def save_message(self, role, content):
conn = sqlite3.connect(DB_PATH)
cursor = conn.cursor()
cursor.execute(
"INSERT INTO messages (session_id, role, content, timestamp) VALUES (?, ?, ?, ?)",
(self.session_id, role, content, datetime.now().isoformat())
)
conn.commit()
conn.close()
def index_local_repositories(self, root_path):
"""Crawl directories and index .py, .ino, and .cpp files"""
import ast
print(f"\n🔍 Indexing repositories in: {root_path}")
path = Path(root_path)
if not path.exists():
print(f"❌ Path not found: {root_path}")
return
conn = sqlite3.connect(DB_PATH)
cursor = conn.cursor()
count = 0
for file_path in path.rglob('*'):
if file_path.is_file() and file_path.suffix in ['.py', '.ino', '.cpp', '.h']:
try:
with open(file_path, 'r', encoding='utf-8', errors='ignore') as f:
content = f.read()
functions = []
# Python parsing
if file_path.suffix == '.py':
try:
tree = ast.parse(content)
for node in ast.walk(tree):
if isinstance(node, ast.FunctionDef):
functions.append(node.name)
except (SyntaxError, ValueError):
pass
# C++/Arduino parsing
elif file_path.suffix in ['.ino', '.cpp', '.h']:
matches = re.findall(r'\b(?:void|int|bool|float|double|String|char)\s+(\w+)\s*\(', content)
functions.extend(matches)
# Determine repo name
try:
repo_name = file_path.relative_to(path).parts[0]
except ValueError:
repo_name = "unknown"
timestamp = datetime.fromtimestamp(file_path.stat().st_mtime)
for func in functions:
cursor.execute("""
INSERT INTO repo_index (file_path, repo_name, function_name, content, last_modified)
VALUES (?, ?, ?, ?, ?)
""", (str(file_path), repo_name, func, content, timestamp.isoformat()))
count += 1
except Exception:
pass
conn.commit()
conn.close()
print(f"✅ Indexed {count} functions across repositories")
def retrieve_style_context(self, message):
"""Search repo_index for code snippets matching the request"""
# Extract potential keywords (nouns/modules)
keywords = re.findall(r'\b\w{4,}\b', message.lower())
if not keywords:
return ""
conn = sqlite3.connect(DB_PATH)
cursor = conn.cursor()
# Build a parameterized search over function names and repo names
clauses = " OR ".join(["function_name LIKE ?"] * len(keywords) + ["repo_name LIKE ?"] * len(keywords))
params = [f"%{k}%" for k in keywords] * 2
query = f"SELECT repo_name, function_name, content FROM repo_index WHERE {clauses} LIMIT 2"
cursor.execute(query, params)
results = cursor.fetchall()
conn.close()
if not results:
return ""
context_block = "\n[REFERENCE STYLE FROM JAMES'S PAST PROJECTS]\n"
for repo, func, content in results:
# Just grab the first 500 chars of the file to save context window
snippet = content[:500] + "..."
context_block += f"Repo: {repo} | Function: {func}\nCode:\n{snippet}\n---\n"
return context_block
def is_simple_question(self, message):
"""Check if this is a simple question that should use FAST model"""
message_lower = message.lower()
simple_triggers = [
"what is", "what's", "who is", "who's", "when is",
"how do i", "can you explain", "tell me about",
"what are", "where is"
]
# Also check if it's just a question without code keywords
code_keywords = ["generate", "create", "write", "build", "code", "function"]
has_simple_trigger = any(trigger in message_lower for trigger in simple_triggers)
has_code_keyword = any(keyword in message_lower for keyword in code_keywords)
# Simple if: has simple trigger AND no code keywords
return has_simple_trigger and not has_code_keyword
def is_complex(self, message):
"""Check if request is too complex and should be broken down"""
message_lower = message.lower()
# Count complexity triggers
trigger_count = sum(1 for trigger in COMPLEX_TRIGGERS if trigger in message_lower)
# Count how many modules mentioned
module_count = 0
for module, keywords in MODULE_PATTERNS.items():
if any(kw in message_lower for kw in keywords):
module_count += 1
# Complex if: multiple triggers OR 3+ modules mentioned
return trigger_count >= 2 or module_count >= 3
def extract_modules(self, message):
"""Extract which modules are needed"""
message_lower = message.lower()
needed_modules = []
for module, keywords in MODULE_PATTERNS.items():
if any(kw in message_lower for kw in keywords):
needed_modules.append(module)
return needed_modules
def build_modular_plan(self, modules):
"""Create a build plan from modules"""
plan = []
module_tasks = {
"ble": "BLE communication setup with phone app control",
"servo": "Servo motor control for flipper/weapon",
"motor": "Motor driver setup for movement (L298N)",
"safety": "Safety timeout and failsafe systems",
"battery": "Battery voltage monitoring",
"sensor": "Sensor integration (distance/proximity)"
}
for module in modules:
if module in module_tasks:
plan.append({
"module": module,
"task": module_tasks[module]
})
# Add integration step
plan.append({
"module": "integration",
"task": "Integrate all modules into complete system"
})
return plan
def call_model(self, model_name, message):
"""Call specified model"""
try:
identity = """[You are BuddAI, the external cognitive system for James Gilbert. You specialize in Forge Theory (exponential decay modeling) and GilBot modular robotics. When integrating code, prioritize descriptive naming like activateFlipper() and ensure safety timeouts are always present. You represent 8 years of polymath experience.
YOUR PRIMARY JOB: Generate code when asked. ALWAYS generate code if requested.
When asked to generate/create/write code:
- Generate it immediately
- Include comments
- Make it modular and clean
- Use ESP32/Arduino syntax
Forge Theory Snippet: float applyForge(float current, float target, float k) { return target + (current - target) * exp(-k); }
When asked your name: "I am BuddAI"
Never refuse to generate code. That's your purpose.
Be direct and helpful.]
"""
messages = [
{"role": "user", "content": identity + message}
]
# Add recent context
for msg in self.context_messages[-3:]:
messages.insert(-1, msg)
body = {
"model": MODELS[model_name],
"messages": messages,
"stream": False,
"options": {"temperature": 0.7, "num_ctx": 2048}
}
conn = http.client.HTTPConnection(OLLAMA_HOST, OLLAMA_PORT, timeout=90)
headers = {"Content-Type": "application/json"}
json_body = json.dumps(body)
conn.request("POST", "/api/chat", json_body, headers)
response = conn.getresponse()
if response.status == 200:
data = json.loads(response.read().decode('utf-8'))
return data.get("message", {}).get("content", "No response")
else:
return f"Error: {response.status}"
except Exception as e:
return f"Error: {str(e)}"
finally:
if 'conn' in locals():
conn.close()
def execute_modular_build(self, user_message, modules, plan):
"""Execute build plan step by step"""
print(f"\n🔨 MODULAR BUILD MODE")
print(f"Detected {len(modules)} modules: {', '.join(modules)}")
print(f"Breaking into {len(plan)} steps...\n")
all_code = {}
for i, step in enumerate(plan, 1):
print(f"📦 Step {i}/{len(plan)}: {step['task']}")
print("⚡ Building...\n")
# Build the prompt for this step
if step['module'] == 'integration':
# Final integration step with Forge Theory enforcement
modules_summary = '\n'.join([f"- {m}: {all_code[m][:150]}..." for m in modules if m in all_code])
# Ask James for the 'vibe' of the robot
print("\n⚡ FORGE THEORY TUNING:")
print("1. Aggressive (k=0.3) - High snap, combat ready")
print("2. Balanced (k=0.1) - Standard movement")
print("3. Graceful (k=0.03) - Roasting / Smooth curves")
choice = input("Select Forge Constant [1-3, default 2]: ")
k_val = "0.1"
if choice == "1": k_val = "0.3"
elif choice == "3": k_val = "0.03"
prompt = f"""INTEGRATION TASK: Combine modules into a cohesive GilBot system.
[MODULES]
{modules_summary}
[FORGE PARAMETERS]
Set k = {k_val} for all applyForge() calls.
[REQUIREMENTS]
1. Implement applyForge() math helper.
2. Use k={k_val} to smooth motor and servo transitions.
3. Ensure naming matches James's style: activateFlipper(), setMotors().
"""
else:
# Individual module
prompt = f"Generate ESP32-C3 code for: {step['task']}. Keep it modular with clear comments."
# Call balanced model for each module
response = self.call_model("balanced", prompt)
all_code[step['module']] = response
print(f"{step['module'].upper()} module complete\n")
print("-" * 50 + "\n")
# Compile final response
final = "# COMPLETE GILBOT CONTROLLER - MODULAR BUILD\n\n"
for module, code in all_code.items():
final += f"## {module.upper()} MODULE\n{code}\n\n"
return final
def chat(self, user_message, force_model=None):
"""Main chat with smart routing"""
# 1. Before routing, pull relevant style context
style_context = self.retrieve_style_context(user_message)
# 2. Add it to the session's context if found
if style_context:
# We add it as a system-level reminder for the model
self.context_messages.append({"role": "system", "content": style_context})
# Save user message
self.save_message("user", user_message)
self.context_messages.append({"role": "user", "content": user_message})
# Determine which approach to use
if force_model:
# User forced a specific model
model = force_model
print(f"\n⚡ Using {model.upper()} model (forced)...")
response = self.call_model(model, user_message)
elif self.is_complex(user_message):
# Complex request - use modular breakdown
modules = self.extract_modules(user_message)
plan = self.build_modular_plan(modules)
print("\n" + "=" * 50)
print("🎯 COMPLEX REQUEST DETECTED!")
print(f"Modules needed: {', '.join(modules)}")
print(f"Breaking into {len(plan)} manageable steps")
print("=" * 50)
response = self.execute_modular_build(user_message, modules, plan)
elif self.is_simple_question(user_message):
# Simple question - use FAST model
print("\n⚡ Using FAST model (simple question)...")
response = self.call_model("fast", user_message)
else:
# Medium complexity - use BALANCED model
print("\n⚖️ Using BALANCED model...")
response = self.call_model("balanced", user_message)
# Save response
self.save_message("assistant", response)
self.context_messages.append({"role": "assistant", "content": response})
return response
def run(self):
"""Main loop"""
try:
force_model = None
while True:
user_input = input("\nJames: ").strip()
if not user_input:
continue
if user_input.lower() in ['exit', 'quit']:
print("\n👋 Later!")
self.end_session()
break
if user_input.startswith('/'):
cmd = user_input.lower()
if cmd == '/fast':
force_model = "fast"
print("⚡ Next: FAST model")
continue
elif cmd == '/balanced':
force_model = "balanced"
print("⚖️ Next: BALANCED model")
continue
elif cmd == '/help':
print("\n💡 Commands:")
print("/fast - Use fast model")
print("/balanced - Use balanced model")
print("/index <path> - Index local repositories")
print("/help - This message")
print("exit - End session\n")
continue
elif cmd.startswith('/index'):
parts = user_input.split(maxsplit=1)
if len(parts) > 1:
self.index_local_repositories(parts[1])
else:
print("Usage: /index <path_to_repos>")
continue
else:
print("\nUnknown command. Type /help")
continue
# Chat
response = self.chat(user_input, force_model)
print(f"\nBuddAI:\n{response}\n")
force_model = None
except KeyboardInterrupt:
print("\n\n👋 Bye!")
self.end_session()
def check_ollama():
try:
conn = http.client.HTTPConnection(OLLAMA_HOST, OLLAMA_PORT, timeout=5)
conn.request("GET", "/api/tags")
response = conn.getresponse()
conn.close()
return response.status == 200
except Exception:
return False
def main():
if not check_ollama():
print("❌ Ollama not running. Start: ollama serve")
sys.exit(1)
buddai = BuddAI()
buddai.run()
if __name__ == "__main__":
main()

buddai_v3.py (new file, +789 lines)
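Before the v3 listing, it may help to see what the Forge Theory helper embedded in both prompts actually does: it is one step of exponential smoothing toward a target, with no timestep parameter (the gap decays by a constant factor e^-k per call, exactly as in the C++ snippet). A minimal Python sketch, assuming the three k presets from the tuning menu; the 0-to-255 motor ramp and settle threshold are illustrative, not from the source:

```python
import math

def apply_forge(current: float, target: float, k: float) -> float:
    """One smoothing step: the gap between current and target shrinks by e^-k."""
    return target + (current - target) * math.exp(-k)

# Drive a motor value from 0 toward 255 under each tuning preset.
for k in (0.3, 0.1, 0.03):  # aggressive, balanced, graceful
    value = 0.0
    steps = 0
    while abs(value - 255.0) > 1.0:
        value = apply_forge(value, 255.0, k)
        steps += 1
    print(f"k={k}: ~{steps} steps to settle")
```

Larger k snaps to the target in fewer steps (combat-ready), smaller k glides in slowly, which is why the integration prompt pins a single k for all `applyForge()` calls.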
#!/usr/bin/env python3
"""
BuddAI Executive v3.0 - Modular Builder
Breaks complex tasks into manageable chunks
Author: James Gilbert
License: MIT
"""
import sys
import json
import sqlite3
from datetime import datetime
from pathlib import Path
import http.client
import re
# Configuration
OLLAMA_HOST = "localhost"
OLLAMA_PORT = 11434
DATA_DIR = Path(__file__).parent / "data"
DB_PATH = DATA_DIR / "conversations.db"
# Models
MODELS = {
"fast": "qwen2.5-coder:1.5b",
"balanced": "qwen2.5-coder:3b"
}
# Complexity triggers - if matched, break down the task
COMPLEX_TRIGGERS = [
"complete", "entire", "full", "build entire", "build complete",
"with ble and", "with servo and", "including", "all of"
]
# Module patterns we can detect
MODULE_PATTERNS = {
"ble": ["bluetooth", "ble", "wireless"],
"servo": ["servo", "flipper", "weapon"],
"motor": ["motor", "drive", "movement", "l298n"],
"safety": ["safety", "timeout", "failsafe", "emergency"],
"battery": ["battery", "voltage", "power monitor"],
"sensor": ["sensor", "distance", "proximity"]
}
# --- Shadow Suggestion Engine ---
class ShadowSuggestionEngine:
"""Proactively suggests modules/settings based on user/project history."""
def __init__(self, db_path):
self.db_path = db_path
def lookup_recent_module_usage(self, module, limit=5):
"""Look up recent usage patterns for a module from repo_index."""
conn = sqlite3.connect(self.db_path)
cursor = conn.cursor()
cursor.execute(
"""
SELECT file_path, content, last_modified FROM repo_index
WHERE function_name LIKE ? OR file_path LIKE ?
ORDER BY last_modified DESC LIMIT ?
""",
(f"%{module}%", f"%{module}%", limit)
)
results = cursor.fetchall()
conn.close()
return results
def suggest_for_module(self, module):
"""Return a proactive suggestion string for a module if pattern detected."""
history = self.lookup_recent_module_usage(module)
if not history:
return None
# Example: For 'motor', look for L298N and PWM frequency
l298n_count = 0
pwm_freqs = []
for _, content, _ in history:
if "L298N" in content or "l298n" in content:
l298n_count += 1
pwm_matches = re.findall(r'PWM_FREQ\s*=\s*(\d+)', content)
pwm_freqs.extend([int(f) for f in pwm_matches])
# Also look for explicit frequency in analogWrite or ledcSetup
freq_matches = re.findall(r'(?:ledcSetup|analogWrite)\s*\([^,]+,\s*[^,]+,\s*(\d+)\)', content)
pwm_freqs.extend([int(f) for f in freq_matches])
if l298n_count >= 2:
freq = max(set(pwm_freqs), key=pwm_freqs.count) if pwm_freqs else 500
return f"I see you usually use the L298N with a {freq}Hz PWM frequency on the ESP32-C3. Should I prep that module?"
return None
def get_proactive_suggestion(self, user_input):
"""
V3.0 Proactive Hook:
1. Identify "Concept" (e.g., 'flipper')
2. Query repo_index for James's most frequent companion modules
3. If 'flipper' often appears with 'safety_timeout', suggest it.
"""
# 1. Identify Concepts
input_lower = user_input.lower()
detected_modules = []
for module, keywords in MODULE_PATTERNS.items():
if any(kw in input_lower for kw in keywords):
detected_modules.append(module)
if not detected_modules:
return None
# 2. Query repo_index for correlations
conn = sqlite3.connect(self.db_path)
cursor = conn.cursor()
suggestions = []
for module in detected_modules:
# Find files containing this module (simple heuristic)
cursor.execute("SELECT content FROM repo_index WHERE content LIKE ? LIMIT 10", (f"%{module}%",))
rows = cursor.fetchall()
if not rows: continue
# Check for companion modules
companions = {}
for (content,) in rows:
content_lower = content.lower()
for other_mod, other_kws in MODULE_PATTERNS.items():
if other_mod != module and other_mod not in detected_modules:
if any(kw in content_lower for kw in other_kws):
companions[other_mod] = companions.get(other_mod, 0) + 1
# 3. Suggest if frequent (>50% correlation in sample)
for other_mod, count in companions.items():
if count >= len(rows) * 0.5:
suggestions.append(f"I noticed '{module}' often appears with '{other_mod}' in your repos. Want to include that?")
conn.close()
return " ".join(list(set(suggestions))) if suggestions else None
def get_all_suggestions(self, user_input, generated_code):
"""Aggregate all proactive suggestions into a list."""
suggestions = []
# 1. Companion Modules
companion = self.get_proactive_suggestion(user_input)
if companion:
suggestions.append(companion)
# 2. Module Settings
input_lower = user_input.lower()
for module, keywords in MODULE_PATTERNS.items():
if any(kw in input_lower for kw in keywords):
s = self.suggest_for_module(module)
if s:
suggestions.append(s)
# 3. Forge Theory Check
if ("motor" in input_lower or "servo" in input_lower) and "applyForge" not in generated_code:
suggestions.append("Apply Forge Theory smoothing to movement?")
# 4. Safety Check (L298N)
if "L298N" in generated_code and "safety" not in generated_code.lower():
suggestions.append("Drive system lacks safety timeout (GilBot_V2 uses 5s failsafe). Add that?")
return suggestions
class BuddAI:
"""Executive with task breakdown"""
def is_search_query(self, message):
"""Check if this is a search query that should query repo_index"""
message_lower = message.lower()
search_triggers = [
"show me", "find", "search for", "list all",
"what functions", "which repos", "do i have",
"where did i", "have i used", "examples of",
"show all", "display"
]
return any(trigger in message_lower for trigger in search_triggers)
def search_repositories(self, query):
"""Search repo_index for relevant functions and code"""
conn = sqlite3.connect(DB_PATH)
cursor = conn.cursor()
cursor.execute("SELECT COUNT(*) FROM repo_index")
count = cursor.fetchone()[0]
print(f"\n🔍 Searching {count} indexed functions...\n")
# Extract keywords from query
keywords = re.findall(r'\b\w{4,}\b', query.lower())
# Add specific search terms
specific_terms = []
if "exponential" in query.lower() or "decay" in query.lower():
specific_terms.append("applyForge")
specific_terms.append("exp(")
if "forge" in query.lower():
specific_terms.append("Forge")
keywords.extend(specific_terms)
# Search function names and content with a parameterized query
# (keywords may contain quotes; never interpolate them into SQL)
search_conditions = []
params = []
for keyword in keywords:
search_conditions.append("function_name LIKE ?")
search_conditions.append("content LIKE ?")
params.extend([f"%{keyword}%", f"%{keyword}%"])
if not search_conditions:
print("❌ No search terms found")
conn.close()
return "No search terms provided."
sql = ("SELECT repo_name, file_path, function_name, content FROM repo_index "
"WHERE " + " OR ".join(search_conditions) + " LIMIT 10")
cursor.execute(sql, params)
results = cursor.fetchall()
conn.close()
if not results:
return f"❌ No functions found matching: {', '.join(keywords)}\n\nTry: /index <path> to index more repositories"
# Format results
output = f"✅ Found {len(results)} matches for: {', '.join(set(keywords))}\n\n"
for i, (repo, file_path, func, content) in enumerate(results, 1):
# Extract relevant snippet
lines = content.split('\n')
snippet_lines = []
for line in lines[:30]: # First 30 lines
if any(kw in line.lower() for kw in keywords):
snippet_lines.append(line)
if len(snippet_lines) >= 10:
break
if not snippet_lines:
snippet_lines = lines[:10]
snippet = '\n'.join(snippet_lines)
output += f"**{i}. {func}()** in {repo}\n"
output += f" 📁 {Path(file_path).name}\n"
output += f" ```cpp\n{snippet}\n ```\n"
output += f" ---\n\n"
return output
def __init__(self):
self.ensure_data_dir()
self.init_database()
self.session_id = self.create_session()
self.context_messages = []
self.shadow_engine = ShadowSuggestionEngine(DB_PATH)
print("🧠 BuddAI Executive v3.0 - Modular Builder")
print("=" * 50)
print(f"Session: {self.session_id}")
print("FAST (5-10s) | BALANCED (15-30s)")
print("Smart task breakdown for complex requests")
print("=" * 50)
print("\nCommands: /fast, /balanced, /index, /scan, /help, exit\n")
def ensure_data_dir(self):
DATA_DIR.mkdir(parents=True, exist_ok=True)
def init_database(self):
conn = sqlite3.connect(DB_PATH)
cursor = conn.cursor()
cursor.execute("""
CREATE TABLE IF NOT EXISTS sessions (
session_id TEXT PRIMARY KEY,
started_at TIMESTAMP,
ended_at TIMESTAMP
)
""")
cursor.execute("""
CREATE TABLE IF NOT EXISTS messages (
id INTEGER PRIMARY KEY AUTOINCREMENT,
session_id TEXT,
role TEXT,
content TEXT,
timestamp TIMESTAMP
)
""")
cursor.execute("""
CREATE TABLE IF NOT EXISTS repo_index (
id INTEGER PRIMARY KEY AUTOINCREMENT,
file_path TEXT,
repo_name TEXT,
function_name TEXT,
content TEXT,
last_modified TIMESTAMP
)
""")
cursor.execute("""
CREATE TABLE IF NOT EXISTS style_preferences (
id INTEGER PRIMARY KEY AUTOINCREMENT,
category TEXT,
preference TEXT,
confidence FLOAT,
extracted_at TIMESTAMP
)
""")
conn.commit()
conn.close()
def create_session(self):
session_id = datetime.now().strftime("%Y%m%d_%H%M%S")
conn = sqlite3.connect(DB_PATH)
cursor = conn.cursor()
cursor.execute(
"INSERT INTO sessions (session_id, started_at) VALUES (?, ?)",
(session_id, datetime.now().isoformat())
)
conn.commit()
conn.close()
return session_id
def end_session(self):
conn = sqlite3.connect(DB_PATH)
cursor = conn.cursor()
cursor.execute(
"UPDATE sessions SET ended_at = ? WHERE session_id = ?",
(datetime.now().isoformat(), self.session_id)
)
conn.commit()
conn.close()
def save_message(self, role, content):
conn = sqlite3.connect(DB_PATH)
cursor = conn.cursor()
cursor.execute(
"INSERT INTO messages (session_id, role, content, timestamp) VALUES (?, ?, ?, ?)",
(self.session_id, role, content, datetime.now().isoformat())
)
conn.commit()
conn.close()
def index_local_repositories(self, root_path):
"""Crawl directories and index .py, .ino, and .cpp files"""
import ast
print(f"\n🔍 Indexing repositories in: {root_path}")
path = Path(root_path)
if not path.exists():
print(f"❌ Path not found: {root_path}")
return
conn = sqlite3.connect(DB_PATH)
cursor = conn.cursor()
count = 0
for file_path in path.rglob('*'):
if file_path.is_file() and file_path.suffix in ['.py', '.ino', '.cpp', '.h']:
try:
with open(file_path, 'r', encoding='utf-8', errors='ignore') as f:
content = f.read()
functions = []
# Python parsing
if file_path.suffix == '.py':
try:
tree = ast.parse(content)
for node in ast.walk(tree):
if isinstance(node, ast.FunctionDef):
functions.append(node.name)
except (SyntaxError, ValueError):
pass
# C++/Arduino parsing
elif file_path.suffix in ['.ino', '.cpp', '.h']:
matches = re.findall(r'\b(?:void|int|bool|float|double|String|char)\s+(\w+)\s*\(', content)
functions.extend(matches)
# Determine repo name
try:
repo_name = file_path.relative_to(path).parts[0]
except ValueError:
repo_name = "unknown"
timestamp = datetime.fromtimestamp(file_path.stat().st_mtime)
for func in functions:
cursor.execute("""
INSERT INTO repo_index (file_path, repo_name, function_name, content, last_modified)
VALUES (?, ?, ?, ?, ?)
""", (str(file_path), repo_name, func, content, timestamp.isoformat()))
count += 1
except Exception:
pass
conn.commit()
conn.close()
print(f"✅ Indexed {count} functions across repositories")
def retrieve_style_context(self, message):
"""Search repo_index for code snippets matching the request"""
# Extract potential keywords (nouns/modules)
keywords = re.findall(r'\b\w{4,}\b', message.lower())
if not keywords:
return ""
conn = sqlite3.connect(DB_PATH)
cursor = conn.cursor()
# Parameterized search over function and repo names
# (keywords may contain quotes; never interpolate them into SQL)
conditions = ["function_name LIKE ?" for _ in keywords] + ["repo_name LIKE ?" for _ in keywords]
params = [f"%{k}%" for k in keywords] + [f"%{k}%" for k in keywords]
query = ("SELECT repo_name, function_name, content FROM repo_index "
"WHERE " + " OR ".join(conditions) + " LIMIT 2")
cursor.execute(query, params)
results = cursor.fetchall()
conn.close()
if not results:
return ""
context_block = "\n[REFERENCE STYLE FROM JAMES'S PAST PROJECTS]\n"
for repo, func, content in results:
# Just grab the first 500 chars of the file to save context window
snippet = content[:500] + "..."
context_block += f"Repo: {repo} | Function: {func}\nCode:\n{snippet}\n---\n"
return context_block
def scan_style_signature(self):
"""V3.0: Analyze repo_index to extract style preferences."""
print("\n🕵️ Scanning repositories for style signature...")
conn = sqlite3.connect(DB_PATH)
cursor = conn.cursor()
# Get a sample of code
cursor.execute("SELECT content FROM repo_index ORDER BY RANDOM() LIMIT 5")
rows = cursor.fetchall()
if not rows:
print("❌ No code indexed. Run /index first.")
conn.close()
return
code_sample = "\n---\n".join([r[0][:1000] for r in rows])
prompt = f"""Analyze this code sample from James's repositories.
Extract 3 distinct coding preferences or patterns.
Format: Category: Preference
Examples:
- Serial: Uses 115200 baud
- Safety: Uses non-blocking millis()
- Pins: Prefers #define over const int
Code Sample:
{code_sample}
"""
print("⚡ Analyzing with BALANCED model...")
summary = self.call_model("balanced", prompt)
# Store in DB
timestamp = datetime.now().isoformat()
lines = summary.split('\n')
for line in lines:
if ':' in line:
parts = line.split(':', 1)
category = parts[0].strip('- *')
pref = parts[1].strip()
cursor.execute(
"INSERT INTO style_preferences (category, preference, confidence, extracted_at) VALUES (?, ?, ?, ?)",
(category, pref, 0.8, timestamp)
)
conn.commit()
conn.close()
print(f"\n✅ Style Signature Updated:\n{summary}\n")
def is_simple_question(self, message):
"""Check if this is a simple question that should use FAST model"""
message_lower = message.lower()
simple_triggers = [
"what is", "what's", "who is", "who's", "when is",
"how do i", "can you explain", "tell me about",
"what are", "where is"
]
# Also check if it's just a question without code keywords
code_keywords = ["generate", "create", "write", "build", "code", "function"]
has_simple_trigger = any(trigger in message_lower for trigger in simple_triggers)
has_code_keyword = any(keyword in message_lower for keyword in code_keywords)
# Simple if: has simple trigger AND no code keywords
return has_simple_trigger and not has_code_keyword
def is_complex(self, message):
"""Check if request is too complex and should be broken down"""
message_lower = message.lower()
# Count complexity triggers
trigger_count = sum(1 for trigger in COMPLEX_TRIGGERS if trigger in message_lower)
# Count how many modules mentioned
module_count = 0
for module, keywords in MODULE_PATTERNS.items():
if any(kw in message_lower for kw in keywords):
module_count += 1
# Complex if: multiple triggers OR 3+ modules mentioned
return trigger_count >= 2 or module_count >= 3
def extract_modules(self, message):
"""Extract which modules are needed"""
message_lower = message.lower()
needed_modules = []
for module, keywords in MODULE_PATTERNS.items():
if any(kw in message_lower for kw in keywords):
needed_modules.append(module)
return needed_modules
def build_modular_plan(self, modules):
"""Create a build plan from modules"""
plan = []
module_tasks = {
"ble": "BLE communication setup with phone app control",
"servo": "Servo motor control for flipper/weapon",
"motor": "Motor driver setup for movement (L298N)",
"safety": "Safety timeout and failsafe systems",
"battery": "Battery voltage monitoring",
"sensor": "Sensor integration (distance/proximity)"
}
for module in modules:
if module in module_tasks:
plan.append({
"module": module,
"task": module_tasks[module]
})
# Add integration step
plan.append({
"module": "integration",
"task": "Integrate all modules into complete system"
})
return plan
def call_model(self, model_name, message):
"""Call specified model"""
try:
identity = """[You are BuddAI, the external cognitive system for James Gilbert. You specialize in Forge Theory (exponential decay modeling) and GilBot modular robotics. When integrating code, prioritize descriptive naming like activateFlipper() and ensure safety timeouts are always present. You represent 8 years of polymath experience.
YOUR PRIMARY JOB: Generate code when asked. ALWAYS generate code if requested.
When asked to generate/create/write code:
- Generate it immediately
- Include comments
- Make it modular and clean
- Use ESP32/Arduino syntax
Forge Theory Snippet: float applyForge(float current, float target, float k) { return target + (current - target) * exp(-k); }
When asked your name: "I am BuddAI"
Never refuse to generate code. That's your purpose.
Be direct and helpful.]
"""
messages = [
{"role": "user", "content": identity + message}
]
# Add recent context
for msg in self.context_messages[-3:]:
messages.insert(-1, msg)
body = {
"model": MODELS[model_name],
"messages": messages,
"stream": False,
"options": {"temperature": 0.7, "num_ctx": 2048}
}
conn = http.client.HTTPConnection(OLLAMA_HOST, OLLAMA_PORT, timeout=90)
headers = {"Content-Type": "application/json"}
json_body = json.dumps(body)
conn.request("POST", "/api/chat", json_body, headers)
response = conn.getresponse()
if response.status == 200:
data = json.loads(response.read().decode('utf-8'))
return data.get("message", {}).get("content", "No response")
else:
return f"Error: {response.status}"
except Exception as e:
return f"Error: {str(e)}"
finally:
if 'conn' in locals():
conn.close()
def execute_modular_build(self, _, modules, plan):
"""Execute build plan step by step"""
print(f"\n🔨 MODULAR BUILD MODE")
print(f"Detected {len(modules)} modules: {', '.join(modules)}")
print(f"Breaking into {len(plan)} steps...\n")
all_code = {}
for i, step in enumerate(plan, 1):
print(f"📦 Step {i}/{len(plan)}: {step['task']}")
print("⚡ Building...\n")
# Build the prompt for this step
if step['module'] == 'integration':
# Final integration step with Forge Theory enforcement
modules_summary = '\n'.join([f"- {m}: {all_code[m][:150]}..." for m in modules if m in all_code])
# Ask James for the 'vibe' of the robot
print("\n⚡ FORGE THEORY TUNING:")
print("1. Aggressive (k=0.3) - High snap, combat ready")
print("2. Balanced (k=0.1) - Standard movement")
print("3. Graceful (k=0.03) - Coasting / Smooth curves")
choice = input("Select Forge Constant [1-3, default 2]: ")
k_val = "0.1"
if choice == "1": k_val = "0.3"
elif choice == "3": k_val = "0.03"
prompt = f"""INTEGRATION TASK: Combine modules into a cohesive GilBot system.
[MODULES]
{modules_summary}
[FORGE PARAMETERS]
Set k = {k_val} for all applyForge() calls.
[REQUIREMENTS]
1. Implement applyForge() math helper.
2. Use k={k_val} to smooth motor and servo transitions.
3. Ensure naming matches James's style: activateFlipper(), setMotors().
"""
else:
# Individual module
prompt = f"Generate ESP32-C3 code for: {step['task']}. Keep it modular with clear comments."
# Call balanced model for each module
response = self.call_model("balanced", prompt)
all_code[step['module']] = response
print(f"{step['module'].upper()} module complete\n")
print("-" * 50 + "\n")
# Compile final response
final = "# COMPLETE GILBOT CONTROLLER - MODULAR BUILD\n\n"
for module, code in all_code.items():
final += f"## {module.upper()} MODULE\n{code}\n\n"
return final
def apply_style_signature(self, generated_code):
"""Refine generated code to match James's naming and safety patterns.
Placeholder in V3.0 - currently returns the code unchanged. Planned checks:
1. James's common function names (e.g., setupMotors vs init_motors)
2. Forge Theory helpers present when motion code is detected
3. A 'Proactive Note' when a common companion module is missing
"""
return generated_code
def chat(self, user_message, force_model=None):
"""Main chat with smart routing and shadow suggestions"""
style_context = self.retrieve_style_context(user_message)
if style_context:
self.context_messages.append({"role": "system", "content": style_context})
self.save_message("user", user_message)
self.context_messages.append({"role": "user", "content": user_message})
if force_model:
model = force_model
print(f"\n⚡ Using {model.upper()} model (forced)...")
response = self.call_model(model, user_message)
elif self.is_complex(user_message):
modules = self.extract_modules(user_message)
plan = self.build_modular_plan(modules)
print("\n" + "=" * 50)
print("🎯 COMPLEX REQUEST DETECTED!")
print(f"Modules needed: {', '.join(modules)}")
print(f"Breaking into {len(plan)} manageable steps")
print("=" * 50)
response = self.execute_modular_build(user_message, modules, plan)
elif self.is_search_query(user_message):
# This is a search query - query the database
response = self.search_repositories(user_message)
elif self.is_simple_question(user_message):
print("\n⚡ Using FAST model (simple question)...")
response = self.call_model("fast", user_message)
else:
print("\n⚖️ Using BALANCED model...")
response = self.call_model("balanced", user_message)
# Apply Style Guard
response = self.apply_style_signature(response)
# Generate Suggestion Bar
suggestions = self.shadow_engine.get_all_suggestions(user_message, response)
if suggestions:
bar = "\n\nPROACTIVE: > " + " ".join([f"{i+1}. {s}" for i, s in enumerate(suggestions)])
response += bar
self.save_message("assistant", response)
self.context_messages.append({"role": "assistant", "content": response})
return response
def run(self):
"""Main loop"""
try:
force_model = None
while True:
user_input = input("\nJames: ").strip()
if not user_input:
continue
if user_input.lower() in ['exit', 'quit']:
print("\n👋 Later!")
self.end_session()
break
if user_input.startswith('/'):
cmd = user_input.lower()
if cmd == '/fast':
force_model = "fast"
print("⚡ Next: FAST model")
continue
elif cmd == '/balanced':
force_model = "balanced"
print("⚖️ Next: BALANCED model")
continue
elif cmd == '/help':
print("\n💡 Commands:")
print("/fast - Use fast model")
print("/balanced - Use balanced model")
print("/index <path> - Index local repositories")
print("/scan - Scan style signature (V3.0)")
print("/help - This message")
print("exit - End session\n")
continue
elif cmd.startswith('/index'):
parts = user_input.split(maxsplit=1)
if len(parts) > 1:
self.index_local_repositories(parts[1])
else:
print("Usage: /index <path_to_repos>")
continue
elif cmd == '/scan':
self.scan_style_signature()
continue
else:
print("\nUnknown command. Type /help")
continue
# Chat
response = self.chat(user_input, force_model)
print(f"\nBuddAI:\n{response}\n")
force_model = None
except KeyboardInterrupt:
print("\n\n👋 Bye!")
self.end_session()
def check_ollama():
try:
conn = http.client.HTTPConnection(OLLAMA_HOST, OLLAMA_PORT, timeout=5)
conn.request("GET", "/api/tags")
response = conn.getresponse()
conn.close()
return response.status == 200
except Exception:
return False
def main():
if not check_ollama():
print("❌ Ollama not running. Start: ollama serve")
sys.exit(1)
buddai = BuddAI()
buddai.run()
if __name__ == "__main__":
main()
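The companion-module heuristic in `ShadowSuggestionEngine.get_proactive_suggestion` can be exercised without SQLite: it suggests a module when it co-occurs with a detected module in at least 50% of the sampled files. A rough standalone sketch of the same rule; the pattern table is trimmed from the source and the sample file contents are made up for illustration:

```python
MODULE_PATTERNS = {
    "servo": ["servo", "flipper", "weapon"],
    "safety": ["safety", "timeout", "failsafe", "emergency"],
    "motor": ["motor", "drive", "movement", "l298n"],
}

def companion_suggestions(detected, file_contents, threshold=0.5):
    """Suggest modules co-occurring with a detected module in >= threshold of files."""
    suggestions = []
    for module in detected:
        # Files mentioning the module name (mirrors the content LIKE lookup)
        hits = [c.lower() for c in file_contents if module in c.lower()]
        if not hits:
            continue
        for other, keywords in MODULE_PATTERNS.items():
            if other == module or other in detected:
                continue
            count = sum(1 for c in hits if any(kw in c for kw in keywords))
            if count >= len(hits) * threshold:
                suggestions.append(f"'{module}' often appears with '{other}'. Include it?")
    return suggestions

# Hypothetical indexed files: flipper code usually ships with a failsafe.
files = [
    "void activateFlipper() { servo.write(90); } // safety timeout armed",
    "servo flipper sketch with failsafe handling",
    "standalone servo sweep demo",
]
print(companion_suggestions(["servo"], files))  # → ["'servo' often appears with 'safety'. Include it?"]
```

The 50% threshold is aggressive for a 10-row sample; raising the `LIMIT 10` sample size or the threshold would trade suggestion frequency for confidence.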