mirror of
https://github.com/JamesTheGiblet/BuddAI.git
synced 2026-01-08 21:58:40 +00:00
2.9 KiB
BuddAI Development Changelog
Version 3.2 - Self-Learning & Optimization System
📊 Phase 1: Data Collection
Implemented comprehensive data logging to track user interactions and code quality.
- Correction Logging: Added `save_correction()` to store original vs. corrected code with user reasoning.
- Compilation Logs: Added `log_compilation_result()` to track hardware-specific compilation success rates.
- Feedback System: Enhanced `record_feedback()` to support comments and trigger failure analysis on negative feedback.
- Database Updates: Added tables for `corrections`, `compilation_log`, `feedback` (enhanced), and `code_rules`.
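A minimal sketch of what the correction-logging path could look like. The schema and the `init_db` helper are assumptions for illustration, not BuddAI's actual definitions; only the `save_correction()` name and the `corrections` table come from the changelog.

```python
import sqlite3

def init_db(conn):
    # Hypothetical schema: columns are assumptions, not BuddAI's actual layout.
    conn.execute("""
        CREATE TABLE IF NOT EXISTS corrections (
            id INTEGER PRIMARY KEY AUTOINCREMENT,
            original_code TEXT NOT NULL,
            corrected_code TEXT NOT NULL,
            reason TEXT,
            created_at TEXT DEFAULT CURRENT_TIMESTAMP
        )
    """)

def save_correction(conn, original_code, corrected_code, reason=None):
    """Store original vs. corrected code together with the user's reasoning."""
    conn.execute(
        "INSERT INTO corrections (original_code, corrected_code, reason) "
        "VALUES (?, ?, ?)",
        (original_code, corrected_code, reason),
    )
    conn.commit()

conn = sqlite3.connect(":memory:")
init_db(conn)
save_correction(conn, "analogWrite(pin, 128);", "ledcWrite(channel, 128);",
                "ESP32 uses the LEDC API for PWM")
count = conn.execute("SELECT COUNT(*) FROM corrections").fetchone()[0]
```

Keeping the raw before/after pair (rather than just the diff) preserves enough context for later pattern extraction and fine-tuning export.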
🔬 Phase 2: Pattern Extraction
Added intelligence to learn from the collected data.
- Smart Learner: Created `SmartLearner` class to diff code and extract patterns (e.g., `analogWrite` -> `ledcWrite`).
- Hardware Profiles: Created `HardwareProfile` class to manage hardware-specific syntax (ESP32 vs. Arduino).
- Rule Storage: Learned patterns are stored in `code_rules` with confidence scores.
- Prompt Injection: `build_enhanced_prompt()` now dynamically injects high-confidence rules into the system prompt.
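The diff-based extraction above can be sketched roughly as follows. The heuristic (pairing changed function calls across removed/added diff lines) and the function name are assumptions about one plausible `SmartLearner` approach, not the actual implementation.

```python
import difflib
import re

def extract_call_patterns(original, corrected):
    """Return (old_call, new_call) pairs for function names that differ
    between removed and added lines of a unified diff."""
    call = re.compile(r"\b([A-Za-z_]\w*)\s*\(")
    removed, added = [], []
    for line in difflib.unified_diff(
            original.splitlines(), corrected.splitlines(), lineterm=""):
        if line.startswith("-") and not line.startswith("---"):
            removed += call.findall(line[1:])   # calls only on removed lines
        elif line.startswith("+") and not line.startswith("+++"):
            added += call.findall(line[1:])     # calls only on added lines
    # Pair them positionally; keep only pairs where the name actually changed.
    return [(old, new) for old, new in zip(removed, added) if old != new]

patterns = extract_call_patterns("analogWrite(ledPin, 128);",
                                 "ledcWrite(ledChannel, 128);")
# patterns -> [("analogWrite", "ledcWrite")]
```

Each extracted pair would then be written to `code_rules` with a confidence score that rises as the same substitution recurs across corrections.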
✅ Phase 3: Validation
Implemented pre-flight checks to ensure code quality before display.
- Code Validator: Created `CodeValidator` class to check for:
  - ESP32 PWM usage (`ledcWrite` enforcement).
  - Blocking delays in motor code.
  - Missing safety timeouts.
- Auto-Fix: The system can now automatically patch critical errors (like incorrect PWM calls) before showing code to the user.
- Hardware Detection: Automatically detects target hardware (ESP32, Arduino, Pico) from user prompts.
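A rough sketch of the validate-then-auto-fix flow described above. The specific rules, severities, and return format are assumptions; only the checks themselves (ESP32 PWM enforcement, blocking delays in motor code) come from the changelog.

```python
import re

def validate_code(code, hardware="esp32"):
    """Return a list of (severity, message) issues found in generated code."""
    issues = []
    if hardware == "esp32" and re.search(r"\banalogWrite\s*\(", code):
        issues.append(("critical",
                       "ESP32 PWM should use ledcWrite(), not analogWrite()"))
    if re.search(r"\bdelay\s*\(", code) and "motor" in code.lower():
        issues.append(("warning", "Blocking delay() found in motor code"))
    return issues

def auto_fix(code, hardware="esp32"):
    """Patch critical PWM errors before the code is shown to the user."""
    if hardware == "esp32":
        code = re.sub(r"\banalogWrite\s*\(", "ledcWrite(", code)
    return code

issues = validate_code("analogWrite(motorPin, 200);\ndelay(500); // ramp motor")
fixed = auto_fix("analogWrite(motorPin, 200);")
```

Note that a textual substitution like this is only a stopgap: `ledcWrite` takes a channel rather than a pin, so a real fixer would also need to rewrite the first argument.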
🔄 Phase 4: Feedback Loop
Established continuous improvement cycles.
- Adaptive Learner: Created `AdaptiveLearner` to analyze session history for implicit corrections ("actually, use X") and preferences.
- Session Analysis: Added `/analyze` command to scan the current session for learned lessons.
- Explicit Teaching: Added `/teach <rule>` command for manual rule insertion.
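Implicit-correction detection could be sketched like this: scan the session transcript for user messages that follow an assistant reply and contain correction cues such as "actually". The cue list and message format are assumptions, not `AdaptiveLearner`'s actual logic.

```python
import re

# Hypothetical trigger phrases; the real cue set is not documented here.
CORRECTION_CUES = re.compile(r"\b(actually|instead|should be|use)\b",
                             re.IGNORECASE)

def find_implicit_corrections(session):
    """Scan (role, text) messages for user replies that correct the
    assistant's previous answer; return (answer, correction) pairs."""
    hits = []
    for i, (role, text) in enumerate(session):
        if role == "user" and i > 0 and session[i - 1][0] == "assistant":
            if CORRECTION_CUES.search(text):
                hits.append((session[i - 1][1], text))
    return hits

session = [
    ("user", "Write PWM code for my ESP32"),
    ("assistant", "analogWrite(pin, 128);"),
    ("user", "Actually, use ledcWrite on ESP32"),
]
hits = find_implicit_corrections(session)
```

Each hit is effectively a free training example: the assistant's answer plus the user's fix, ready to be stored via the same correction-logging path as an explicit `/correct`.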
📈 Metrics & Fine-Tuning
Added tools to measure and cement progress.
- Learning Metrics: Created `LearningMetrics` to calculate accuracy trends and correction rates over 30 days.
- Fine-Tuning Prep: Created `ModelFineTuner` to export corrections into JSONL format for training local LLMs (Qwen).
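The JSONL export could look like the sketch below. The chat-style record schema is an assumption (a common fine-tuning format), not necessarily what `ModelFineTuner` or the Qwen tooling actually expects.

```python
import io
import json

def export_jsonl(corrections, fh):
    """Write each (prompt, corrected_code) pair as one JSON object per line."""
    for prompt, corrected in corrections:
        record = {
            "messages": [
                {"role": "user", "content": prompt},
                {"role": "assistant", "content": corrected},
            ]
        }
        fh.write(json.dumps(record) + "\n")

buf = io.StringIO()
export_jsonl([("Write ESP32 PWM code", "ledcWrite(channel, 128);")], buf)
lines = buf.getvalue().splitlines()
```

The key property of JSONL is one self-contained JSON object per line, which lets training pipelines stream the dataset without parsing the whole file.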
🛠 New CLI Commands
- `/learn`: Extract patterns from stored corrections.
- `/analyze`: Analyze the current session for implicit feedback.
- `/correct <reason>`: Mark the previous response as wrong and save a correction.
- `/good`: Mark the previous response as correct.
- `/teach <rule>`: Explicitly teach a coding rule.
- `/validate`: Run validation checks on the last response.
- `/rules`: Display currently learned rules.
- `/metrics`: Show accuracy and improvement stats.
- `/train`: Export training data for fine-tuning.