🎯 LoRA Fine-Tuning Demo
Technical demonstration of LoRA adapters for domain-specific chatbots
Training domain: Mental health research literature
🎯 Technical Architecture
Demonstrating LoRA fine-tuning for domain specialization
Mental health is the example subject matter only; the same approach works for any domain.
Training Data (Example Domain)
425 mental health research papers from PubMed
Subject matter chosen to demonstrate domain specialization
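A hedged sketch of how such a corpus might be flattened into instruction-style training records; the file names, field names, and prompt template here are illustrative assumptions, not the project's actual pipeline:

```python
import json

def paper_to_example(title: str, abstract: str) -> dict:
    # One instruction-style record per paper abstract (assumed format).
    return {
        "prompt": f"Summarize the key findings of: {title}",
        "completion": abstract,
    }

# Hypothetical input: a JSON list of {"title": ..., "abstract": ...} papers.
with open("pubmed_papers.json") as src, open("train.jsonl", "w") as dst:
    for paper in json.load(src):
        record = paper_to_example(paper["title"], paper["abstract"])
        dst.write(json.dumps(record) + "\n")
```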
⭐ LoRA Fine-Tuning (Key Technique)
Base Model: Mistral 7B (open-source LLM)
Method: LoRA adapters trained on mental health papers
Hardware: RTX 5070 Ti (16GB VRAM) with 4-bit quantization
💡 LoRA enables domain specialization without retraining the entire model: only ~0.1% of parameters are trained!
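A minimal sketch of this setup using the Hugging Face transformers, peft, and bitsandbytes stack; the model ID, LoRA rank, and target modules are illustrative assumptions rather than the project's exact training configuration:

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig
from peft import LoraConfig, get_peft_model, prepare_model_for_kbit_training

MODEL_NAME = "mistralai/Mistral-7B-v0.1"  # assumed base checkpoint

# 4-bit quantization keeps the 7B base model within a 16GB VRAM budget.
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_compute_dtype=torch.bfloat16,
)

tokenizer = AutoTokenizer.from_pretrained(MODEL_NAME)
model = AutoModelForCausalLM.from_pretrained(
    MODEL_NAME, quantization_config=bnb_config, device_map="auto"
)
model = prepare_model_for_kbit_training(model)

# LoRA adapters: small low-rank matrices injected into attention projections;
# the frozen base weights are never updated.
lora_config = LoraConfig(
    r=8,
    lora_alpha=16,
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj"],
    lora_dropout=0.05,
    task_type="CAUSAL_LM",
)
model = get_peft_model(model, lora_config)

# Reports the tiny trainable fraction relative to total parameters.
model.print_trainable_parameters()
```

With rank 8 on the four attention projections of a 7B model, the adapter weights come to roughly 7M trainable parameters, which is where the ~0.1% figure comes from.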
Backend
Flask API with crisis detection and medical disclaimers
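A minimal sketch of what that API shape might look like; the route, keyword list, and generate() stub are assumptions for illustration, not the project's actual implementation:

```python
from flask import Flask, jsonify, request

app = Flask(__name__)

# Illustrative keyword screen; a real deployment would use a broader list
# or a classifier.
CRISIS_KEYWORDS = {"suicide", "self-harm", "kill myself"}
CRISIS_MESSAGE = (
    "If you are in crisis, please contact a local emergency service "
    "or a crisis hotline such as 988 (US)."
)
DISCLAIMER = "This demo is not medical advice; consult a qualified professional."

def detect_crisis(text: str) -> bool:
    """Keyword screen run before the message ever reaches the model."""
    lowered = text.lower()
    return any(keyword in lowered for keyword in CRISIS_KEYWORDS)

def generate(prompt: str) -> str:
    # Placeholder for inference with the LoRA-adapted Mistral model.
    return "(model response)"

@app.route("/chat", methods=["POST"])
def chat():
    message = request.get_json(force=True).get("message", "")
    if detect_crisis(message):
        return jsonify({"reply": CRISIS_MESSAGE, "disclaimer": DISCLAIMER})
    return jsonify({"reply": generate(message), "disclaimer": DISCLAIMER})
```

A client POSTs {"message": ...} to /chat and always receives the disclaimer alongside the reply; crisis messages are intercepted before reaching the model.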
Frontend
SvelteKit with real-time chat and responsive design
Deployment
Linux desktop with an ngrok tunnel providing public access to the locally hosted demo
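If the tunnel were opened from Python rather than the ngrok CLI, the pyngrok wrapper would look roughly like this (an assumption; the project may invoke ngrok directly):

```python
from pyngrok import ngrok

# Forward a public ngrok URL to the local Flask server (default port 5000).
tunnel = ngrok.connect(5000)
print(f"Public URL: {tunnel.public_url}")
```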