This project integrates emotion detection, voice-to-text, AI processing, and text-to-voice capabilities into a web-based teaching assistant system. Powered by FastAPI and LLaMA 3.1 3B, it bridges the gap between human emotions and AI responses, creating a real-time emotion-driven interaction experience.
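Below is a minimal sketch of how the FastAPI backend could expose an emotion-aware chat endpoint; the route path, request fields, and generate_reply() helper are hypothetical and only illustrate the request/response flow, not the project's actual API.

```python
# Hypothetical sketch of an emotion-aware FastAPI endpoint for this system.
from fastapi import FastAPI
from pydantic import BaseModel

app = FastAPI()

class AskRequest(BaseModel):
    text: str       # transcribed user speech
    emotion: str    # dominant emotion detected from the webcam feed

class AskResponse(BaseModel):
    reply: str

def generate_reply(text: str, emotion: str) -> str:
    # Placeholder: the real system would prompt the LLaMA model with both
    # the user's question and their detected emotion.
    return f"(responding to a {emotion} user) {text}"

@app.post("/ask", response_model=AskResponse)
def ask(req: AskRequest) -> AskResponse:
    return AskResponse(reply=generate_reply(req.text, req.emotion))
```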
Features
Emotion Detection: Detects the user's facial emotions in real time using DeepFace and OpenCV (see the sketch after this list)
Voice-to-Text: Converts user's speech to text for natural language input
AI Processing: Processes user queries with emotion-aware AI responses
Image Search: Finds relevant images based on contextual prompts
Text-to-Voice: Converts AI responses to speech with emotion-appropriate voice synthesis
Web Interface: Modern UI built with HTML, TailwindCSS, and JavaScript
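The following is a minimal sketch of the real-time emotion detection step, assuming the DeepFace and OpenCV packages named above; the function name and webcam-capture flow are illustrative rather than the project's actual module layout.

```python
# Illustrative sketch: grab one webcam frame and classify its dominant emotion.
import cv2
from deepface import DeepFace

def detect_emotion_from_webcam() -> str:
    """Capture a frame from the default webcam and return the dominant emotion."""
    cap = cv2.VideoCapture(0)
    try:
        ok, frame = cap.read()
        if not ok:
            raise RuntimeError("Could not read a frame from the webcam")
        # enforce_detection=False keeps the call from raising when no face is found
        results = DeepFace.analyze(frame, actions=["emotion"], enforce_detection=False)
        # Recent DeepFace versions return a list of per-face result dicts
        return results[0]["dominant_emotion"]
    finally:
        cap.release()

if __name__ == "__main__":
    print(detect_emotion_from_webcam())
```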
Detailed Docs: AI Teaching Assistant System | 0xArchit Projects Documentation
Built with