Building Hanashi AI: Your AI Language Learning Partner

[Screenshot: Hanashi AI interface]

A mobile-first language learning platform that acts as your personal conversation partner.

The Vision

Language learning should be interactive, personalized, and focused on real conversation. Hanashi AI pairs the benefits of a native-speaking conversation partner with AI-powered learning tools to create an immersive learning environment.

Technical Architecture

Mobile App

  • React Native for cross-platform development
  • Zustand for state management
  • React Navigation for seamless routing
  • Custom hooks for real-time corrections
  • WebSocket integration for live chat

Backend

  • Node.js with Express on a VPS
  • MongoDB for flexible data storage
  • Redis for caching and session management
  • WebSocket server for real-time communication
  • Multiple AI models for different language tasks

Key Features

Real-Time Error Correction

The app analyzes user messages in real time, providing immediate feedback on grammar, word choice, and natural expression. Corrections appear with explanations, helping users understand their mistakes and learn from them.
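To make this concrete, here is a minimal sketch of how a correction payload might be applied to a message on the client. The `Correction` shape and `applyCorrections` helper are illustrative assumptions, not the app's actual schema:

```typescript
// Hypothetical correction payload; the real schema may carry more metadata
// (positions, severity, rule IDs) than this sketch does.
interface Correction {
  original: string;    // the erroneous fragment
  suggestion: string;  // the corrected form
  explanation: string; // why the change was made, shown in the chat UI
}

// Apply each correction to the message text, producing the corrected
// sentence plus the explanations displayed alongside it.
function applyCorrections(message: string, corrections: Correction[]) {
  let corrected = message;
  for (const c of corrections) {
    corrected = corrected.replace(c.original, c.suggestion);
  }
  return { corrected, notes: corrections.map((c) => c.explanation) };
}
```

Keeping this step a pure function makes it trivial to unit-test and to re-run when edited messages are re-corrected.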

Translation & Transliteration

Seamless translation between languages and automatic transliteration for languages like Japanese (romaji ↔ kana/kanji) help users bridge the gap between different writing systems.
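The transliteration step can be sketched as a greedy longest-match scan over a romaji-to-kana table. The table below covers only a handful of syllables for illustration; a real transliterator needs the full syllabary plus rules for sokuon, long vowels, and katakana:

```typescript
// Tiny romaji → hiragana table for illustration only.
const KANA: Record<string, string> = {
  shi: "し", chi: "ち", tsu: "つ",
  ka: "か", sa: "さ", su: "す", ta: "た", na: "な", ni: "に",
  ha: "は", ma: "ま", ra: "ら", wa: "わ",
  a: "あ", i: "い", u: "う", e: "え", o: "お", n: "ん",
};

// Greedy longest-match scan: try 3-letter syllables first, then 2, then 1,
// so "shi" is consumed as one syllable rather than "s" + "hi".
function romajiToKana(input: string): string {
  let out = "";
  let i = 0;
  while (i < input.length) {
    const match = [3, 2, 1]
      .map((len) => input.slice(i, i + len))
      .find((chunk) => chunk in KANA);
    if (!match) throw new Error(`unmapped romaji at index ${i}`);
    out += KANA[match];
    i += match.length;
  }
  return out;
}
```

For example, `romajiToKana("sakana")` yields さかな, and `romajiToKana("sushi")` yields すし.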

Intelligent SRS Flashcards

The system automatically generates personalized flashcards from your conversations. Each correction and new vocabulary item is transformed into a review card, with spaced repetition optimized for your learning patterns.
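The core of the scheduling can be sketched with a classic SM-2-style update (a simplification: the field names are hypothetical, and the per-user optimization mentioned above would sit on top of a base rule like this):

```typescript
interface CardState {
  repetitions: number;  // consecutive successful reviews
  intervalDays: number; // days until the next review
  easeFactor: number;   // starts at 2.5 in SM-2
}

// One SM-2 review step: quality is 0 (total blackout) … 5 (perfect recall).
function review(card: CardState, quality: number): CardState {
  if (quality < 3) {
    // Failed recall: restart the repetition sequence, keep the ease factor.
    return { ...card, repetitions: 0, intervalDays: 1 };
  }
  const intervalDays =
    card.repetitions === 0 ? 1 :
    card.repetitions === 1 ? 6 :
    Math.round(card.intervalDays * card.easeFactor);
  // Ease factor drifts up on easy recalls, down on hard ones, floored at 1.3.
  const easeFactor = Math.max(
    1.3,
    card.easeFactor + (0.1 - (5 - quality) * (0.08 + (5 - quality) * 0.02)),
  );
  return { repetitions: card.repetitions + 1, intervalDays, easeFactor };
}
```

Three perfect reviews of a fresh card schedule it at 1, 6, and then roughly 16 days out; a failed review resets it to the next day.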

AI Grammar & Vocabulary Notes

After each conversation, the AI generates comprehensive notes explaining the grammar patterns and vocabulary used, providing context and usage examples from your actual conversation.

Technical Challenges

Real-Time Processing

Delivering corrections in real time without disrupting the flow of conversation was the central challenge. We addressed it by:

  • WebSockets for instant communication
  • Debounced correction requests
  • Parallel processing of independent AI tasks
  • Caching of common corrections
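The debouncing idea can be sketched with a plain trailing-edge utility (a sketch only; the app's custom hook presumably wraps logic like this around the WebSocket send):

```typescript
// Trailing-edge debounce: the wrapped function runs only after the caller
// has been quiet for `ms` milliseconds, so we request corrections for
// complete phrases instead of firing the AI on every keystroke.
function debounce<Args extends unknown[]>(
  fn: (...args: Args) => void,
  ms: number,
): (...args: Args) => void {
  let timer: ReturnType<typeof setTimeout> | undefined;
  return (...args: Args) => {
    if (timer !== undefined) clearTimeout(timer);
    timer = setTimeout(() => fn(...args), ms);
  };
}

// Usage sketch: only the final draft is sent for correction.
const requestCorrection = debounce((text: string) => {
  console.log(`correct: ${text}`); // stand-in for the WebSocket send
}, 400);
```

Tuning the delay is a trade-off: too short and partial words get corrected, too long and feedback stops feeling real-time.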

Multi-Language Support

Supporting multiple languages required careful consideration:

  • Language-specific tokenization
  • Custom correction rules for each language
  • Flexible data structures for different writing systems
  • Optimized language model selection
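Model selection can be sketched as a routing table keyed by task and language. The model names and tiers below are placeholders, not the actual models we use; the point is that cheap, frequent tasks get fast models while heavyweight tasks get larger ones:

```typescript
type Task = "correction" | "translation" | "notes";

// Hypothetical routing table: per-task default model, with optional
// per-language overrides (e.g. a model tuned for Japanese corrections).
const MODEL_ROUTES: Record<
  Task,
  { default: string; overrides?: Record<string, string> }
> = {
  correction:  { default: "fast-model", overrides: { ja: "ja-tuned-model" } },
  translation: { default: "general-model" },
  notes:       { default: "large-model" },
};

function selectModel(task: Task, language: string): string {
  const route = MODEL_ROUTES[task];
  return route.overrides?.[language] ?? route.default;
}
```

Centralizing the routing in one table makes it easy to A/B-test a new model for a single language without touching the calling code.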

Performance Optimizations

  • Lazy loading of conversation history
  • Efficient caching of AI responses
  • Background processing of SRS calculations
  • Optimized image and audio handling
  • Selective real-time updates
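The response-caching idea can be sketched as a TTL cache. This in-memory version is a stand-in for the Redis layer (the injectable `now` clock is an assumption added so expiry is testable without waiting):

```typescript
// Minimal TTL cache for AI responses: entries expire after `ttlMs` and are
// evicted lazily on read.
class TtlCache<V> {
  private store = new Map<string, { value: V; expiresAt: number }>();

  constructor(
    private ttlMs: number,
    private now: () => number = Date.now,
  ) {}

  get(key: string): V | undefined {
    const entry = this.store.get(key);
    if (!entry) return undefined;
    if (this.now() >= entry.expiresAt) {
      this.store.delete(key); // lazily evict expired entries
      return undefined;
    }
    return entry.value;
  }

  set(key: string, value: V): void {
    this.store.set(key, { value, expiresAt: this.now() + this.ttlMs });
  }
}
```

Keying on a hash of the prompt (language, task, and text) means common beginner mistakes hit the cache instead of the model.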

Looking Forward

Future development plans include:

  • Voice chat capabilities
  • Pronunciation feedback
  • Community features for language exchange
  • Expanded language support
  • Enhanced offline capabilities