Project
Virtual Reality
Game Design
Role
Individual
(with support from Keio Univ. graduate students)
Tools
Unity 3D
Duration
4 months
Project Overview
This project presents an immersive VR neuroanatomy learning system that combines gamification with AI augmentation to make medical education more interactive and memorable.
Learners can explore a life-sized 3D brain, interact with its regions, and experience dynamic visualizations, AI-generated illustrations, and simulated case studies. The project emphasizes spatial memory, contextual understanding, and active engagement.
Background
Neuroanatomy is widely recognized as one of the most challenging subjects in medical education. Students must not only memorize over 140 distinct brain structures but also understand their intricate 3D spatial and functional relationships. Studies show that around 70% of medical students experience significant difficulty when learning neuroanatomy, often developing “neurophobia”—anxiety caused by cognitive overload. Traditional resources like 2D atlases and slices fail to convey spatial depth, making it difficult to construct accurate mental models. As a result, learners struggle to retain and apply anatomical knowledge beyond short-term recall.
Problem
Neuroanatomy education still suffers from three major challenges:
Spatial comprehension gaps – learners often misjudge spatial relations, with reported localization error rates of 40–60% in 3D structure identification.
Fragmented knowledge and weak retention – students struggle to link structure, function, and pathology, achieving less than 30% long-term retention despite good short-term performance.
Low engagement and passive learning – more than half of surveyed students describe lectures and atlases as “monotonous and uninteractive,” lacking feedback and experiential depth.
Approach
The project adopts an “Immersion–Interaction–Reinforcement” framework to transform neuroanatomy learning from passive memorization into active exploration. Grounded in cognitive learning theory, it uses a design-based research methodology that integrates spatial visualization, feedback-driven interaction, and semantic reinforcement.
Spatial understanding through an anatomically precise 3D brain model that supports zooming, rotation, and slicing.
Semantic linkage connecting each region’s structure, function, and network to build integrated conceptual maps.
Adaptive feedback that adjusts guidance and challenge levels based on learner performance data (see the sketch after this list).
Multimodal reinforcement using visual, verbal, and interactive cues to strengthen both perceptual and conceptual memory.
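As a rough illustration of the adaptive-feedback principle, the sketch below scales hint timing with a learner's recent error rate. This is a minimal Unity C# sketch; the class name, window size, and timing thresholds are illustrative assumptions, not the project's actual tuning.

```csharp
// Minimal sketch of adaptive feedback: hint timing scales with the learner's
// recent error rate. Class name, window size, and thresholds are assumptions.
using System.Collections.Generic;
using UnityEngine;

public class AdaptiveFeedback
{
    private readonly Queue<bool> recentAttempts = new Queue<bool>();
    private const int WindowSize = 10;   // sliding window of recent answers

    public void RecordAttempt(bool correct)
    {
        recentAttempts.Enqueue(correct);
        if (recentAttempts.Count > WindowSize)
            recentAttempts.Dequeue();
    }

    // Fraction of recent attempts that were wrong (0 = all correct).
    public float ErrorRate()
    {
        if (recentAttempts.Count == 0) return 0f;
        int errors = 0;
        foreach (bool correct in recentAttempts)
            if (!correct) errors++;
        return (float)errors / recentAttempts.Count;
    }

    // Struggling learners see hints sooner; accurate learners get more time.
    public float SecondsUntilHint()
    {
        return Mathf.Lerp(20f, 5f, ErrorRate());   // 20 s at 0% error, 5 s at 100%
    }
}
```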
The project followed a prototype-based experimental design, integrating system development, multimodal AI enhancement, and user evaluation.
Development: Built in Unity 2022.3 for Meta Quest 3, using a validated 3D brain model with 141 mirrored structures and a curated CSV neuroscience database reviewed by experts (a loading sketch appears after this list).
AI Integration: Employed GPT-5 for case generation and tutoring, Sora for visual mnemonics, and a 3D avatar assistant for contextual interaction.
Interaction Design: Two modes — Study Mode for exploration and AI visualization; Game Mode for timed spatial puzzles with adaptive hints.
User Study: Conducted a pilot test with 7 medical students, combining quantitative metrics (accuracy, task time) with usability feedback (SUS survey). Six of seven participants reported improved spatial understanding, suggesting good usability and engagement.
Next Steps: Future controlled studies will measure knowledge retention, spatial localization accuracy, and learning efficiency, combining quantitative and qualitative methods for a full pedagogical evaluation.
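To illustrate how the curated CSV database might feed the application, here is a hedged Unity C# sketch that parses region rows into a lookup table at startup. The column layout (name, function, network, disorders) and all identifiers are assumptions for illustration; the real expert-reviewed schema may differ.

```csharp
// Hedged sketch: load the curated CSV region database into a lookup table.
// Column layout and identifiers are assumptions; the actual schema may differ.
using System.Collections.Generic;
using UnityEngine;

public struct BrainRegion
{
    public string Name;
    public string Function;
    public string Network;
    public string Disorders;
}

public class RegionDatabase : MonoBehaviour
{
    [SerializeField] private TextAsset regionCsv;   // CSV assigned in the Inspector
    public readonly Dictionary<string, BrainRegion> Regions = new Dictionary<string, BrainRegion>();

    private void Awake()
    {
        string[] rows = regionCsv.text.Split('\n');
        // Skip the header row; fields are assumed comma-free for simplicity.
        for (int i = 1; i < rows.Length; i++)
        {
            string[] cols = rows[i].Trim().Split(',');
            if (cols.Length < 4) continue;
            var region = new BrainRegion
            {
                Name = cols[0],
                Function = cols[1],
                Network = cols[2],
                Disorders = cols[3]
            };
            Regions[region.Name] = region;
        }
    }
}
```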
Interactions & Mechanics
Study Mode
Users explore the 3D brain and click regions to access detailed information: functions, Brodmann areas, related pathways, behaviors, and disorders.
Selecting a network (e.g., the Default Mode Network) highlights all associated regions for systematic visualization (see the selection sketch below).
AI generates visual mnemonics, animated demonstrations, and case simulations.
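A minimal sketch of the Study Mode selection flow, assuming a ray-based pointer and a per-region metadata component; RegionSelector and RegionInfo are hypothetical names, and the mouse click stands in for a VR trigger press.

```csharp
// Sketch of Study Mode selection: a ray from the pointer picks a region
// collider, then every region in the same network is highlighted.
using UnityEngine;

// Per-region metadata attached to each structure in the model (assumed).
public class RegionInfo : MonoBehaviour
{
    public string Network;   // e.g. "Default Mode Network"
}

public class RegionSelector : MonoBehaviour
{
    [SerializeField] private Transform pointer;      // controller or camera transform
    [SerializeField] private Material highlightMat;

    private void Update()
    {
        if (!Input.GetMouseButtonDown(0)) return;    // stand-in for a trigger press
        var ray = new Ray(pointer.position, pointer.forward);
        if (Physics.Raycast(ray, out RaycastHit hit, 5f))
        {
            var info = hit.collider.GetComponent<RegionInfo>();
            if (info != null) HighlightNetwork(info.Network);
        }
    }

    // Swap in the highlight material on every member of the selected network.
    // (Restoring original materials on deselect is omitted for brevity.)
    private void HighlightNetwork(string network)
    {
        foreach (var region in FindObjectsOfType<RegionInfo>())
        {
            if (region.Network == network)
                region.GetComponent<Renderer>().material = highlightMat;
        }
    }
}
```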
Game Mode
A 3D brain puzzle challenges users to reassemble brain structures correctly within a time limit.
Difficulty can be set by structure, function, or network.
Contextual hints appear dynamically to reinforce anatomical learning (a placement sketch follows this list).
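The placement check at the heart of the puzzle could look like the following sketch: a released structure snaps home when it is close enough to its anatomical target. The distance threshold and member names are illustrative assumptions, not the project's actual values.

```csharp
// Sketch of the Game Mode placement check: a released structure snaps home
// when it is within a small distance of its anatomical target.
using UnityEngine;

public class PuzzlePiece : MonoBehaviour
{
    [SerializeField] private Transform target;            // correct anatomical pose
    [SerializeField] private float snapDistance = 0.03f;  // metres (assumed)

    public bool Placed { get; private set; }

    // Wire this to the grab-release event of the VR interaction toolkit in use.
    public void OnReleased()
    {
        if (Placed) return;
        if (Vector3.Distance(transform.position, target.position) < snapDistance)
        {
            transform.SetPositionAndRotation(target.position, target.rotation);
            Placed = true;   // a timer/score manager could be notified here
        }
    }
}
```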
AI Integration
A conversational assistant answers questions in real time (a request sketch follows this list).
Case-based reasoning introduces diseases linked to selected regions.
Etymology-based illustrations connect terminology to visual memory (e.g., “hippocampus” → seahorse).
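As a sketch of the assistant round-trip, the code below POSTs a question as JSON to a backend that wraps the language model and hands the reply to the avatar. The endpoint URL and payload shape are hypothetical; only standard UnityWebRequest calls are used.

```csharp
// Hedged sketch of the conversational-assistant round-trip. The endpoint URL
// and payload shape are hypothetical, not the project's actual backend.
using System.Collections;
using UnityEngine;
using UnityEngine.Networking;

public class TutorClient : MonoBehaviour
{
    private const string Endpoint = "https://example.com/api/tutor";   // hypothetical

    [System.Serializable]
    private class TutorRequest { public string question; }

    public IEnumerator Ask(string question, System.Action<string> onReply)
    {
        string payload = JsonUtility.ToJson(new TutorRequest { question = question });
        using (var request = new UnityWebRequest(Endpoint, "POST"))
        {
            request.uploadHandler = new UploadHandlerRaw(System.Text.Encoding.UTF8.GetBytes(payload));
            request.downloadHandler = new DownloadHandlerBuffer();
            request.SetRequestHeader("Content-Type", "application/json");
            yield return request.SendWebRequest();
            if (request.result == UnityWebRequest.Result.Success)
                onReply(request.downloadHandler.text);   // avatar displays the reply
        }
    }
}
```

A caller would start this as a coroutine, e.g. StartCoroutine(tutorClient.Ask("What does the hippocampus do?", ShowReply)), keeping the VR frame loop responsive while the request is in flight.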