Slide 1
Who moved my cheese? Automatic Annotation of Rodent Behaviors with Convolutional Neural Networks
Zhongzheng Ren, Adriana Noronha, Annie Vogel Ciernia, Yong Jae Lee
Slide 2
Rodent Behavioral Research
Studying rodent behaviors is key to understanding their memory patterns.
[Video frame: exploring left object]
Slide 3
Rodent Behavioral Research
[Video frame: exploring left circle object]
Neuroscience researchers label these videos by hand, and existing automatic methods are unreliable.
Slide 4
Previous Work
- Hand-crafted pipeline systems [Burgos-Artizzu et al. 2012, Dollar et al. 2005, Lorbach et al. 2015, …]: do not generalize well
- Object-tracking systems [Hong et al. 2015, Giancardo et al. 2013, Noldus et al. 2011, …]: prone to drifting
[Figure: drifting example from Any-Maze (commercial software)]
Slide 5
Previous Work
- Hand-crafted pipeline systems [Burgos-Artizzu et al. 2012, Dollar et al. 2005, Giancardo et al. 2013, Lorbach et al. 2015, …]: hard to generalize when experimental settings change
- Object-tracking systems: prone to drifting
Our idea: treat annotation as a per-frame classification problem and take advantage of powerful CNN models.
[Figure: drifting example from Any-Maze (commercial software)]
Slide 6
Key Contributions
- Our CNN-based classification approach outperforms tracking-based methods.
- Dataset, model, and UI are released for both the computer vision and neuroscience communities.
Slide 7
Dataset and UI
Rodent Memory (RM) video dataset: 80+ videos with frame-level action labels collected using our UI.
Our UI yields more accurate annotations in less time:

        GT avg. length   Avg. len [Ciernia 2013]   Acc.
NOR     13.5             4.2                       68.7%
OLM     17.3             4.3                       63.6%
Slide 8
Approach
We leverage pre-trained convolutional neural networks (CNNs) for frame-level action classification:
- AlexNet (pre-trained on ImageNet) → action category
- C3D (pre-trained on Sports-1M) → action category
Slide 9
Results
Train/test split: 60/20 videos; 5 action categories.
We significantly outperform Any-Maze, a widely used commercial software that tracks rodent body keypoints.

Per-frame Classification Accuracy (%)
        Any-Maze   AlexNet   C3D
NOR     78.34      93.17     89.30
OLM     74.25      95.24     91.98
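The per-frame accuracy reported in the table is simply the fraction of frames whose predicted action label matches the ground-truth label. A minimal sketch (the label names below are illustrative, not the paper's exact category list):

```python
# Per-frame classification accuracy: fraction of frames whose
# predicted action label matches the ground-truth label.
# Label names here are illustrative assumptions.

def per_frame_accuracy(predicted, ground_truth):
    """Return the fraction of frames where prediction matches GT."""
    if len(predicted) != len(ground_truth):
        raise ValueError("prediction and ground truth must cover the same frames")
    correct = sum(p == g for p, g in zip(predicted, ground_truth))
    return correct / len(ground_truth)

pred = ["nothing", "left circle", "left circle", "square"]
gt   = ["nothing", "left circle", "right circle", "square"]
print(per_frame_accuracy(pred, gt))  # 0.75
```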
Slide 10
Qualitative Results
[Frames with predicted labels: nothing, left circle, square]
Slide 11
Qualitative Results
[Frames with predicted labels: right circle, square]
Slide 12
Applied to Neuroscience Experiments
T1, T2: exploration times of the two objects. Object preference is reflected by the Discrimination Index: DI = (T1 - T2) / (T1 + T2)

Correlation coefficient with ground truth:
Annotator   T1     T2     DI
Any-Maze    0.67   0.659  0.599
Ours        0.79   0.952  0.845
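Given per-frame labels, the quantities above follow directly: an object's exploration time is the number of frames carrying its label divided by the frame rate, and DI is computed from the two times. A minimal sketch (label names and the 30 fps frame rate are illustrative assumptions):

```python
# Sketch: from per-frame action labels to the Discrimination Index
# DI = (T1 - T2) / (T1 + T2), as defined on the slide.
# Label names and the 30 fps frame rate are illustrative assumptions.

def exploration_time(labels, target, fps=30.0):
    """Seconds spent exploring `target`, counted from per-frame labels."""
    return sum(1 for lab in labels if lab == target) / fps

def discrimination_index(t1, t2):
    """DI in [-1, 1]: +1 means only object 1 explored, -1 only object 2."""
    return (t1 - t2) / (t1 + t2)

# 90 frames on object 1, 30 frames on object 2, at 30 fps:
labels = ["left circle"] * 90 + ["nothing"] * 60 + ["right circle"] * 30
t1 = exploration_time(labels, "left circle")   # 3.0 seconds
t2 = exploration_time(labels, "right circle")  # 1.0 second
print(discrimination_index(t1, t2))  # 0.5
```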
Slide 13
Please visit our poster!
Thank you!