BRAIN Alliance Research Team Annual Progress Report (Jul. 2016 – Feb. 2017)
BRAIN Research Sub-project Title: Developing Tools for Visual Interactive Data and Pattern Analysis
Prof. Ling Guan (Ryerson University)
Prof. Jimmy Huang (York University)
Outline
- Progress Overview
- HQP Training
- Selected Work and Publications
Progress Overview
Four objectives:
Objective 1: Develop interactive user interfaces that allow the user to interact with the process of finding contrast patterns.
Objective 2: Develop interactive user interfaces that allow the user to explore the top-k patterns returned by a pattern mining process and identify the most interesting ones.
Objective 3: Develop real-time monitoring and control of facility networks.
Objective 4: Apply the technology developed to emotion recognition, intelligent monitoring, and a global immersive collaborative learning environment.
[Timeline chart: progress of Objectives 1–4 across Jul. 2016 – Mar. 2017]
HQP Training
HQP training in Year 2016

Highly Qualified Personnel                            Number
Researchers (including the Principal Investigator)    3
Post-Doctoral Fellows                                 2
Doctoral Students                                     1
Master's Students
Undergraduate Students or Equivalent
Sr. Research Associate
Total                                                 9
Publications
Publications in Year 2016

Type                                          Number
Published works                               3 Journal Papers + 6 Conference Papers
Accepted but unpublished works

Citations                                     Number
Total citations received for publication     5
Selected Work

We proposed a human gesture recognition method using Bag of Angles features for natural HCI in a CAVE environment.
N. El Din Elmadany, Y. He, and L. Guan, “Human Gesture Recognition via Bag of Angles for 3D Virtual City Planning in CAVE Environment,” in Proc. IEEE International Workshop on Multimedia Signal Processing (MMSP), pp. 1-5, Montreal, Canada, Sep. 2016.

We proposed a multiview discriminative canonical correlation method with application to human action recognition.
N. El Madany, Y. He, and L. Guan, “Human Action Recognition via Multiview Discriminative Canonical Correlation,” in Proc. IEEE Int. Conf. on Image Processing, Phoenix, USA, Sep. 2016.

We proposed a new local depth map feature describing local spatiotemporal details of human motion, classified via collaborative representation with regularized least squares (a sketch of this classifier follows below).
C. Liang, E. Chen, L. Qi, and L. Guan, “Improving Action Recognition Using Collaborative Representation of Local Depth Map Feature,” IEEE Signal Processing Letters, vol. 23, no. 9, 2016.
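For reference, here is a minimal sketch of collaborative representation classification with regularized least squares, the classifier named in the last item above. It assumes training features are stacked as columns of a dictionary X with one label per column; this is the generic CRC-RLS closed form and omits the local depth map feature extraction, so it is an illustration rather than the paper's exact pipeline.

import numpy as np

def crc_rls_classify(X, labels, y, lam=0.01):
    """Collaborative representation classification with regularized least squares.

    X: (d, n) dictionary whose columns are training feature vectors.
    labels: (n,) class label of each column of X.
    y: (d,) test feature vector.
    lam: ridge regularization weight (illustrative default).
    """
    n = X.shape[1]
    # Closed-form coding: alpha = (X^T X + lam * I)^{-1} X^T y
    alpha = np.linalg.solve(X.T @ X + lam * np.eye(n), X.T @ y)
    best_class, best_score = None, np.inf
    for c in np.unique(labels):
        idx = labels == c
        # Class-wise reconstruction residual, normalized by the energy of that class's code
        score = np.linalg.norm(y - X[:, idx] @ alpha[idx]) / (np.linalg.norm(alpha[idx]) + 1e-12)
        if score < best_score:
            best_class, best_score = c, score
    return best_class

With features extracted per video clip, classifying a new clip then reduces to one linear solve plus per-class residual checks.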
Selected Work (Cont.)

We proposed a novel framework using kernel entropy component analysis (KECA) and discriminative canonical correlation, with application to emotion state identification (a sketch of the KECA step appears below).
L. Gao, L. Qi, and L. Guan, “Information Fusion Based on Kernel Entropy Component Analysis in Discriminative Canonical Correlation Space with Application to Audio Emotion Recognition,” in Proc. IEEE Int. Conf. on Acoustics, Speech and Signal Processing, Shanghai, China, March 2016.

We proposed a discriminative framework using kernel entropy component analysis and multiple discriminative canonical correlation, applied to audio emotion state recognition.
L. Gao, L. Qi, and L. Guan, “A Novel Discriminative Framework Integrating Kernel Entropy Component Analysis and Discriminative Multiple Canonical Correlation for Information Fusion,” in Proc. IEEE International Symposium on Multimedia, San Jose, USA, 2016.

We proposed a novel discriminative model for online behavioral analysis with application to emotion state identification.
L. Gao, L. Qin, and L. Guan, “A Novel Discriminative Model for Online Behavioral Analysis with Application to Emotion State Identification,” IEEE Intelligent Systems, vol. 31, no. 5, Sep. 2016.
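As background for the first two items, here is a minimal sketch of the kernel entropy component analysis transform as it is commonly defined, assuming a precomputed kernel (Gram) matrix K over the training data. The audio feature extraction and the discriminative canonical correlation fusion stage are omitted, and the cited papers' exact settings may differ.

import numpy as np

def keca_transform(K, n_components):
    """Kernel entropy component analysis projection of the training samples.

    K: (n, n) kernel (Gram) matrix; KECA conventionally uses the uncentered kernel.
    n_components: number of entropy-preserving axes to keep.
    """
    eigvals, eigvecs = np.linalg.eigh(K)              # ascending eigenvalues, eigenvectors in columns
    # Renyi quadratic entropy contribution of axis i: lambda_i * (1^T e_i)^2
    entropy = eigvals * (eigvecs.sum(axis=0) ** 2)
    top = np.argsort(entropy)[::-1][:n_components]    # keep the axes contributing most entropy
    # Projection of training point j onto axis i is sqrt(lambda_i) * e_i[j]
    return eigvecs[:, top] * np.sqrt(np.clip(eigvals[top], 0.0, None))

Unlike kernel PCA, axes are ranked by their entropy contribution rather than by eigenvalue alone; the resulting per-view projections would then be fed to the canonical correlation stage for fusion.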
Selected Work (Cont.)

We proposed Deep Discriminative Canonical Correlation Analysis (DDCCA), a method that learns nonlinear transformations of two data sets such that the within-class correlation is maximized and the between-class correlation is minimized (a common formulation of this objective is sketched below).
N. Elmadany, Y. He, and L. Guan, “Multiview Learning via Deep Discriminative Canonical Correlation Analysis,” in Proc. IEEE International Conference on Acoustics, Speech, and Signal Processing (ICASSP), Shanghai, China, Mar. 2016.

We proposed a multi-set locality-preserving canonical correlation analysis with application to multiview emotion recognition.
N. El Madany, Y. He, and L. Guan, “Multiview Emotion Recognition via Multi-Set Locality-Preserving Canonical Correlation Analysis,” in Proc. IEEE Int. Symposium on Circuits and Systems, Montreal, Canada, May 2016.
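The within-class / between-class correlation trade-off described for DDCCA can be written, in a common discriminative canonical correlation formulation (a sketch under standard notation; the exact objective, normalization, and network design in the ICASSP paper may differ):

\[
\max_{\theta_x,\,\theta_y,\,w_x,\,w_y}\; w_x^{\top}\!\left(C_w - \eta\,C_b\right) w_y
\quad \text{s.t.} \quad w_x^{\top} S_{xx}\, w_x \;=\; w_y^{\top} S_{yy}\, w_y \;=\; 1,
\]

where \(C_w = \sum_{i,j:\,\ell_i=\ell_j} f(x_i;\theta_x)\, g(y_j;\theta_y)^{\top}\) accumulates cross-view correlations over same-class pairs, \(C_b\) does the same over pairs from different classes, \(S_{xx}\) and \(S_{yy}\) are the (regularized) covariances of the two networks' outputs \(f(\cdot;\theta_x)\) and \(g(\cdot;\theta_y)\), and \(\eta > 0\) weights the penalty. Maximizing the first term while penalizing the second realizes "within-class correlation up, between-class correlation down" on the learned nonlinear features.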