Online Education Evaluation for Signal Processing Course


Online Education Evaluation for Signal Processing Course Through Student Learning Pathways
Kelvin H. R. Ng1, Sivanagaraja Tatinati2, Andy W. H. Khong3

Overview

Existing methods use frequency-based features to forecast course performance. We design a multi-modal learning schema, based on deep-learning techniques, that combines learning sequences, psychometric measures and personality traits to study the impact of online learning sequences on forecasting course outcomes.

Models

Frequency-based models:
  F1: Fully-connected, 100 nodes
  F2: Convolution 500 3x3 -> Convolution 300 3x3 -> Convolution 100 3x3
Sequence-based models:
  S1: Auto-encoding 50 nodes -> LSTM 100 nodes
  S2: Embedding 50 nodes
  S3: Convolution, 32 7x7 filters
  S4

Layer roles:
  Fully-connected layer: linear/non-linear weighted relationships
  Auto-encoding/embedding layer: discrete-to-continuous mapping; principal feature extraction
  Convolution layer: spatial numeric features
  Long short-term memory: remembers temporal patterns

Sequences vs Frequencies

We demonstrate that sequence-based models are more consistent when the duration of analysis (weekly, topical, pre/post-tests, semester) is varied.

Limited Observation Behind Computer Screens

Observations in an online learning environment are limited, so many assumptions must be made. One example is assuming that students are paying attention to the video material being played. These observation limits restrict prediction performance, but the assumptions can be softened with knowledge of user characteristics: pseudo-stationary psychometric measures can inform models to achieve better prediction performance.
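The distinction between the two feature families can be sketched in a few lines. This is an illustrative example, not the authors' code: the event names and the log format are assumptions based on the activity sequences shown on the poster.

```python
# Sketch: frequency-based vs sequence-based features from a clickstream log.
from collections import Counter

# Hypothetical video-activity log for one student (event names assumed).
log = ["Start activity", "Play", "Pause", "Play", "Scrub Forward",
       "Pause", "Scrub Back", "Play", "End", "End activity"]

EVENTS = ["Start activity", "Play", "Pause", "Scrub Forward",
          "Scrub Back", "End", "End activity"]

def frequency_features(log):
    """Order-free counts of each event type (input to F1/F2-style models)."""
    counts = Counter(log)
    return [counts[e] for e in EVENTS]

def sequence_features(log):
    """Order-preserving integer codes, suitable for an embedding or
    auto-encoding layer followed by an LSTM (S1/S2-style models)."""
    index = {e: i for i, e in enumerate(EVENTS)}
    return [index[e] for e in log]

print(frequency_features(log))  # [1, 3, 2, 1, 1, 1, 1]
print(sequence_features(log))   # [0, 1, 2, 1, 3, 2, 4, 1, 5, 6]
```

Note that the frequency representation discards the ordering (replaying after a scrub-back looks identical to replaying before it), which is exactly the information the sequence-based models retain.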
Data & Process Flow

Example practice-activity sequence: Start Practice -> Select answer (Q1) -> Select answer (Q2) -> Select answer (Q3) -> Edit answer (Q1) -> Submit answers -> End Practice
Example video-activity sequence: Start activity -> Play -> Pause -> Scrub Forward -> Scrub Back -> End -> End activity

Observable variables (measurable at any instant): the online log of resource usage, from which frequency-based and sequence-based features are derived.
Latent variables (at most measurable at fixed intervals): performance-approach goals, performance-avoidance goals, academic buoyancy, theory of intelligence, mastery, grit.
These features feed the auto-encoding/embedding, convolution, long short-term memory and fully-connected layers that predict learning outcomes.

Results

Without covariates, frequency-based models generalize better. Sequence-based models benefit from the additional information in covariates and surpass frequency-based models when information is limited. Frequency-based models fail to train as the analysis period is increased: by the central limit theorem, the frequency-based features of individual students approach Gaussian distributions. Better results cannot be achieved with full historical information alone or with covariates alone; only the combination of both achieves the best prediction performance for training and testing. S3 achieves the best generalization performance.

Authors

1 Ph.D. Candidate, School of Electrical and Electronics Engineering, Nanyang Technological University, Singapore, e140025@e.ntu.edu.sg
2 Research Fellow, Delta-NTU Corporate Lab for Cyber-Physical Systems, Nanyang Technological University, Singapore, tatinati@ntu.edu.sg
3 Associate Professor, School of Electrical and Electronics Engineering, Nanyang Technological University, Singapore, andykhong@ntu.edu.sg

Acknowledgements

This work was conducted within the Delta-NTU Corporate Lab for Cyber-Physical Systems with funding support from Delta Electronics Inc and the National Research Foundation (NRF) Singapore under the Corp Lab@University Scheme.
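The central-limit-theorem point can be illustrated numerically: an event count accumulated over a long analysis period is a sum of many per-week counts, so its distribution approaches a Gaussian. All parameters below (weekly count range, 13-week semester) are illustrative assumptions, not values from the study.

```python
# Numerical sketch of the CLT argument behind the frequency-feature result.
import random
random.seed(0)

def weekly_count():
    """Hypothetical number of logged events in one week (uniform 0-9)."""
    return random.randrange(10)

def semester_count(weeks=13):
    """Event count over a full analysis period: a sum of weekly counts."""
    return sum(weekly_count() for _ in range(weeks))

samples = [semester_count() for _ in range(20000)]
mean = sum(samples) / len(samples)
var = sum((x - mean) ** 2 for x in samples) / len(samples)
# For a Gaussian, about 68% of samples fall within one standard deviation.
within1 = sum(abs(x - mean) < var ** 0.5 for x in samples) / len(samples)
print(mean, var, within1)  # mean ~ 58.5, var ~ 107.25, within1 ~ 0.68
```

As the aggregate counts of different students converge toward similar bell curves, the individual differences that frequency-based models rely on wash out, which is consistent with those models failing to train over longer analysis periods.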
Without knowledge of domain features, auto-encoding layers extract principal features, while convolution layers provide abstractions of these principal features.
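"Principal feature extraction" can be made concrete through a known analogy: a linear auto-encoder with a narrow bottleneck learns (up to scale and rotation) the top principal components of its inputs. The sketch below recovers the leading principal direction directly by power iteration on a covariance matrix; the 2-D data and all sizes are illustrative, not taken from the study.

```python
# Sketch: the principal direction a one-unit linear auto-encoder would learn.
import random
random.seed(1)

# Correlated 2-D data: y ~ 2x, so the principal direction is ~(1, 2)/sqrt(5).
data = []
for _ in range(500):
    x = random.gauss(0, 1)
    data.append((x, 2 * x + random.gauss(0, 0.1)))

n = len(data)
mx = sum(p[0] for p in data) / n
my = sum(p[1] for p in data) / n
cxx = sum((p[0] - mx) ** 2 for p in data) / n
cyy = sum((p[1] - my) ** 2 for p in data) / n
cxy = sum((p[0] - mx) * (p[1] - my) for p in data) / n

# Power iteration on the 2x2 covariance matrix [[cxx, cxy], [cxy, cyy]].
v = (1.0, 0.0)
for _ in range(100):
    w = (cxx * v[0] + cxy * v[1], cxy * v[0] + cyy * v[1])
    norm = (w[0] ** 2 + w[1] ** 2) ** 0.5
    v = (w[0] / norm, w[1] / norm)

print(v)  # close to (0.447, 0.894), i.e. (1, 2)/sqrt(5)
```

Projecting the raw features onto such directions is the "principal feature" step; stacking convolution layers on top then builds local abstractions of those projections.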