Object Classification through Deconvolutional Neural Networks
Student: Carlos Rubiano | Mentor: Oliver Nina
Deep Convolutional Neural Networks (DCNNs) have achieved great success in large-scale object classification.
Problem
- Despite this success, there is an issue in the way networks learn filters from training data.
- Training on large amounts of data risks overfitting, which can amount to memorization.
- Overfitting can be minimized by training several independent models on the same or different parts of the training data.
- The classification results from the different models are then combined.
Methodology
- Bootstrap aggregation (bagging): uniformly sample with replacement from a training set D containing n samples to create d subsets of size n', where n ≥ n'.
- Train d networks on the different subsets of the dataset and combine their results through majority voting, probabilistic averaging, and the geometric mean.
- A voting ensemble can also use the same dataset and the same amount of data, since the randomness of the minibatch process in stochastic gradient descent makes each DCNN's training different.
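The bagging and combination steps above can be sketched as follows; this is a minimal NumPy illustration (the function names and the shape of the prediction array are assumptions, not from the poster):

```python
import numpy as np

def bootstrap_subsets(n, d, n_prime, rng=None):
    """Draw d bootstrap subsets (as index arrays) of size n' from a dataset of
    n samples, sampling uniformly with replacement."""
    rng = np.random.default_rng(rng)
    return [rng.integers(0, n, size=n_prime) for _ in range(d)]

def combine_predictions(probs):
    """Combine per-model class probabilities of shape (d, num_samples,
    num_classes) with the three rules named in the methodology."""
    num_classes = probs.shape[2]
    # Majority voting: each model votes for its argmax class per sample.
    votes = probs.argmax(axis=2)  # (d, num_samples)
    majority = np.array([np.bincount(v, minlength=num_classes).argmax()
                         for v in votes.T])
    # Probabilistic averaging: mean of the probability vectors.
    prob_avg = probs.mean(axis=0).argmax(axis=1)
    # Geometric mean: mean in log space (small epsilon avoids log(0)).
    geo_mean = np.exp(np.log(probs + 1e-12).mean(axis=0)).argmax(axis=1)
    return majority, prob_avg, geo_mean
```

Each trained network would supply one slice of `probs`; in practice the three combination rules often agree, and disagreements mark the samples where the ensemble is uncertain.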
Preliminary Testing
Additional Results
- Kaggle, the world's largest community of data scientists, is currently holding a CIFAR-10 Object Recognition competition.
- We applied our methodology and submitted results to the competition, achieving 7th place out of 155 teams.
What's Next?
- Explore other approaches to data augmentation, such as sampling with custom features and image variants.
- Implement the state-of-the-art Network in Network architecture and combine it with maxout and our methodology.
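The image-variant augmentation mentioned above could look like the following minimal sketch, assuming HxWxC image arrays; the specific transforms (horizontal flip, reflect-padded random crop) and the function name are illustrative choices, not the poster's method:

```python
import numpy as np

def image_variants(image, pad=4, rng=None):
    """Return simple variants of an HxWxC image: the original, a horizontal
    flip, and a random crop taken from a reflect-padded copy so the output
    keeps the original size. Illustrative augmentation only."""
    rng = np.random.default_rng(rng)
    h, w = image.shape[:2]
    variants = [image, image[:, ::-1]]  # original + horizontal flip
    padded = np.pad(image, ((pad, pad), (pad, pad), (0, 0)), mode="reflect")
    top, left = rng.integers(0, 2 * pad + 1, size=2)
    variants.append(padded[top:top + h, left:left + w])
    return variants
```

Variants like these can be generated on the fly for each minibatch, which effectively enlarges the training set without storing extra data.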