1
Automated Solar Cavity Detection
Image Processing & Pattern Recognition
Athena Johnson
2
Outline: Introduction, Background, Problem Statement, Proposed Solution, Experiments, Conclusions, Future Work
3
Introduction
4
Background: Solar Dynamics Observatory (SDO)
Extreme Ultraviolet Variability Experiment (EVE), Helioseismic and Magnetic Imager (HMI), Atmospheric Imaging Assembly (AIA); 1.5 terabytes (TB) of data per day.
-- I suggest completely removing any discussion of CMEs. -- Instead, focus on the large volume of solar images using numbers and facts, and thus the need for automated detection of solar cavities. -- My understanding is that there is not much previous research on solar cavities, but you do need to explain all you know about previous research on solar cavity detection. Ultimately, the audience wants to clearly know: what has been done, and what is your approach?
5
Atmospheric Imaging Assembly (AIA)
Images the corona of the Sun. Supports the study of solar storms: how they are created, how they propagate upward, how they emerge from the Sun, and how magnetic fields heat the corona.
6
SOLAR CAVITIES There is currently an increase in implementations focused on solar cavities: off-limb structures that appear as darker elliptical regions encompassed by lighter regions. They are hypothesized to be precursors to solar events and can aid in establishing a predictive solar weather system.
7
SOLAR CAVITIES Labrosse, Dalla and Marshall (2010)
Radial intensity profiles, a Support Vector Machine (SVM), region growing, calculation of metrics, and a running difference on subsequent images.
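For orientation, a radial intensity profile is simply the mean image intensity as a function of distance from the disk center. The sketch below is a minimal NumPy illustration of that idea, not Labrosse, Dalla and Marshall's actual pipeline; the center coordinates and bin width are hypothetical inputs.

```python
import numpy as np

def radial_intensity_profile(img, cx, cy, r_max, dr=1.0):
    """Mean intensity in annuli of width dr (pixels) around the disk center (cx, cy)."""
    ys, xs = np.indices(img.shape)
    r = np.hypot(xs - cx, ys - cy)            # distance of every pixel from the center
    edges = np.arange(0.0, r_max + dr, dr)    # annulus boundaries
    profile = []
    for lo, hi in zip(edges[:-1], edges[1:]):
        mask = (r >= lo) & (r < hi)
        profile.append(img[mask].mean() if mask.any() else np.nan)
    return np.asarray(profile)
```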
8
SOLAR CAVITIES Durak and Nasraoui (2010)
Extraction of principal contours, calculations on the contours, and AdaBoost.
9
Problem statement: long computation times; detections based on metrics; weak events missed; multiple detections of the same event; multiple events missed; low hit rates.
-- Show a few different types of solar cavities to help with your points.
10
Haar Classifier: a method published by Paul Viola and Michael Jones in 2001. Four key concepts: Haar-like features, the integral image, AdaBoost, and a cascade of classifiers.
11
Haar-Like Features: aid in satisfying real-time requirements
Rectangular regions reduce computation.
Good.
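As a minimal sketch of what a Haar-like feature computes (a generic two-rectangle feature, not necessarily one selected in this work): the feature value is just the difference between the pixel sums of two adjacent rectangles.

```python
import numpy as np

def two_rect_haar_feature(patch):
    """Two-rectangle Haar-like feature: sum(left half) minus sum(right half).

    patch: 2-D NumPy array, one grayscale sub-window of the image.
    """
    h, w = patch.shape
    left = patch[:, : w // 2].sum()
    right = patch[:, w // 2 :].sum()
    return float(left - right)
```

Because only rectangle sums are needed, each feature becomes very cheap to evaluate once an integral image is available (next slides).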
12
Integral Images: rapid computation of Haar-like features
13
Integral Images: the sum of a rectangular region in the original image (8+6+2+5+6+3 = 30) can be obtained from just four values of the integral image (50-17-5+2 = 30).
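A minimal NumPy sketch of the same idea, with a small hypothetical array rather than the values on the slide: the integral image is a double cumulative sum, and any rectangle sum then needs only four lookups.

```python
import numpy as np

def integral_image(img):
    """Integral image padded with a leading row and column of zeros."""
    ii = np.zeros((img.shape[0] + 1, img.shape[1] + 1), dtype=np.int64)
    ii[1:, 1:] = img.cumsum(axis=0).cumsum(axis=1)
    return ii

def rect_sum(ii, r0, c0, r1, c1):
    """Sum of img[r0:r1, c0:c1] from four integral-image lookups."""
    return ii[r1, c1] - ii[r0, c1] - ii[r1, c0] + ii[r0, c0]

img = np.arange(16).reshape(4, 4)                       # small hypothetical image
ii = integral_image(img)
assert rect_sum(ii, 1, 1, 3, 3) == img[1:3, 1:3].sum()  # four lookups match the direct sum
```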
14
AdaBoost: aids in increasing accuracy and speed
Begin with uniform weights over the training examples; obtain a weak classifier; update the weights. Weak classifier h1(x).
-- Like the integral image, start with a statement on why AdaBoost is used, then explain how it works.
15
AdaBoost: weak classifiers h2(x) and h3(x)
16
AdaBoost: the weak classifiers are combined to form the strong classifier.
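A minimal sketch of the loop described on these slides, using textbook discrete AdaBoost with threshold stumps as the weak classifiers h1(x), h2(x), ...; it illustrates the weighting scheme, not the exact training configuration used in this work.

```python
import numpy as np

def adaboost(X, y, n_rounds=10):
    """Discrete AdaBoost with threshold stumps.

    X: (n_samples, n_features) feature values (e.g. Haar-like feature responses)
    y: labels in {-1, +1}
    Returns a list of (alpha, feature_index, threshold, polarity) weak classifiers.
    """
    n, d = X.shape
    w = np.full(n, 1.0 / n)                       # start with uniform weights
    strong = []
    for _ in range(n_rounds):
        best = None
        for j in range(d):                        # pick the stump with lowest weighted error
            for thr in np.unique(X[:, j]):
                for pol in (+1, -1):
                    pred = np.where(pol * (X[:, j] - thr) >= 0, 1, -1)
                    err = w[pred != y].sum()
                    if best is None or err < best[0]:
                        best = (err, j, thr, pol, pred)
        err, j, thr, pol, pred = best
        err = np.clip(err, 1e-12, 1 - 1e-12)
        alpha = 0.5 * np.log((1 - err) / err)     # weight of this weak classifier
        w = w * np.exp(-alpha * y * pred)         # re-weight: boost misclassified examples
        w /= w.sum()
        strong.append((alpha, j, thr, pol))
    return strong

def predict(strong, x):
    """Strong classifier: sign of the weighted vote of the weak classifiers."""
    s = sum(a * (1 if pol * (x[j] - thr) >= 0 else -1) for a, j, thr, pol in strong)
    return 1 if s >= 0 else -1
```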
17
Cascade of classifiers
Increases the speed of detection. The Haar-like features from all stages are combined into a final classifier model: a cascade of boosted classifiers with Haar-like features.
-- Again, explain why a cascade of classifiers is used.
18
Cascade of classifiers
A series of classifiers is applied to every sub-window of the image. A positive result from the first classifier triggers evaluation by the second classifier, and so on; a negative result at any stage rejects the sub-window immediately.
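A minimal sketch of the early-rejection behaviour described above; the stage classifiers are hypothetical callables, each standing for one boosted stage.

```python
def cascade_detect(stages, subwindow):
    """Apply a cascade of stage classifiers to one sub-window.

    stages: list of callables returning True (pass) or False (reject),
            ordered from cheapest to most expensive.
    """
    for stage in stages:
        if not stage(subwindow):
            return False      # rejected: later, more expensive stages are never evaluated
    return True               # passed every stage: report a detection
```

Because most sub-windows contain no cavity, almost all of them are discarded by the first cheap stages, which is where the speed-up comes from.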
19
Initial Solution
-- Talk about the problems with the first model first, then the second model. -- Focus on the differences when you explain the model.
20
Results: Manually Selected Training Image Sets
Positive samples = 100; negative samples = 400. A correct detection rate of ≈ 79.6% was achieved.
-- This slide is completely out of place. If you want to show the result of the first model, show and explain the model first.
21
Results: missed detections in specific quadrants, detections on the Sun's disk, and overlapping detections.
22
Proposed Solution
23
Minimized training sets
10 positive images, 10 negative images.
-- Do not just use "experiment." Use a more specific title that is consistent with the model.
24
Mark regions of interest and rotate
Deriving new images from the selected images; rotation applied to both training sets, as sketched below.
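A minimal OpenCV sketch of this kind of rotation-based derivation. The 1-degree step is an assumption, chosen because 10 initial images × 360 rotations would match the 3600 samples reported later; the actual parameters may differ.

```python
import cv2

def rotated_copies(roi, step_deg=1):
    """Yield rotated copies of one marked region of interest.

    roi      : grayscale patch (NumPy array) cut out around a marked cavity
    step_deg : rotation increment in degrees (assumed value)
    """
    h, w = roi.shape[:2]
    center = (w / 2.0, h / 2.0)
    for angle in range(0, 360, step_deg):
        M = cv2.getRotationMatrix2D(center, angle, 1.0)   # 2x3 rotation matrix
        yield cv2.warpAffine(roi, M, (w, h))              # rotated patch, same size

# e.g. 10 marked positives -> 10 * 360 = 3600 derived positive samples
# positives = [img for roi in marked_rois for img in rotated_copies(roi)]
```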
25
Transform regions of interest
Transformations applied to the cavity regions.
26
Preprocessing: edge detection, Hough lines, and calculation of the solar radius.
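The slide lists edge detection, Hough lines, and a radius calculation; the sketch below is a loose stand-in that uses OpenCV's Hough circle transform (not Hough lines) to estimate the limb directly, and all thresholds are assumptions. Presumably the estimated disk radius is what allows on-disk detections to be filtered out later.

```python
import cv2

def estimate_disk(gray):
    """Rough estimate of the solar disk center and radius (in pixels) from a grayscale image."""
    blurred = cv2.GaussianBlur(gray, (9, 9), 2)
    # cv2.HoughCircles runs a Canny edge detector internally (param1 is its upper threshold),
    # so the edge-detection step is folded into this one call.
    circles = cv2.HoughCircles(
        blurred, cv2.HOUGH_GRADIENT, dp=2, minDist=gray.shape[0],
        param1=150, param2=50,
        minRadius=gray.shape[0] // 4, maxRadius=gray.shape[0] // 2)
    if circles is None:
        return None
    cx, cy, r = circles[0][0]
    return float(cx), float(cy), float(r)
```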
27
Results: Derived Training Image Sets
Initial images in each set = 10; positive samples = 3600; negative samples = 3600. A correct detection rate of ≈ 96% was achieved.
-- I understand this 96% is a performance-testing result. Please check how this rate is calculated. Average of 10 runs? 20 runs? From 10-fold cross-validation?
28
Final image with detections
-- For each slide, you want to tell the audience something. If possible, use a more specific slide title.
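For context, applying a trained Haar cascade to a solar image with OpenCV looks roughly like this; the model filename, image filename, and detectMultiScale parameters are placeholders rather than the values used in the actual experiments.

```python
import cv2

cascade = cv2.CascadeClassifier("cavity_cascade.xml")    # hypothetical trained cascade file
img = cv2.imread("aia_frame.png", cv2.IMREAD_GRAYSCALE)  # hypothetical AIA image

# Each detection is an (x, y, w, h) bounding box in pixel coordinates.
boxes = cascade.detectMultiScale(img, scaleFactor=1.1, minNeighbors=3)

for (x, y, w, h) in boxes:
    cv2.rectangle(img, (x, y), (x + w, y + h), 255, 2)    # draw each detection
cv2.imwrite("aia_detections.png", img)
```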
29
Conclusion: less manual work; short training times (< 22 hours); a wider range of detections, covering both weak and strong cavities; fast run times (< 1 second per image); higher hit rates.
-- Let the facts talk. When you say "short training time," how long exactly?
30
Future work: technique improvement and reduction of false positives
Contour detection, template matching, and customized Haar-like features.
31
Future work: find the optimal number of training sets; extract metrics; user interface.
32
QUESTIONS?