1
S. Frasca Baton Rouge, March 2007
2
Whole sky blind hierarchical search (P. Astone, SF, C. Palomba – Roma1)
Targeted search (F. Antonucci, F. Ricci – Roma1)
Binary source search (T. Bauer, J. v.d. Brand, S. v.d. Putten – Amsterdam)
3
Our method is based on the use of Hough maps, built starting from the peak maps obtained from the SFTs.
4
Here is a rough sketch of our pipeline: for each data set, h-reconstructed data → data quality → SFDB → average spectrum estimation → peak map → Hough transform → candidates; the two candidate lists then go through coincidences, a coherent step and, finally, events.
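To make the incoherent step concrete, here is a minimal Python sketch of a Hough-like number-count map built from a peak map. It is deliberately simplified: it accumulates counts over frequency and spin-down only, whereas the actual pipeline maps the peaks onto the sky for each frequency/spin-down pair, and names such as build_hough_map are ours, not part of the PSS code.

```python
import numpy as np

def build_hough_map(peaks, f0_grid, fdot_grid, df):
    """Toy Hough-like map: count how many (time, frequency) peaks are
    consistent with each template f(t) = f0 + fdot * t.
    `peaks` is an iterable of (t, f) pairs, `df` the frequency bin width."""
    counts = np.zeros((len(f0_grid), len(fdot_grid)), dtype=int)
    for t, f in peaks:
        for j, fdot in enumerate(fdot_grid):
            f0 = f - fdot * t                      # frequency at the reference time
            i = int(round((f0 - f0_grid[0]) / df))
            if 0 <= i < len(f0_grid):
                counts[i, j] += 1
    return counts

# Tiny usage example: a fake signal at f0 = 100.0 Hz, fdot = -1e-9 Hz/s plus noise peaks
rng = np.random.default_rng(0)
times = np.arange(0.0, 1e6, 1024.0)                       # one peak map per SFT
signal = [(t, 100.0 - 1e-9 * t) for t in times]
noise = [(t, rng.uniform(99.9, 100.1)) for t in times]
df = 1.0 / 1024.0
f0_grid = np.arange(99.9, 100.1, df)
fdot_grid = np.linspace(-2e-9, 0.0, 21)
hough = build_hough_map(signal + noise, f0_grid, fdot_grid, df)
print(np.unravel_index(hough.argmax(), hough.shape))      # loudest (f0, fdot) bin
```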
5
The software is described in the document at http://grwavsf.roma1.infn.it/pss/docs/PSS_UG.pdf
8
Time-domain big event removal.
Non-linear adaptive estimation of the power spectrum (the estimated power spectra are saved together with the SFTs and the peak maps).
Only relative maxima are taken (slightly less sensitivity in the ideal case, much more robustness in practice).
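A minimal sketch of this peak selection, under simplifying assumptions of ours (a plain exponential-smoothing estimate stands in for the non-linear adaptive spectrum estimator, and the threshold value is only illustrative); the function name is ours, not part of the PSS code.

```python
import numpy as np

def peak_map(periodogram, threshold=1.58, tau=50):
    """Select relative maxima of the equalized spectrum of one SFT.
    The running mean below is a crude stand-in for the adaptive
    power-spectrum estimator actually used in the pipeline."""
    floor = np.empty_like(periodogram)
    acc = periodogram[0]
    w = 1.0 / tau
    for k, p in enumerate(periodogram):
        acc = (1.0 - w) * acc + w * p              # exponential running mean
        floor[k] = acc
    ratio = periodogram / floor                    # equalized spectrum
    above = ratio > threshold
    # keep only relative (local) maxima of the equalized spectrum
    local_max = np.r_[False, (ratio[1:-1] > ratio[:-2]) & (ratio[1:-1] > ratio[2:]), False]
    return np.flatnonzero(above & local_max)

# usage: indices of selected peaks in a white-noise spectrum plus one injected line
rng = np.random.default_rng(1)
spec = rng.exponential(1.0, 4096)
spec[1234] += 20.0
print(peak_map(spec))
```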
16
Periodogram of 2^22 (= 4,194,304) data samples of C7
23
Seconds on the abscissa. Note, in the full stretch, the slow amplitude variation and, in the zoom, the perfect synchronization with the deci-second.
25
1 kHz band analysis: peak maps
A further cleaning procedure is applied to the peak maps, consisting in putting a threshold on the peak frequency distribution. This is needed to avoid too large a number of candidates, which would imply a reduction in sensitivity.
C7: peak frequency distribution before and after cleaning
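A sketch of this kind of cleaning, with assumptions of ours (the occupancy threshold is set at a few times the median occupancy, which may differ from the actual criterion, and clean_peak_maps is a name of ours):

```python
import numpy as np

def clean_peak_maps(peak_maps, n_bins, factor=5.0):
    """Remove frequency bins that are hit too often across the SFTs.
    `peak_maps` is a list of arrays of peak bin indices, one per SFT;
    the occupancy threshold (factor * median) is an illustrative choice."""
    occupancy = np.zeros(n_bins, dtype=int)
    for pm in peak_maps:
        occupancy[pm] += 1
    threshold = factor * np.median(occupancy[occupancy > 0])
    bad_bins = np.flatnonzero(occupancy > threshold)
    return [pm[~np.isin(pm, bad_bins)] for pm in peak_maps]
```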
26
Now we are using the “standard” (not “adaptive”) Hough transform. Here are the results.
27
Parameter space: observation time, frequency band, frequency resolution, number of FFTs, sky resolution, spin-down resolution.
~10^13 points in the parameter space are explored for each data set.
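As a rough illustration of how a number of that order arises, a short back-of-the-envelope computation; every number below is an assumption of ours, not a value quoted in the analysis.

```python
# Purely illustrative numbers (not the actual values of the C6/C7 analysis),
# only to show how a parameter-space size of order 10^13 is put together.
f_band = 1000.0                        # Hz, analysed frequency band
t_fft = 1024.0                         # s, assumed length of one FFT
df = 1.0 / t_fft                       # Hz, frequency resolution
n_freq = int(f_band / df)              # ~1e6 frequency bins
n_spindown = 10                        # assumed spin-down values per frequency bin
n_sky = 2_000_000                      # assumed sky patches at the highest frequencies
print(f"{n_freq * n_spindown * n_sky:.1e} points")   # ~2e13
```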
28
Candidate selection
On each Hough map (corresponding to a given frequency and spin-down), candidates are selected by putting a threshold on the CR. The threshold is chosen according to the maximum number of candidates we can manage in the next steps of the analysis.
Number of candidates found:
C6: 922,999,536 candidates
C7: 319,201,742 candidates
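A sketch of this selection, assuming the CR (critical ratio) of a Hough-map pixel is defined as (n − μ)/σ, with μ and σ the mean and standard deviation of the number counts over the map; this definition and the function name are our assumptions for illustration.

```python
import numpy as np

def select_candidates(hough_map, cr_threshold):
    """Return pixel indices whose critical ratio exceeds the threshold.
    The CR is computed here from the sample mean/std of the map itself."""
    mu = hough_map.mean()
    sigma = hough_map.std()
    cr = (hough_map - mu) / sigma
    return np.argwhere(cr > cr_threshold)

# usage on any 2-D array of number counts
rng = np.random.default_rng(2)
toy_map = rng.poisson(20.0, size=(200, 100))
toy_map[50, 30] += 60                      # injected excess
print(select_candidates(toy_map, cr_threshold=7.0))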
29
1 kHz band: candidate analysis. C6: frequency distribution of candidates (spin-down 0); f [Hz] on the abscissa.
30
C6: frequency distribution of candidates (spin-down 0), f [Hz]; sky distribution of candidates (~673.8 Hz); peak frequency distribution.
31
C6: frequency distribution of candidates (spin-down 0), f [Hz]; sky distribution of candidates (~980 Hz); peak frequency distribution.
32
C6: frequency distribution of candidates (spin-down 0), f [Hz]; sky distribution of candidates (881-889 Hz); peak frequency distribution.
33
C7: frequency distribution of candidates (spin-down 0), f [Hz]; sky distribution of candidates (779.5 Hz); peak frequency distribution.
34
Red line: theoretical distribution.
35
‘Quiet’ band vs. ‘disturbed’ band. Many candidates appear in ‘bumps’ (at high latitude), due to the short observation time, and in ‘strips’ (at low latitude), due to the symmetry of the problem.
36
Coincidences
Coincidences are made by comparing the set of parameter values identifying each candidate, to reduce the false alarm probability and also the computational load of the coherent “follow-up”.
Number of coincidences: 2,700,232
False alarm probability: band 1045-1050 Hz
Coincidence windows:
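A minimal sketch of such a coincidence step, under assumptions of ours: each candidate is represented by a frequency, a spin-down and a sky position (ecliptic coordinates), and two candidates are in coincidence when all parameters agree within the corresponding windows; the data layout and function name are not those of the PSS code.

```python
import numpy as np

def coincidences(cands_a, cands_b, w_f, w_fdot, w_sky):
    """Count pairs of candidates from the two sets whose parameters agree
    within the coincidence windows (frequency, spin-down, sky distance).
    Each candidate is a dict with keys 'f', 'fdot', 'lam', 'beta' (rad).
    Brute-force O(N*M); a real implementation would sort/bin by frequency."""
    n = 0
    for a in cands_a:
        for b in cands_b:
            if abs(a['f'] - b['f']) > w_f or abs(a['fdot'] - b['fdot']) > w_fdot:
                continue
            # angular distance on the sky
            cos_d = (np.sin(a['beta']) * np.sin(b['beta'])
                     + np.cos(a['beta']) * np.cos(b['beta'])
                     * np.cos(a['lam'] - b['lam']))
            if np.arccos(np.clip(cos_d, -1.0, 1.0)) <= w_sky:
                n += 1
    return n
```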
40
‘Mixed data’ analysis
Let us consider two sets of ‘mixed’ data (each run is cut in two, so that the time ordering is A6, B6 within C6 and A7, B7 within C7):
Produce candidates for data set A = A6 + A7
Produce candidates for data set B = B6 + B7
Make coincidences between A and B
Two main advantages (see the sketch after this list):
larger time interval -> fewer ‘bunches’ of candidates expected
easier comparison procedure (same spin-down step for both sets)
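A trivial sketch of how the mixed sets could be formed from the two runs; splitting by segment count is our simplification, since the actual cut is defined in time.

```python
def split_mixed(c6_segments, c7_segments):
    """Form the 'mixed' sets A = A6 + A7 and B = B6 + B7 by cutting each run
    in two halves (here simply by segment count; in the real analysis the cut
    is in time, giving the ordering A6, B6, A7, B7)."""
    half6, half7 = len(c6_segments) // 2, len(c7_segments) // 2
    set_a = c6_segments[:half6] + c7_segments[:half7]
    set_b = c6_segments[half6:] + c7_segments[half7:]
    return set_a, set_b
```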
41
Any log file mainly contains comments, parameters, “events” and statistics. These are the log files of the SFDB construction; they carry information on big time-domain events and big frequency lines (recorded as “events”).
42
File D:\SF_DatAn\pss_datan\Reports\crea_sfdb_20060131_173851.log
started at Tue Jan 31 17:38:51 2006

even NEW: a new FFT has started
  PAR1: beginning time of the new FFT
  PAR2: FFT number in the run

even EVT: time-domain events
  PAR1: beginning time, in mjd
  PAR2: duration [s]
  PAR3: max amplitude*EINSTEIN

even EVF: frequency-domain events, with high threshold
  PAR1: beginning frequency of EVF
  PAR2: duration [Hz]
  PAR3: ratio, in amplitude, max/average
  PAR4: power*EINSTEIN**2 or average*EINSTEIN (average if duration=0, when age>maxage)

stat TOT: total number of frequency-domain events

par GEN: general parameters of the run
  GEN_BEG is the beginning time (mjd)
  GEN_NSAM the number of samples in 1/2 FFT
  GEN_DELTANU the frequency resolution
  GEN_FRINIT the beginning frequency of the FFT
  EVT_CR is the threshold
  EVT_TAU the memory time of the AR estimation
  EVT_DEADT the dead time [s]
  EVT_EDGE seconds purged around the event
  EVF_THR is the threshold in amplitude
  EVF_TAU the memory frequency of the AR estimation
  EVF_MAXAGE [Hz] the max age of the process; if age>maxage the AR is re-evaluated
  EVF_FAC is the factor by which the threshold is multiplied, to write fewer EVF in the log file

stop at Wed Feb 1 12:39:22 2006
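A minimal sketch of how such a log could be scanned for the time-domain and frequency-domain events. The line layout assumed here (the keyword followed by whitespace-separated PAR fields) is our guess from the descriptions above, not the documented PSS log format.

```python
def read_sfdb_events(path):
    """Collect EVT (time-domain) and EVF (frequency-domain) event lines
    from an SFDB-construction log, assuming whitespace-separated fields
    after the keyword (an assumption, not the documented format)."""
    evt, evf = [], []
    with open(path) as log:
        for line in log:
            fields = line.split()
            if not fields:
                continue
            if fields[0] == 'EVT' and len(fields) >= 4:
                # beginning time [mjd], duration [s], max amplitude*EINSTEIN
                evt.append(tuple(float(x) for x in fields[1:4]))
            elif fields[0] == 'EVF' and len(fields) >= 5:
                # beginning frequency, duration [Hz], max/average ratio, power
                evf.append(tuple(float(x) for x in fields[1:5]))
    return evt, evf
```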