1
SEARCH FOR INSPIRALING BINARIES S. V. Dhurandhar IUCAA Pune, India
2
Inspiraling compact binaries GW
3
The inspiraling compact binary
Two compact objects: neutron stars / black holes spiraling in together
Waveform cleanly modeled
6
Parameter Space for 1 – 30 solar masses
7
The matched filter
Optimal filter in Gaussian noise: detection probability is maximised for a given false alarm rate
Stationary noise:
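For reference, the textbook matched-filter statistic in stationary Gaussian noise with one-sided PSD S_n(f) is the noise-weighted correlation (the notation below may differ from the slide's):

$$ c \;=\; 4\,\mathrm{Re}\int_0^{\infty} \frac{\tilde{x}(f)\,\tilde{h}^{*}(f)}{S_n(f)}\, df $$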
8
Matched filtering the inspiraling binary signal
9
Detection Strategy: Maximum Likelihood
Signal depends on many parameters: amplitude, t_a, Φ_a, m_1, m_2
Amplitude: use normalised templates
Scanning over t_a: FFT
Initial phase Φ_a: two templates, for 0 and π/2, added in quadrature
Masses (chirp times): a template bank is required
10
Detection
Data: x(t)
Templates: h_0(t; m_1, m_2) and h_{π/2}(t; m_1, m_2)
Padding: about 4 times the longest signal
Fix the template masses. Outputs: use the FFT to compute c_0(Δ) and c_{π/2}(Δ), where Δ is the time-lag: scan over t_a
Statistic: c²(Δ) = c_0²(Δ) + c_{π/2}²(Δ), maximised over the initial phase
Conventionally we use its square root c(Δ)
Compare c(Δ) with the threshold
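A minimal NumPy sketch of this two-phase, FFT-based statistic (white noise and a toy stand-in chirp are assumed here; the actual search weights by the detector PSD and uses the real template bank):

```python
import numpy as np

def quadrature_statistic(x, h0, hpi2):
    """FFT-based correlation of data x with the two quadrature templates
    h0, hpi2 (zero-padded to the data length), returning
    c(Delta) = sqrt(c0^2 + cpi2^2) for every time lag Delta.
    White noise is assumed; a real search divides by the PSD."""
    n = len(x)
    X = np.fft.rfft(x)
    # Correlation via the FFT: inverse FFT of X * conj(template FFT)
    c0 = np.fft.irfft(X * np.conj(np.fft.rfft(h0, n)), n)
    cpi2 = np.fft.irfft(X * np.conj(np.fft.rfft(hpi2, n)), n)
    return np.sqrt(c0**2 + cpi2**2)

# Toy usage: a swept sinusoid standing in for a chirp, buried in noise
rng = np.random.default_rng(0)
t = np.arange(4096) / 2048.0                  # 2 s at 2048 Hz
h0 = np.sin(2 * np.pi * 100 * t**1.5)         # hypothetical 0-phase template
hpi2 = np.cos(2 * np.pi * 100 * t**1.5)       # pi/2-shifted template
x = rng.normal(size=16384)                    # padded data train
x[5000:5000 + len(h0)] += 0.3 * h0            # inject a signal at a known lag
c = quadrature_statistic(x, h0, hpi2)
print("peak statistic at lag", c.argmax())    # recovers the injection lag
```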
13
FLAT SEARCH
14
Parameter Space
Parameter space for the mass range 1 – 30 solar masses
15
Hexagonal tiling of the parameter space
Minimal match: 0.97
Density of templates: ~ 1300/sec² (LIGO I PSD)
16
Cost of the flat search
Longest chirp: 95 secs for a lower cut-off of 30 Hz
Sampling rate: 2048 Hz
Length of data train (> 4 × the longest chirp): 512 secs
Number of points in the data train: N = 2^20
Length of padding of zeros: 512 – 95 = 417 secs; this is also the processed length of data
Number of templates: n_t = 11050
Online speed required = n_t × 6 N log₂N / 417 ~ 3.3 GFlops
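A quick numerical check of this estimate, using the same 6 N log₂N flop count per filtered template as on the slide:

```python
import math

n_t = 11050          # templates in the flat bank
N = 2**20            # points per data train (512 s at 2048 Hz)
T_proc = 417.0       # processed (unpadded) seconds per train

flops = n_t * 6 * N * math.log2(N) / T_proc
print(f"online speed ~ {flops / 1e9:.1f} GFlops")   # ~ 3.3 GFlops
```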
17
Hierarchical Search
20
Comparison of contours (H = 0.8) with 256 Hz and 1024 Hz cut-off
21
Hierarchy on masses only
Hierarchy in masses only: gain factor 25–30
Boundary effects not included
22
92% power at f_c = 256 Hz
24
Choice of the first stage threshold
High enough to reduce the false alarm rate and hence the cost in the second stage (for ~2^18 points in a data train)
Low enough to make the 1st stage bank as coarse as possible and hence reduce the cost in the 1st stage
For the given S_min this leads to a first-stage threshold S ~ 6
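An illustrative order-of-magnitude check (my own assumption: Gaussian noise, for which the two-phase quadrature statistic has the Rayleigh tail exp(-ρ²/2)); it suggests why a threshold near 6 keeps the expected number of crossings per template per 2^18-point train small:

```python
import math

N1 = 2**18        # points in a first-stage data train
rho = 6.0         # candidate first-stage threshold

# Rayleigh tail of the two-phase quadrature statistic in Gaussian noise
p_single = math.exp(-rho**2 / 2)
expected_crossings = N1 * p_single     # per template, per data train
print(f"~{expected_crossings:.1e} false crossings per template per train")
```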
26
The first stage templates
28
Templates flowing out of the deemed parameter space are not wasted
Zoomed view near the low-mass end
29
Need 535 templates
30
Cost of the EHS
Longest chirp ~ 95 secs; T_data = 512 secs; T_pad = 417 secs
f_samp,1 = 512 Hz; f_samp,2 = 2048 Hz; N_1 = 2^18; N_2 = 2^20
Power fraction at the 256 Hz cut-off: 0.92; S_min = 8.2 → ~ 6.05
n_1 = 535 templates; n_2 = 11050 templates
S_min ~ 9.2
S_1 ~ 6 n_1 N_1 log₂N_1 / T_pad ~ 36 Mflops
S_2 ~ n_cross × 6 N_2 log₂N_2 / T_pad ~ 13 Mflops (n_cross ~ 0.82)
S ~ 49 Mflops; S_flat ~ 3.3 GFlops → Gain ~ 68
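Reproducing the stage-1 speed and the overall gain with the same flop counting as the flat-search slide (the 13 Mflops second-stage figure depends on the trigger crossing rate and is taken directly from the slide):

```python
import math

T_pad = 417.0
stage1 = 535 * 6 * 2**18 * math.log2(2**18) / T_pad      # first-stage bank, ~36 Mflops
flat   = 11050 * 6 * 2**20 * math.log2(2**20) / T_pad    # flat search, ~3.3 GFlops
total  = 49e6                                            # slide's stage-1 + stage-2 total
print(f"stage 1 ~ {stage1/1e6:.0f} Mflops, gain ~ {flat/total:.0f}")   # gain ~ 68
```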
31
Optimised EHS
32
Performance of the code on real data
S2 L1 playground data was used: 256-second chunks; segwizard data-quality flags
18 hardware injections: recovered impulse time and SNR; the matched-filtering part of the code was validated
Monte-Carlo chirp injections with constant SNR: 679 chunks; the EHS pipeline ran successfully
The coarse-bank level generates many triggers which pass the thresholds, hence the clustering algorithm
33
The EHS pipeline Can run in standalone mode (local Beowulf) or in LDAS
34
Data conditioning
Preliminary data conditioning (LDAS datacondAPI): downsamples the data to 1024 and 4096 Hz; Welch PSD estimate; reference calibration is provided
Code data conditioning: calculate the response function, PSD, h(f); inject a signal
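A rough stand-in for this conditioning step using SciPy in place of the LDAS datacondAPI (the 4096 Hz output rate and Welch estimate are from the slide; the 16384 Hz input rate and segment length are illustrative assumptions):

```python
import numpy as np
from scipy.signal import decimate, welch

def condition(strain, fs_in=16384, fs_out=4096, seg_len=4.0):
    """Downsample a strain time series and form a Welch PSD estimate.
    SciPy is an illustrative stand-in for the LDAS datacondAPI."""
    q = fs_in // fs_out
    x = decimate(strain, q, ftype="fir", zero_phase=True)   # anti-aliased downsampling
    f, psd = welch(x, fs=fs_out, nperseg=int(seg_len * fs_out))
    return x, f, psd

# Toy usage on white noise
rng = np.random.default_rng(1)
x, f, psd = condition(rng.normal(size=16384 * 64))
print(len(x), f[1] - f[0])   # downsampled length and PSD frequency resolution
```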
35
Template banks and matched filtering
LAL routines generate the fine and the coarse banks
Compute the correlations for templates with initial phases of 0 and π/2
Hundreds of events are generated and hence a clustering algorithm is needed
36
The Clustering Algorithm
Step 1: the DBSCAN algorithm is used to identify clusters/islands
Step 2: each cluster is condensed into an event which is followed up by a local fine-bank search
37
DBSCAN (Ester et al. 1996)
1. For every point p in C there is a q in C such that p is in the ε-neighbourhood of q. This breaks up the set of triggers into components.
2. A cluster must have at least N_min points; otherwise it is an outlier.
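A minimal sketch of this step with scikit-learn's DBSCAN; the trigger coordinates and the eps / min_samples values are illustrative, not the pipeline's:

```python
import numpy as np
from sklearn.cluster import DBSCAN

# Triggers as (arrival time [s], chirp mass [solar masses]) pairs -- toy values
triggers = np.array([
    [100.01, 1.20], [100.02, 1.21], [100.00, 1.19],   # one island
    [250.50, 2.80], [250.52, 2.79],                   # another island
    [400.00, 5.00],                                   # isolated point -> outlier
])

# eps plays the role of the epsilon-neighbourhood, min_samples of N_min
labels = DBSCAN(eps=0.1, min_samples=2).fit_predict(triggers)
print(labels)   # cluster indices per trigger; -1 marks outliers
```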
38
Condensing a cluster
Figure of merit: l(x) is the Lagrange interpolation polynomial
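As an illustration only (the slide's actual figure of merit is not reproduced here), one way to condense a cluster is to interpolate the SNR across it with a Lagrange polynomial l(x) and report the location and height of its peak:

```python
import numpy as np
from scipy.interpolate import lagrange

# SNR samples across one cluster, parametrised along a single coordinate x
# (e.g. chirp time); values are illustrative, not pipeline output.
x = np.array([0.0, 1.0, 2.0, 3.0])
snr = np.array([5.8, 6.4, 6.1, 5.2])

l = lagrange(x, snr)                      # Lagrange interpolating polynomial l(x)
xf = np.linspace(x.min(), x.max(), 200)
i = np.argmax(l(xf))
print(f"condensed event: x ~ {xf[i]:.2f}, peak SNR ~ {l(xf)[i]:.2f}")
```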
39
Conclusions
A galaxy-based Monte Carlo simulation is required to determine the detection efficiency; it will incorporate the current understanding of the BNS distribution in masses and in space
The hierarchical search frees up CPU for additional parameters
41
CONCLUSION
* Results shown for Gaussian noise
- Now looking at real data: S1, S2
- Performance in non-Gaussian noise
- Event multiplicity needs to be condensed into a single event
* Reducing the computational cost with the hierarchical approach frees up CPU power for additional parameters