APNIC LAME NS measurements
Overview
– Methodology
– Initial outcomes from 128 days runtime
– How bad is the problem?
– LAME-ness trends
– Proposals for dealing with LAME NS
Methodology
Forked tree of perl processes
– Implemented on the Net::DNS package
– Searches source:APNIC domain objects
– For each listed object, checks its nserver(s)
– 20 parallel tasks
Very low server impact, circa 3 queries/sec
Completes a scan of 45,000 objects in under 5 hours
Status is either OK, partially lame, or fully lame
Daily run
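A minimal sketch of the per-domain status classification described above. The real tool is a Perl/Net::DNS process tree; this Python version, its function name, and its input shape are illustrative assumptions, not APNIC's code:

```python
def classify(ns_results):
    """Return 'ok', 'partially lame', or 'fully lame' for one domain.

    ns_results: dict mapping nserver name -> bool
    (True = the server answered correctly for the zone).
    """
    if not ns_results:
        return "fully lame"       # no nservers listed at all
    up = sum(1 for ok in ns_results.values() if ok)
    if up == len(ns_results):
        return "ok"               # every listed NS answered
    if up == 0:
        return "fully lame"       # no functional NS in the set
    return "partially lame"       # some, but not all, NS answered
```

For example, `classify({"ns1.example.net": True, "ns2.example.net": False})` yields "partially lame".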
Initial outcomes from 128 days of data
20% to 30% of domains have problems
– One or more NS not visible
– SOA serial mismatches
10% to 15% fully lame
– No functional NS in the set of nservers
– Zone file may have other (valid) NS (checks on SOA-listed NS are future work)
How bad is the problem?
During the sample period:
– 33% of domains had all NS visible, all the time
– 43% were better than 99% visible
– 11% of domains had all NS lame, all the time
– 18% have a semi-persistent problem
Caveat: only one probe point
– Need to correlate with other query points
– Coordination with RIPE and ARIN sweeps
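The visibility buckets above can be derived from the daily runs roughly as follows. This is a sketch assuming one all-NS-visible/not-visible result per domain per daily run; the function name and bucket labels are invented:

```python
def visibility_bucket(daily_ok, threshold=0.99):
    """Bucket one domain by the fraction of daily runs where all NS were visible.

    daily_ok: list of booleans, one per daily run (True = all NS visible).
    """
    frac = sum(daily_ok) / len(daily_ok)
    if frac == 1.0:
        return "always visible"     # the 33% case
    if frac == 0.0:
        return "always lame"        # the 11% case
    if frac > threshold:
        return ">99% visible"       # the 43% case
    return "intermittent"           # semi-persistent problems
```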
Full and Partial LAME-ness trend
LAME-ness is all good or all bad
[chart: distribution of domains, legend "No lame" vs "All lame"]
LAME-ness is consistent
Proposals for APNIC SIG-DB
APNIC to send out 'reminders' to the tech-c of domain objects with consistently LAME NS
– Need to set the threshold correctly
– Target the persistently lame cases
After a time, if still lame:
– APNIC disables DNS by marking the domain object with special data
– Causes DNS generation to be skipped
The tech-c (the real data owner) can un-mark the domain object at any time
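The mark-and-skip step could look roughly like this. A sketch only: the marker string and the domain-object layout are assumptions, not an APNIC database format:

```python
def domains_for_generation(domain_objects, marker="LAME-DISABLED"):
    """Yield only domains whose object does not carry the disable marker.

    domain_objects: iterable of dicts with 'domain' and optional 'remarks'.
    A tech-c re-enables DNS generation simply by removing the marker.
    """
    for obj in domain_objects:
        if marker not in obj.get("remarks", []):
            yield obj["domain"]
```

Because generation just filters on the marker, un-marking the object takes effect at the next daily run with no other state to clean up.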