The new APNIC DNS generation system
Previous System
Direct access to backend whois.db files
– Constructed a radix tree in memory from domain objects
– Walked the tree in order to derive zone files for changed zones
Change was defined by the changed: field in the domain object being the current date (see the sketch below)
Zone files pushed to DNS via rsync
Secondary zones controlled by PEERS files, p2n.pl script
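A minimal sketch, in Python rather than the original Perl/whois code, of how that changed:-based test could have worked; the object layout, attribute names and dates are illustrative assumptions:

import datetime

def domain_changed_today(domain_object: dict) -> bool:
    """A domain object counts as changed if any changed: line ends with today's date."""
    today = datetime.date.today().strftime("%Y%m%d")
    # RPSL-style lines look roughly like: "changed: dns-admin@apnic.net 20030819"
    return any(line.strip().endswith(today) for line in domain_object.get("changed", []))

def zones_to_rebuild(domain_objects: list[dict]) -> set[str]:
    """Only zones whose domain objects changed today were re-walked and pushed via rsync."""
    return {obj["domain"] for obj in domain_objects if domain_changed_today(obj)}

objects = [
    {"domain": "1.203.in-addr.arpa", "changed": ["dns-admin@apnic.net 20030819"]},
    {"domain": "2.203.in-addr.arpa", "changed": ["dns-admin@apnic.net 20030401"]},
]
print(zones_to_rebuild(objects))   # empty unless a changed: date matches today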
Problems
Monolithic solution
– Hard to add new options
New database changed the data format
– Backend SQL in v3 would require re-coding anyway
NS reload slow, buggy
– Too many periods with only one functional NS
Problems (continued)
Secondary management intertwined with primary function
– Needed better functional separation
– BIND problems: master / secondary / master
Unnecessary DNS changes
– SOA incremented for descr, nic-hdl changes
Goals for a new production system
Make zone update more efficient
Allow addition of new features
Separate secondaries from main process
Simplify zonefile production
Goals (continued)
Make zone update more efficient
– Can support dynamic processes when viable
Allow addition of new features
– DNSSEC zone signing
– LAME delegation processing
– NOTIFY-based push to DNS from master
– SOA serial increment on real change, not from the changed: flag
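One way to read the last goal: derive the serial bump from a hash of the generated zone content instead of the whois changed: flag. The sketch below is an assumption about how that could be done; the hash choice, YYYYMMDDnn serial format and function names are not from the original system:

import datetime
import hashlib

def zone_content_hash(records: list[str]) -> str:
    """Hash the sorted, SOA-stripped records so only real delegation changes count."""
    body = "\n".join(sorted(r for r in records if " SOA " not in r))
    return hashlib.sha256(body.encode()).hexdigest()

def next_serial(old_serial: int) -> int:
    """YYYYMMDDnn-style serial: move to today's date, or just increment within the day."""
    today_base = int(datetime.date.today().strftime("%Y%m%d")) * 100
    return max(old_serial + 1, today_base)

def maybe_bump(old_serial: int, old_hash: str, new_records: list[str]):
    """Return (serial, hash, changed); the serial only moves when content really changed."""
    new_hash = zone_content_hash(new_records)
    if new_hash == old_hash:
        return old_serial, old_hash, False   # no real change, no DNS churn
    return next_serial(old_serial), new_hash, True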
Goals (continued)
Separate secondaries from main process
– Improves DNS stability
Simplify zonefile production
– Staged pipeline of simple phases
– Use UNIX tools
Simple pipelined process
Get whois data -> Sort & collate -> Filter -> Make zones -> Check for zone change -> Notify DNS
Easy to add parallel passes (e.g. to merge external data sources)
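A minimal sketch of the staged pipeline above as small composable passes; the stage names follow the slide, but the data shapes, grouping rule and helpers are illustrative assumptions:

from typing import Iterable

def get_whois_data() -> Iterable[str]:
    # In the real system this stage would read domain objects from the whois backend.
    return [
        "1.203.in-addr.arpa. NS ns1.example.net.",
        "2.203.in-addr.arpa. NS ns2.example.org.",
    ]

def sort_and_collate(records):
    return sorted(set(records))

def filter_records(records):
    # Placeholder rule: keep only delegation (NS) records.
    return [r for r in records if " NS " in r]

def make_zones(records):
    # Crude grouping: a delegation for 1.203.in-addr.arpa. lives in 203.in-addr.arpa.
    zones = {}
    for r in records:
        owner = r.split()[0]
        parent = owner.split(".", 1)[1]
        zones.setdefault(parent, []).append(r)
    return zones

def zone_changed(zone, records, previous):
    return previous.get(zone) != records

def notify_dns(zone):
    print(f"zone {zone} changed: would rebuild and push / NOTIFY")

def run(previous_state: dict):
    zones = make_zones(filter_records(sort_and_collate(get_whois_data())))
    for zone, records in zones.items():
        if zone_changed(zone, records, previous_state):
            notify_dns(zone)
            previous_state[zone] = records

run({})

A parallel pass (for instance an ERX merge) would simply be another function inserted between filter_records and make_zones.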
Separated secondary processes
Not based on whois domain records
Separate servers
Better records management of prime source contacts
Improved management processes
Can offer scalable secondary services
– To ccTLDs, the AP community, members
Integrating the ERX process
ARIN ERX transfer process:
– Fetch of (partial) zone contents
– Production of a matching file for ARIN, RIPE to fetch from APNIC
Staged pipeline is highly amenable to both local and ERX processes (see the sketch below)
Permits normal whois activity for end-user management of domain objects
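A hedged sketch of how an ERX pass could slot into the same pipeline: merge delegations fetched from another RIR with locally generated ones, then emit the subset APNIC manages for ARIN/RIPE to fetch back. The file names, prefixes and record format here are assumptions, not the real exchange format:

def merge_erx(local_records: list[str], erx_records: list[str]) -> list[str]:
    """Union of locally generated and ERX-fetched delegations, de-duplicated and sorted."""
    return sorted(set(local_records) | set(erx_records))

def write_exported_fragment(records: list[str], managed_prefixes: tuple[str, ...], path: str) -> None:
    """Write the APNIC-managed subset for the other RIRs to fetch (format assumed)."""
    with open(path, "w") as out:
        for record in records:
            if record.startswith(managed_prefixes):
                out.write(record + "\n")

local = ["1.61.in-addr.arpa. NS ns1.example.net."]
fetched_from_arin = ["2.61.in-addr.arpa. NS ns.other-rir.example."]
merged = merge_erx(local, fetched_from_arin)
write_exported_fragment(merged, ("1.61.",), "erx-61-apnic.txt")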
Implementation timeline
April: identify need for new process
– Discuss DNS generation with other RIR, IETF dnsops people
June: ERX discussions in Toronto
– Implementation of treewalk -> flatfile
July
– Full DNS zone generation
August
– Deployment
Issues
Time between ns1, ns3 restart
– Needed to be kept to a minimum (SOA serial/contents mismatch; see the sketch after this list)
– Ensure at least one NS was functional at all times
Check in-addr.arpa resolution offsite
Check non-reverse-tree DNS status
– ccTLD secondaries, RIPE/ARIN secondaries
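A sketch of the serial-consistency check implied above, using dnspython; the server addresses and zone are placeholders, not APNIC's real hosts:

import dns.message
import dns.query
import dns.rdatatype

SERVERS = {"ns1": "192.0.2.1", "ns3": "192.0.2.3"}   # placeholder addresses
ZONE = "203.in-addr.arpa."

def soa_serial(zone: str, server_ip: str) -> int | None:
    """Ask one server directly for the zone's SOA and return its serial."""
    query = dns.message.make_query(zone, dns.rdatatype.SOA)
    response = dns.query.udp(query, server_ip, timeout=5)
    for rrset in response.answer:
        if rrset.rdtype == dns.rdatatype.SOA:
            return rrset[0].serial
    return None

serials = {name: soa_serial(ZONE, ip) for name, ip in SERVERS.items()}
if len(set(serials.values())) > 1:
    print(f"serial mismatch between servers: {serials}")   # the window to keep short
else:
    print(f"serials agree: {serials}")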
Interesting side-effects
Reverse DNS lookup times are faster for clients when there is no cached data
– Less recursion to find an authoritative answer
NXDOMAIN is faster
– Less noise on the wire
Top-level /8 serial increments more often
– We can adjust cache/TTL settings to tune
Traffic Measurements
DNS cutover, Aug 19
Post-Upgrade behaviour
DNS update is faster, simpler
Less delay from whois -> DNS
Overall DNS traffic dropped
More consistent load share JP/AU
US/Europe now fully serviced
APNIC can deploy new services
– Pre-creation of domain objects
– LAME checks
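A sketch of what a LAME delegation check could look like with dnspython: each listed nameserver should answer the zone's SOA authoritatively (AA bit set) without recursion. The zone name and tooling choice are assumptions, not the deployed checker:

import dns.flags
import dns.message
import dns.query
import dns.rdatatype
import dns.resolver

def is_lame(zone: str, ns_ip: str) -> bool:
    """True if the server does not answer authoritatively for the zone."""
    query = dns.message.make_query(zone, dns.rdatatype.SOA)
    query.flags &= ~dns.flags.RD            # no recursion: test the server itself
    try:
        response = dns.query.udp(query, ns_ip, timeout=5)
    except Exception:
        return True                         # unreachable counts as lame here
    return not (response.flags & dns.flags.AA)

def check_delegation(zone: str) -> None:
    """Resolve the zone's NS set, then probe each nameserver address."""
    for ns in dns.resolver.resolve(zone, "NS"):
        ns_name = str(ns.target)
        for addr in dns.resolver.resolve(ns_name, "A"):
            status = "LAME" if is_lame(zone, addr.address) else "ok"
            print(f"{zone} @ {ns_name} ({addr.address}): {status}")

check_delegation("203.in-addr.arpa.")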