DNS Zombies (2016) (apnic.net)
73 points by mjlee on July 21, 2018 | 6 comments


Would this have been as much of a problem if DNS were used for its original purpose -- mapping relatively long-lived hostnames to somewhat less long-lived IP addresses -- instead of the other way around?

Just as the inventors of zombifying pathogens in many stories didn't actually intend to cause the apocalypse, those who built these resolvers probably didn't anticipate that someone would create millions of single-use hostnames for tracking purposes.

Yes, tracking. The article also blames trackers for a share of the zombie queries, and it's the trackers themselves who have been creating all these meaningless hostnames to collect information from unsuspecting users. It's the exact same technique that malicious trackers use to evade blockers. I wouldn't mind if resolvers recognized them as abusive and NXDOMAIN'd them altogether.
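
A toy Python sketch of the suffix-based NXDOMAIN policy I have in mind (the blocklist entries are made up, and this isn't how any particular resolver actually implements it):

  # Toy resolver policy: answer NXDOMAIN for anything under a blocked suffix,
  # no matter how many single-use labels the tracker prepends to it.
  BLOCKED_SUFFIXES = {"track.example.com", "telemetry.example.net"}  # made-up entries

  def should_nxdomain(qname: str) -> bool:
      labels = qname.rstrip(".").lower().split(".")
      return any(".".join(labels[i:]) in BLOCKED_SUFFIXES for i in range(len(labels)))

  assert should_nxdomain("9f1c2d3e.u1234.track.example.com")
  assert not should_nxdomain("www.example.org")

Because the match is on the suffix rather than the full name, the ever-changing throwaway labels buy the tracker nothing.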


That is not, in fact, the original purpose of DNS. Network addresses were generally persistent. But the mechanism for distributing the IP-hostname mapping had become unmanageable; RFC 1034 lays out why (the bandwidth point is illustrated after the quote):

   - Host name to address mappings were maintained by the Network
     Information Center (NIC) in a single file (HOSTS.TXT) which
     was FTPed by all hosts [RFC-952, RFC-953].  The total network
     bandwidth consumed in distributing a new version by this
     scheme is proportional to the square of the number of hosts in
     the network, and even when multiple levels of FTP are used,
     the outgoing FTP load on the NIC host is considerable.
     Explosive growth in the number of hosts didn't bode well for
     the future.

   - The network population was also changing in character.  The
     timeshared hosts that made up the original ARPANET were being
     replaced with local networks of workstations.  Local
     organizations were administering their own names and
     addresses, but had to wait for the NIC to change HOSTS.TXT to
     make changes visible to the Internet at large.  Organizations
     also wanted some local structure on the name space.

   - The applications on the Internet were getting more
     sophisticated and creating a need for general purpose name
     service.
https://tools.ietf.org/html/rfc1034

A compilation of further specification documents is in the Wikipedia article: https://en.wikipedia.org/wiki/Domain_Name_System
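
The "square of the number of hosts" point is worth spelling out: every one of n hosts fetches a file that itself contains n entries. A back-of-the-envelope sketch in Python (the bytes-per-entry figure and host counts are just illustrative):

  # Why HOSTS.TXT distribution scales as O(n^2): n hosts each FTP a file of n entries.
  BYTES_PER_ENTRY = 60  # assumed average size of one HOSTS.TXT line

  def distribution_bytes(hosts: int) -> int:
      file_size = hosts * BYTES_PER_ENTRY
      return hosts * file_size  # n downloads of an n-line file

  for n in (100, 1_000, 10_000):
      print(f"{n:>6} hosts -> {distribution_bytes(n) / 1e6:10.1f} MB per update")

Doubling the host count quadruples the bytes pushed out for a single update, which is what made the scheme untenable.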


Could it be some sort of feedback resulting from servers sending more than one UDP packet to compensate for possible packet loss? Maybe there's a loop somewhere with servers ignoring or changing the request ID, a sort of broadcast storm.
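
For what it's worth, a stub resolver matches a reply to its query by the 16-bit transaction ID (plus the question section), so a reply whose ID has been rewritten is silently dropped and the query gets retransmitted. A minimal raw-socket sketch of that matching step in Python (the resolver address 8.8.8.8 is just an example):

  import os, socket, struct

  def query_a_record(name: str, server: str = "8.8.8.8", timeout: float = 2.0) -> bytes:
      # Build a minimal DNS query: random transaction ID, RD=1, one A question.
      txid = struct.unpack("!H", os.urandom(2))[0]
      header = struct.pack("!HHHHHH", txid, 0x0100, 1, 0, 0, 0)
      qname = b"".join(bytes([len(p)]) + p.encode() for p in name.split(".")) + b"\x00"
      question = qname + struct.pack("!HH", 1, 1)  # QTYPE=A, QCLASS=IN
      with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as s:
          s.settimeout(timeout)
          s.sendto(header + question, (server, 53))
          data, _ = s.recvfrom(512)
      # A reply with a mismatched ID would be ignored and the query retried,
      # which is the kind of retry loop I have in mind.
      if struct.unpack("!H", data[:2])[0] != txid:
          raise ValueError("transaction ID mismatch; real clients drop this and retry")
      return data

  query_a_record("example.com")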


I'm curious to know if the situation has evolved since 2016 or is still the same.


I don't believe so. I wrote a browser plugin that pings a time-based DNS entry, similar to what the article does, to alert me if someone is inspecting my traffic and to show from where. I have seen hits from Singapore and other locations. Sometimes it's a web-filtering vendor or other security-tools company, days or weeks later. Sometimes I can't determine who the source is, and I avoid using that network in the future.
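
The client side is almost nothing: resolve a hostname that encodes the current time (and a tag for where I am) under a zone whose authoritative server and logs I control, then watch for anyone re-querying or visiting that exact name later. A stripped-down Python sketch of the idea (canary.example.com is a placeholder for my zone):

  import socket, time

  CANARY_ZONE = "canary.example.com"  # placeholder; you'd run its authoritative server

  def fire_canary(tag: str) -> str:
      # e.g. "1719871234.office-wifi.canary.example.com"
      host = f"{int(time.time())}.{tag}.{CANARY_ZONE}"
      try:
          socket.gethostbyname(host)
      except socket.gaierror:
          pass  # NXDOMAIN is fine; the query still lands in the zone's logs
      return host

  fire_canary("office-wifi")

Any later lookup of that exact label, days or weeks on, can only have come from something that recorded the original DNS traffic.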


Can you elaborate on your canary system? I speculate you would need to set up your own DNS. For correlating subsequent inspection, you would need to allocate some honeypot addresses in an IP(v6?) prefix and capture that traffic. Logging requests to unique subdomains on a webserver you control would be another quickly built, if limited, mechanism (see the sketch below). Embedding prepared URLs in email comes to mind as another method of counter-reconnaissance along the whole delivery path, though you would want to inform the legitimate recipient. Thanks for the ideas.
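
For the webserver variant I mean something like this minimal Python sketch: wildcard a zone at one box and log the Host header and source address of whatever shows up (assumes *.canary.example.org, a placeholder, already resolves to the machine running this):

  from http.server import BaseHTTPRequestHandler, HTTPServer

  class CanaryHandler(BaseHTTPRequestHandler):
      def do_GET(self):
          # The Host header tells you which one-off subdomain was replayed,
          # and client_address tells you from where.
          print(f"hit: host={self.headers.get('Host')} path={self.path} "
                f"from={self.client_address[0]}")
          self.send_response(204)  # no content; the log line is the point
          self.end_headers()

  HTTPServer(("0.0.0.0", 8080), CanaryHandler).serve_forever()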



