Hey Soylent community,
I’m thrilled to announce that we’ve successfully migrated our DNS to PDNS! This marks the final step in our infrastructure overhaul. Today, I pushed the big red button and turned off all the old systems. We are now operating 100% on the new infrastructure.
This transition brings us into uncharted territory, but so far, everything seems to be working perfectly. The new setup promises enhanced performance and reliability, and I’m optimistic about the improvements it will bring.
Thank you all for your patience and support throughout this process. Your feedback has been invaluable. If you have any questions or notice anything unusual, please let me know in the comments or on IRC.
Here’s to a smoother, faster, and more reliable SoylentNews!
(Score: 3, Informative) by kolie on Sunday November 10, @12:45AM (2 children)
UDP listening works fine; the issue is that DNSViz is exclusively homed on HE's v6 network, and our v6 is currently routed via Cogent. Welcome to the world of peering disputes.
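If anyone wants to check reachability from their own vantage point, here's a rough Python sketch using dnspython (the zone name is just an example, not our actual tooling): it resolves the NS set and fires a plain UDP SOA query at every v4 and v6 address. If the v6 probes time out for you while v4 answers fine, you're probably on the wrong side of the same peering mess.

```python
# Rough sketch, not our actual tooling: probe each authoritative server
# over plain UDP on both address families using dnspython.
import dns.message
import dns.query
import dns.resolver

ZONE = "soylentnews.org"  # example zone name

def answers_udp(server_ip):
    """Send a UDP SOA query and report whether the server answered at all."""
    query = dns.message.make_query(ZONE, "SOA")
    try:
        dns.query.udp(query, server_ip, timeout=3)
        return True
    except Exception:
        return False

for ns in dns.resolver.resolve(ZONE, "NS"):
    host = ns.target.to_text()
    for rdtype in ("A", "AAAA"):
        try:
            addrs = [r.address for r in dns.resolver.resolve(host, rdtype)]
        except Exception:
            addrs = []  # no address of this family, or the lookup failed
        for addr in addrs:
            status = "ok" if answers_udp(addr) else "NO ANSWER"
            print(f"{host} {rdtype} {addr}: {status}")
```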
What happened was (and you can look back through DNSviz's history going quite far back) that DNSSEC was in a quasi-weird state. We were operating with two NS records pointing to the same server, helium, and then running Linode as five secondaries. There were DNSSEC records in the actual zone file, and while they existed they may not have been marked for publication or actually served; I'm not entirely certain at this point what the prior state was. I just know that when we turned it on there were several DNSKEY values, and the one in play when I officially started enabling DNSSEC on our side was not the one being used to sign the zone; there was a third signature besides the one that had sat unused in the old BIND install.

I had enabled DNSSEC at the registrar, but since it didn't correspond to anything I had, I disabled it again and removed all the keys. I manually wiped everything DNSSEC-related from the zone, cycled PDNS, and did a from-scratch implementation. Total time once I identified the cluster was about 15 minutes: ten to clean up, five to fix the domains and roll out new zones.
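If you want to run the same sanity check yourself, here's a rough dnspython sketch (the zone name is an example, and it assumes SHA-256 DS digests): it derives DS values from the DNSKEYs the zone is actually serving and compares them to what the parent is publishing. In the state we were in, nothing lined up.

```python
# Rough sketch, assuming SHA-256 DS digests: compare the DS set the parent
# publishes against DS values derived from the zone's live DNSKEY RRset.
# (Assumes both RRsets exist; dns.resolver raises NoAnswer otherwise.)
import dns.dnssec
import dns.name
import dns.resolver

ZONE = dns.name.from_text("soylentnews.org")  # example zone name

dnskeys = dns.resolver.resolve(ZONE, "DNSKEY")
parent_ds = {ds.to_text() for ds in dns.resolver.resolve(ZONE, "DS")}

# DS digests computed from keys with the SEP bit set (the KSK candidates).
derived_ds = {
    dns.dnssec.make_ds(ZONE, key, "SHA256").to_text()
    for key in dnskeys
    if key.flags & 0x0001
}

if parent_ds & derived_ds:
    print("At least one parent DS matches a published DNSKEY; the chain can validate.")
else:
    print("No parent DS matches any published DNSKEY; validation is broken.")
```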
We have an actual secondary turned on now as well, and I think the servers got different information initially, which amplified the issue. The current master had been a secondary for helium, and the second secondary referenced the first one I set up, because that will be its long-term configuration.
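A quick way to spot that kind of split is to ask every listed NS for the SOA directly and compare serials; here's a small dnspython sketch (again, the zone name is just an example). Mismatched serials mean the servers are handing out different copies of the zone.

```python
# Rough sketch: query every listed NS directly for the zone's SOA and
# compare serials to catch servers that are serving different zone data.
import dns.message
import dns.query
import dns.rdatatype
import dns.resolver

ZONE = "soylentnews.org"  # example zone name

serials = {}
for ns in dns.resolver.resolve(ZONE, "NS"):
    host = ns.target.to_text()
    try:
        addr = dns.resolver.resolve(host, "A")[0].address
        resp = dns.query.udp(dns.message.make_query(ZONE, "SOA"), addr, timeout=3)
        soa_sets = [rrset for rrset in resp.answer if rrset.rdtype == dns.rdatatype.SOA]
        serials[host] = soa_sets[0][0].serial if soa_sets else None
    except Exception as exc:
        serials[host] = f"error: {exc}"

print(serials)
unique = {v for v in serials.values() if isinstance(v, int)}
if len(unique) > 1:
    print("Serial mismatch: at least one server is handing out a different copy of the zone.")
```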
I come from the AD world and we have a mantra there, but it actually applies everywhere and is very apt here:
"It is DNS. It is always DNS."
(Score: 0) by Anonymous Coward on Sunday November 10, @04:17AM
Are they still screwing with each other? I remember reading about it in 2011 and thinking it had gone on too long then. Jesus. The longest one we've ever had lasted less than a year.
(Score: 1) by pTamok on Monday November 11, @10:31AM
Except when it is the cabling.
Failover between diverse firewalls didn't actually fail over. Turns out a patch cable was bad *and nobody noticed*.
That little clip that always gets broken off on Ethernet cables without boots: turns out it's important. This is usually an end-user problem, because when you first put the cable into the socket it works, but it works its way out of the socket relatively quickly, and then it's "My Internet isn't working!!!".
Line errors that couldn't be tracked down: a bad patch cable (or maybe an oxidised connection).
I've also had a bad *moulded* power cable that, upon investigation, turned out to be incorrectly wired. That could have killed someone, but luckily didn't.
Intermittent connectivity errors are a pain to diagnose.