posted by
NCommander
on Tuesday April 01 2014, @12:00PM
from the there-was-much-rejoicing dept.
As part of our drive toward a brighter, sunnier future, we've decided to disconnect IPv4 on our backend and go single-stack IPv6. Right now, as you read this post, you're connected to our database through shiny 128-bit IP addressing that is working hard to process your posts. For those of you still living in the past, we'll continue to publish A records, which will allow a fleeting glimpse of a future without NAT. Believe it or not, we're actually serious about this one.
We're not publishing AAAA records on production just yet, as Slash has a few minor glitches when it gets an IPv6 address (they aren't converted into IPIDs correctly), though we are publishing an AAAA record on dev. With one exception, all of our services communicate with each other over IPv6.
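If you want to check for yourself which records a host publishes, here's a minimal sketch that asks the resolver for both address families. The dev hostname below is an assumption on my part, not something stated above; substitute whichever host you want to probe:

```python
import socket

def published_records(hostname):
    """Return the address families a hostname resolves to.

    A records come back as AF_INET (IPv4); AAAA records as AF_INET6 (IPv6).
    """
    families = set()
    for family, _, _, _, sockaddr in socket.getaddrinfo(hostname, None):
        if family == socket.AF_INET:
            families.add("A (IPv4): " + sockaddr[0])
        elif family == socket.AF_INET6:
            families.add("AAAA (IPv6): " + sockaddr[0])
    return families

# "dev.soylentnews.org" is a guess at the dev box's name; production
# should show only A records for now, while dev also shows an AAAA.
for line in sorted(published_records("dev.soylentnews.org")):
    print(line)
```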
Perhaps I will write an article about our backend and the magical things that happen there :-).
(Score: 2) by Nerdfest on Tuesday April 01 2014, @02:33PM
What made you think of doing this? What are the advantages? Is this common in other setups?
(Score: 1) by paulej72 on Tuesday April 01 2014, @02:46PM
Linode does not charge for IPv6 traffic inside their network, so it made sense to put all of the backend traffic on IPv6 and save our network quota for use on the frontend.
Team Leader for SN Development
(Score: 2) by NCommander on Tuesday April 01 2014, @03:19PM
It's not a common setup, to say the least, and our sanity was questioned over it. One of our sysops guys was on vacation while we did this, and when he came back his response was basically "WTF?". The problem is that our offsite backup is in a data center in France, and with the old IPv4 setup we were looking at the possibility of having to run a VPN and NAT. We could have gotten around it with creative firewalling and stupid DNS tricks, but I was sick of dealing with those from a previous job. Furthermore, I'd like us to have mirrors in multiple data centers across the world, and IPv6 addressing means that no matter where a node is, it can always reach another node at a consistently known IP address, and rDNS/DNS *just work*. No stupid hacks, no insane iptables routes. It Just Works.
It might be kinda extreme, but it puts us very much ahead of the curve on such things, and as an end result our network is extremely nice to work with due to the way it's set up (I've blown a couple of minds with how we do single sign-on/LDAP SSH/etc.).
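To make the "It Just Works" bit concrete, here's roughly what node-to-node traffic looks like when there's no NAT or VPN in the way: you connect straight to the other node's global address, from anywhere. The address below is a placeholder from the 2001:db8::/32 documentation range, and the rsync port is just a plausible stand-in for whatever the backup actually speaks:

```python
import socket

# Placeholder global IPv6 address of a backup node in another data center.
# With end-to-end IPv6 the same address is reachable from every site,
# so there is no NAT traversal, port forwarding, or VPN tunnel to manage.
BACKUP_NODE = "2001:db8::1"  # documentation prefix, not a real node
PORT = 873                   # rsync; an assumption about the backup protocol

with socket.socket(socket.AF_INET6, socket.SOCK_STREAM) as s:
    s.settimeout(5)
    s.connect((BACKUP_NODE, PORT))
    print("connected to", s.getpeername())
```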
Still always moving