posted by NCommander on Tuesday April 01 2014, @12:00PM
from the there-was-much-rejoicing dept.
As part of our push toward a brighter and sunnier future, we've decided to disconnect IPv4 on our backend and go single-stack IPv6. Right now, as you read this post, you're connected to our database through shiny 128-bit IP addressing that is working hard to process your posts. For those of you still in the past, we'll continue to publish A records, which will allow a fleeting glimpse of a future without NAT. Believe it or not, we're actually serious about this one.

[Image: Linode IPv6 graph]

We're not publishing AAAA records on production just yet, as Slash has a few minor glitches when it gets an IPv6 address (they don't turn into IPIDs correctly), though we are publishing an AAAA record on dev. With one exception, all of our services communicate with each other over IPv6.
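For the curious, one quick way to see which records a name is publishing is a getaddrinfo probe. A minimal sketch in Python; the dev hostname is an assumption for illustration:

import socket

# Probe which address families a hostname resolves to. Production
# currently publishes only A records; dev also has an AAAA record.
for host in ("soylentnews.org", "dev.soylentnews.org"):  # dev name assumed
    for family, rtype in ((socket.AF_INET, "A"), (socket.AF_INET6, "AAAA")):
        try:
            infos = socket.getaddrinfo(host, 443, family, socket.SOCK_STREAM)
            addrs = sorted({info[4][0] for info in infos})
            print(f"{host}: {rtype} -> {addrs}")
        except socket.gaierror:
            print(f"{host}: no {rtype} record")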

Perhaps I will write an article about our backend and the magical things that happen there :-).
 
  • (Score: 3, Informative) by NCommander (2) Subscriber Badge <michael@casadevall.pro> on Tuesday April 01 2014, @01:36PM (#24136) Homepage Journal

    Funny you should bring up OpenAFS, as we considered it as a method to deploy slash to the web frontends (basically, have one box be an OAFS master, and the webheads replicate locally so we can update once and deploy everywhere). The main reason we dumped IPv4 is that we got into the rather silly situation of having to run NAT/VPN on our staff box so we could suck up backups easily (due to our firewall setup, you can only get into our internal cluster through one point).

    While OAFS is shiny, it's a fucking PITA to set up, and I've got concerns about its fragility (we've got kerberos, but if our internal BIND takes a crap, kerberos stops working, which breaks OAFS). We're probably going to go NFSv4 with replicas to make this work, or cobble something out of rsync. Worst case scenario, we'll update nodes one by one (backwards compatibility on DB schemas makes this relatively easy).
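    If we do end up cobbling it out of rsync, the shape would be something like this sketch (hostnames and paths here are made up for illustration):

    import subprocess

    # Push the slash tree from the staff box to each webhead.
    # Hosts and paths are hypothetical, not our actual layout.
    WEBHEADS = ["web01.example.org", "web02.example.org"]
    for host in WEBHEADS:
        subprocess.run(
            ["rsync", "-az", "--delete", "/srv/slash/", f"{host}:/srv/slash/"],
            check=True,  # stop the deploy if any webhead fails
        )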

    I ran through the list of services we run and decided to go full monty on this, making IPv4 a legacy technology. Here's specifically what we're running IPv6-only (a minimal sketch of the listener pattern follows the list):

    • LDAP (TLS)
    • Kerberos (though this required making IPv6 rDNS work, which is a PITA; see the reverse-pointer sketch below)
    • icinga (with some homemade patches to do kerberosized SSH)
    • varnish (connects to Apache via IPv4, but relays the inbound IP)
    • nginx
    • OpenSSH
    • bacula
    • postfix (IPv4/IPv6 dualstack for the main server; emails from slash->world get to the MTA via IPv6)
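    For anyone playing along at home, the pattern these services follow is a plain IPv6-only listener; a minimal sketch (the port is arbitrary, not our actual config):

    import socket

    # Bind an IPv6-only listener. IPV6_V6ONLY=1 refuses v4-mapped
    # connections, so IPv4 clients are shut out entirely.
    srv = socket.socket(socket.AF_INET6, socket.SOCK_STREAM)
    srv.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
    srv.setsockopt(socket.IPPROTO_IPV6, socket.IPV6_V6ONLY, 1)
    srv.bind(("::", 8080))  # all IPv6 interfaces, arbitrary port
    srv.listen(5)
    print("listening on [::]:8080, IPv6 only")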

    I'm probably forgetting a couple of things, but these were the major ones. Aside from our mystery service (which we'll announce later today) and Apache 1.3, our migration was seamless, and we can now have our clouds interconnect without needing to NAT.
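    On the rDNS point above: IPv6 PTR records live under ip6.arpa with every nibble of the 128-bit address spelled out backwards, which is most of the pain. A quick illustration with a documentation-prefix address (not one of ours):

    import ipaddress

    # The stdlib can generate the ip6.arpa name for you.
    addr = ipaddress.ip_address("2001:db8::1")
    print(addr.reverse_pointer)
    # -> 1.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.8.b.d.0.1.0.0.2.ip6.arpa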

    --
    Still always moving
  • (Score: 2) by VLM (445) on Tuesday April 01 2014, @02:33PM (#24197)

    "While OAFS is shiny, its a fucking PITA to setup"

    Oh, it's not that bad. Google spinlocksolutions and AFS. Obviously start following the tutorials with LDAP, then kerberos, then AFS. The tutorials are extremely long because of endless screencaps and tests/experiments; the actual work required is pretty minimal. My puppetmaster has a couple of files, maybe a screen of manifest instructions, and that's about it. It really does make life easy in the long run.

    "but if our internal BIND takes a crap, kerberos stops working which breaks OAFS"

    That is true; I did end up with a ridiculous amount of replication. Multiple LDAP servers, multiple BIND servers, etc. In the physical world this is cheap/free, but I can totally see that in the virtual/cloudy world, where each virtual machine costs $$$$ and every bit/cycle is accounted for, this becomes a bit of a scaling/financial issue. Every 24x7 machine I have is a primary for exactly one thing and a secondary for as many other things as I can set up.

    The biggest annoyance I have with AFS at home is the eternal battle between cron and AFS (really, kerberos)... they just don't conceptually get along very well.

    Mystery service that doesn't like NAT... let me guess: it involves the SIP protocol? SIP doesn't like NAT very much. Or let me guess: minecraft.soylentnews.org?

    • (Score: 2) by NCommander (2) Subscriber Badge <michael@casadevall.pro> on Tuesday April 01 2014, @02:42PM (#24204) Homepage Journal

      I'll take your word for it. We're still undecided on the filesystem issue, but it looks like IPv6 support still hasn't landed in OpenAFS, and I'd rather not reintroduce IPv4 back into our BIND instance. We're going to glue the sysops' heads together at some point this month and discuss it more in depth.

      As for cron and kerberos, keytabs are a wonderful thing; we use kerberosized SSH for our cron services so we don't have to deal with SSH authorized_keys madness (we have a backported OpenSSH on the server which can pop a key from LDAP, which we use for staff gaining access to the network and for the SSH proxy), and kerberos lets us keep one central authentication list (a sketch of the keytab trick is below). We've got master/slave KDCs set up, and BIND is replicated, though we haven't tested failover (yet). LDAP isn't, mostly because slapd is a fucking pig to set up (they threw out a perfectly sane config file in favor of putting everything in LDAP, and then poorly documented it to boot!), but all the services are using local accounts, so the site itself will stay up if LDAP takes a shit on us.
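      For anyone fighting the cron-vs-kerberos battle, the keytab trick looks roughly like this; the principal and paths are made up for illustration:

      import subprocess

      # Grab a fresh ticket non-interactively from a keytab...
      subprocess.run(
          ["kinit", "-k", "-t", "/etc/cron.keytab", "backup/staff.example.org"],
          check=True,
      )
      # ...then GSSAPI ssh rides the ticket cache, no authorized_keys needed.
      subprocess.run(
          ["ssh", "-o", "GSSAPIAuthentication=yes", "staff.example.org", "run-backup"],
          check=True,
      )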

      As for our IPv4-only service, you'll have to wait and see. Trust me, I think you'll approve of this (and I plan to write patches to bring it to IPv6 sooner or later).

      --
      Still always moving