
posted by martyb on Saturday November 25 2017, @09:47PM
from the Open-and-shut-case? dept.

http://blog.backslasher.net/ssh-openvpn-tunneling.html

The Story

I was asked to take care of a security challenge: set up Redis replication between two VMs over the internet.
The VMs were on different continents, so I had to keep the bandwidth impact to a minimum. I thought of 3 options:

        stunnel, which tunnels TCP connections via SSL
        SSH, which offers TCP tunneling over its secure channel (amongst its weaponry)
        OpenVPN, which is designed to encapsulate, encrypt and compress traffic between two machines

I quickly dropped stunnel because its setup is nontrivial compared to the other two (no logging, no init file...), and decided to test SSH and OpenVPN.
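
(For the record, the stunnel route would have amounted to a client-side config along these lines. This is only a sketch; the service name, ports, and hostname are my assumptions, not from the original post.)

        ; hypothetical stunnel client config: wrap the local Redis port in TLS
        ; and ship it to a remote stunnel instance in front of the master
        client = yes

        [redis]
        accept  = 127.0.0.1:6379
        connect = redis-master.example.com:6390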
I was sure that, when it came to speed, OpenVPN would be the best, because:

        The first Google results say so (and they even look credible)
                http://superuser.com/a/238801
                http://security.stackexchange.com/a/68367
                http://support.vpnsecure.me/articles/tips-tricks/comparison-chart-openvpn-pptp-ssh-tunnel
        Logic dictates that SSH tunneling will suffer from TCP-over-TCP, since SSH runs over TCP, while OpenVPN, being VPN software, is designed solely to move packets from one place to another (its transport is plain UDP by default; see the sketch just below this list).
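
For reference, a point-to-point OpenVPN tunnel can be brought up in static-key mode with very little ceremony. This is only a sketch, and the hostname, key file name, and tunnel addresses are assumptions of mine:

        # Generate a shared secret and copy it to both machines:
        openvpn --genkey --secret static.key

        # Server side: tun device, static key, UDP transport (the default):
        openvpn --dev tun --ifconfig 10.8.0.1 10.8.0.2 --secret static.key

        # Client side: mirror the tunnel addresses and point at the server:
        openvpn --remote vpn-server.example.com --dev tun \
            --ifconfig 10.8.0.2 10.8.0.1 --secret static.key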

I was so sure of that that I almost didn't test. But after testing, the results showed that as long as you only need one TCP port forwarded, SSH is the much faster choice, because it has less overhead. I was quite surprised.
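
In concrete terms, the winning setup boils down to a single SSH port forward. The sketch below is a reconstruction under assumptions of mine; the hostname, user, and port numbers are not from the original post:

        # On the replica VM: forward local port 6380 to Redis on the master.
        # -N = run no remote command, -f = background after authenticating,
        # -C = compress, since bandwidth was the concern here.
        ssh -f -N -C -L 6380:127.0.0.1:6379 user@redis-master.example.com

        # Point the replica at the local end of the tunnel (Redis 4.x syntax):
        redis-cli SLAVEOF 127.0.0.1 6380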


Original Submission

 
  • (Score: 3, Interesting) by KiloByte (375) on Saturday November 25 2017, @10:31PM (#601527)

    If you care about speed, you always benchmark. Case in point: the ATAoE stuff keeps advertising very low overhead, both CPU and bandwidth, yadda yadda. Thus, it should win handily over NBD, right? Especially if you hit a bug in NBD that makes it work only over IPv6 for you, for a total overhead of 60 bytes per packet more than ATAoE. I.e., in theory, NBD should get at most 96% of ATAoE's speed, right?
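
    (Unpacking that 96% figure, as I read it: NBD over IPv6 pays roughly an extra IPv6 header of 40 bytes plus a TCP header of 20 bytes on each packet, and against a standard 1500-byte Ethernet MTU that 60-byte delta is 60/1500 = 4% of every frame, leaving about 96% of the wire capacity for payload.)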

    Test setup:
    Server: QNAP253a, rotating rust, Debian stretch.
    Client 1: Pine64 with the dwmac-sun8i network driver, Debian unstable.
    Client 2: my regular amd64 desktop, Debian unstable.
    All nodes have 1 GbE; the clients were tested separately. Same Ethernet segment, one hub between the machines (two more hubs exist in the house). The network was quiescent (i.e., nothing more than ssh keepalives and the like). All tests were repeated at least 5 times, and no real variance was noticed. vblade vs. nbd-server.
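
    (A rough sketch of what such a linear-transfer test can look like; the device paths, hostname, and transfer size below are my assumptions, not the commenter's actual commands:)

        # Server: export the same disk over NBD and over ATAoE.
        nbd-server 10809 /dev/sda
        vblade 0 0 eth0 /dev/sda

        # Client, NBD side: attach the export, then time a linear read.
        nbd-client server.example.com 10809 /dev/nbd0
        dd if=/dev/nbd0 of=/dev/null bs=1M count=4096 iflag=direct

        # Client, ATAoE side: discover exports (device shows up as e<shelf>.<slot>).
        aoe-discover
        dd if=/dev/etherd/e0.0 of=/dev/null bs=1M count=4096 iflag=direct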

    In linear transfers, NBD gets a solid 106 MB/s both read and write; ATAoE gets 40 MB/s. Random transfers are obviously slower, but with about the same ratio (exact numbers vary by test pattern).

    Both protocols used their software's defaults, and naive changes to the available knobs didn't reveal any obvious problems with ATAoE. At that point, I stopped testing and went with NBD.

    --
    Ceterum censeo systemd esse delendam.