Don't complain about lack of options. You've got to pick a few when you do multiple choice. Those are the breaks.
Feel free to suggest poll ideas if you're feeling creative. I'd strongly suggest reading the past polls first.
This whole thing is wildly inaccurate. Rounding errors, ballot stuffers, dynamic IPs, firewalls. If you're using these numbers to do anything important, you're insane.
This discussion has been archived.
No new comments can be posted.
scp has been deprecated since early 2019 with OpenSSH 8.0:
The scp protocol is outdated, inflexible and not readily fixed. We
recommend the use of more modern protocols like sftp and rsync for
file transfer instead.
However, big transfers can be terribly slow over 100Mb/s lines. Using 1000Mb/s is much better, but as the data sets approach terabytes even that is too slow, and it's time to consider shipping a removable drive or two.
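For the common case, something like either of these is a reasonable drop-in for an scp one-liner (the host and paths are placeholders, not from the comment above):

    rsync -avh --progress /data/big.iso user@server:/incoming/
    sftp user@server    # then at the sftp prompt: put /data/big.iso /incoming/

rsync has the added benefit of resuming interrupted transfers and skipping files that are already up to date.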
-- Money is not free speech. Elections should not be auctions.
(Score: 3, Insightful) by JoeMerchant on Tuesday September 09, @04:19PM
(2 children)
(Score: 2, Insightful) by Anonymous Coward on Tuesday September 16, @03:21AM
by Anonymous Coward
on Tuesday September 16, @03:21AM (#1417352)
I mainly use sftp rather than scp because it seems marginally more convenient with FileZilla, but honestly, I'm not sure there's much of a difference. That being said, I'll also use sshfs from time to time, which seems to use sftp for the actual transfers. But, really, just about anything other than the horrible SMB BS that MS has been pushing for decades. It sucks less than it used to, but it's inconvenient, and I still remember when it completely destroyed my MP3 collection by reassembling the files out of order on a previous version, and I don't have any faith that newer revisions take more care than that. I really wish that MS would just give up on it and adopt sshfs as the way of doing filesharing on Windows.
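For what it's worth, a minimal sketch of the kind of sshfs mount described here (paths are placeholders; on Linux the unmount is done with fusermount):

    mkdir -p ~/mnt/remote
    sshfs user@server:/srv/share ~/mnt/remote -o reconnect
    # copy files with whatever local tool you like, then:
    fusermount -u ~/mnt/remote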
I don't recall you complaining or even offering support when I was fighting for aristarchus' right to express his views in his own journal. During the period of 2018-2022 in particular, I had lots of people saying he should be banned. But why - he had only expressed his views in his journal? That is exactly what it was for. I know he would rather have had them published on the front pages, but that is not the purpose of the front pages. This site is apolitical.
I certainly didn't agree with him. It didn't make me an ardent left-winger or a communist just because I argued for his right to be heard.
Learn to write ;) fascist pig
Yet, because you didn't read what I wrote, you have made an idiot of yourself. You assumed that you were being accused of creating the site. I said no such thing. Your assumption says more about you than it does about me. And now that the boot is on the other foot and the right-wingers have all got accounts and can create journals, you feel that I should be trying to silence them. You want me to moderate them on your behalf. You want them to be banned. They are using their journals just as aristarchus used his: to express their personal views. I don't agree with them either. He was never penalised for having those views. His ban was for doxxing another community member.
You have made 4 comments so far today (CET). Only one was your usual complaining which now qualifies as repetitive spam. It has been flagged. Other than this comment that I am responding to now, your comments have been accepted as being relevant (at least loosely if not constructively) as being on topic. However, it seems that the message is still not getting through to you. So in response to your insult of "fascist pig" (which I am not) I will call you an immature commie dickbutt. I guess that is what you consider to be an intelligent exchange of views?
So, I have been civil to you and I will have to see if you have the decency to respond in kind, or if you will continue with your spamming of polls and journals. The ball is in your court. I will not hold my breath in anticipation.
Deprecated to whom? I still use it just like I still use TCP Wrappers to secure services on some of my machines. That doesn't mean I don't also use fail2ban in some cases. Deprecated does not equal useless. I also use rsync and sftp. It depends on the case.
-- This page was generated by a Swarm of Roaming Elephants
The software's own developers consider scp obsolete; that's in the link above. One of the reasons is that there is no standard for it, or even a specification beyond the vague guidance to "do kind of what rcp did". The other is that it is broken (and kind of insecure) in ways that would require not just refactoring but changes that would fully break backwards compatibility, and backwards compatibility is the only reason to keep scp around.
Thus newer versions of OpenSSH make scp a wrapper around SFTP instead of using the old SCP protocol. The -O option can force scp to use the legacy protocol. However, the new default is to run SFTP underneath.
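In other words, on a recent OpenSSH (file name and paths are placeholders):

    scp bigfile user@server:/data/        # OpenSSH 9.0+: runs over SFTP by default
    scp -O bigfile user@server:/data/     # -O forces the legacy SCP protocol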
-- Money is not free speech. Elections should not be auctions.
Flags (assembled into a full command in the sketch below):
-a: a combination flag that preserves permissions, ownership, modification times, and symbolic links - standard for making an exact copy.
-v: show detailed output, listing which files are being transferred.
-z: compress file data during the transfer. This is a major optimization, but see the note below. It uses CPU resources on both ends to save network bandwidth.
-h: print numbers in a human-friendly format.
-P: crucial for large files - combines --progress and --partial.
(--progress: shows a progress bar for each file, so you know the transfer isn't stalled.
--partial: if the connection is interrupted, this keeps the partially transferred file on the destination. The next time you run the command, rsync will resume the transfer from where it left off instead of starting over.)
-e 'ssh ...': specifies the remote shell to use and allows passing optimized parameters to SSH itself.
'ssh -T': disables pseudo-terminal allocation, which can slightly reduce overhead for file transfers.
-c aes128-gcm@openssh.com: tells SSH to use a faster, modern encryption cipher. Older default ciphers can be a bottleneck. This cipher provides a great balance of security and speed.
-o Compression=no: explicitly disables SSH's built-in compression. Using both rsync's compression (-z) and SSH's compression is redundant and inefficient. It's better to let rsync handle it, as it can do so more intelligently.
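Putting those flags together, a minimal sketch of the full command (the host name and paths are placeholders, not from the original comment):

    rsync -avzh -P -e 'ssh -T -c aes128-gcm@openssh.com -o Compression=no' /data/bigfiles/ user@backuphost:/backup/bigfiles/

If the files are already compressed (video, archives, disk images), dropping -z usually helps, since compressing incompressible data just burns CPU on both ends.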
(Score: 0) by Anonymous Coward on Saturday October 04, @03:27AM
(2 children)
by Anonymous Coward
on Saturday October 04, @03:27AM (#1419446)
If you are going to use ssh parameters like that repeatedly, you are probably better off either setting a permanent configuration for ssh to use automatically for that host or setting up rsync to communicate directly over another transparent transport like stunnel.
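For example, a minimal sketch of a per-host entry in ~/.ssh/config (the alias and settings are illustrative assumptions, not prescriptions):

    Host backuphost
        HostName backup.example.com
        User backup
        Ciphers aes128-gcm@openssh.com
        Compression no

After that, a plain rsync -avhP /data/ backuphost:/backup/ picks those settings up automatically.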
(Score: 0) by Anonymous Coward on Thursday October 09, @10:41PM
by Anonymous Coward
on Thursday October 09, @10:41PM (#1420093)
That was a generic you, but scripting it works too. Although, if you reach the same server from multiple clients, I still think people are better off setting it up once on the daemon side.
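A rough sketch of the daemon-side approach, assuming an /etc/rsyncd.conf along these lines (the module name and path are invented for illustration):

    # /etc/rsyncd.conf
    [bigfiles]
        path = /srv/bigfiles
        read only = false

Clients then sync against the module directly, e.g. rsync -avhP /data/ rsync://server/bigfiles/, and the tuning lives in one place on the server. Note that rsyncd's native transport is unencrypted, which is where something like stunnel comes in.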
scp has been deprecated since early 2019 with OpenSSH 8.0:
The scp protocol is outdated, inflexible and not readily fixed. We recommend the use of more modern protocols like sftp and rsync for file transfer instead.
— OpenSSH 8.0 Release Notes [openssh.com]
You should read what you type later. They are NOT the same thing....
SCP protocol is obsolete. `man scp` tells you
Since OpenSSH 9.0, scp has used the SFTP protocol for transfers by default.
CAVEATS
The legacy SCP protocol (selected by the -O flag) requires execution of the remote user's shell to perform glob(3)
pattern matching. This requires careful quoting of any characters that have special meaning to the remote shell,
such as quote characters.
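For example, with the legacy protocol the remote shell does the glob expansion, so remote patterns and awkward filenames need an extra layer of quoting (filenames invented):

    scp -O 'user@server:/var/log/app/*.log' .          # quoted so the local shell doesn't expand the *
    scp -O "user@server:'name with spaces.txt'" .      # inner quotes protect the name from the remote shell

With the SFTP backend this double-quoting dance mostly goes away.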
Flagged Comment by Anonymous Coward
on Tuesday September 09, @10:59AM (#1416642)
(Score: 0, Insightful) by Anonymous Coward on Tuesday September 09, @11:05AM
(3 children)
by Anonymous Coward
on Tuesday September 09, @11:05AM (#1416643)
I, too, use Soylent News for transferring data. First, I encode the data with base64 and, if needed, split it into multiple sections so that it can fit within the length limits established by Rehash. Then I post the encoded data as comments to Soylent News. When I'm ready to decode them on another computer, I access the same comments and decode them to get my original data back. In fact, I have a simple proof of concept.
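For the curious, the tongue-in-cheek pipeline being described is roughly this (chunk size and filenames are invented):

    base64 secret.tar.gz | split -b 60000 - chunk_     # encode, then cut into comment-sized pieces
    # paste each chunk_* file into a comment; on the other machine, reassemble:
    cat chunk_* | base64 -d > secret.tar.gz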
(Score: 0) by Anonymous Coward on Friday September 12, @12:44PM
by Anonymous Coward
on Friday September 12, @12:44PM (#1416946)
How, AC, does anyone know if that is a link to CSAM, or another attempt at doxxing between a group of users? Or maybe even somebody trying to exchange an account so that it can be abused?
If transferring files on the local LAN: rsync.
If transferring the files locally, but not network connected: external hard disk.
If transferring the files across the continent: multi-TB external SSD.
(Score: 2) by Unixnut on Sunday September 14, @03:04PM
(3 children)
Yeah I was going to come here and say the same thing. I assumed they meant "between computers or devices in the same household", so I put down rsync (but I use it over NFS rather than SSH, so a bit of a mix), but even in my home network there is variety.
For example, while pretty much everything is rsync over NFS at home, that does not include my mobile phone because Android is a PITA (I can't root my phone, otherwise all my work and banking apps cease to function), so for those I used to transfer via SD card, but now I have set up Nextcloud on my local server for syncing. Some of my machines only have 100Mbit/s connections, or are wifi only, in which case for high-data-transfer rates I just use an external HDD/SSD.
So just in my home network I can pick five of the seven options.
I have to say I was surprised to see some people said they still use optical media. It's been so long for me that I actually had to re-learn how to burn a CD recently in order to make a bootable installer (USB stick booting still seems a bit hit-and-miss on some systems), and I still have a pile of blank CD-Rs (and CD-RWs) from a good 20 years ago, with little idea what to do with them (I am loath to chuck away stuff, especially if it is new and never used, which is why I still have new-in-wrapping coloured minidiscs from the 90s sitting on a shelf).
(Score: 1, Insightful) by Anonymous Coward on Tuesday September 16, @03:45AM
by Anonymous Coward
on Tuesday September 16, @03:45AM (#1417354)
I recently upgraded my NVME drive, so the old one went into a cheap orico case, and the new one went in the new computer. Even with crappy USB2 from my old chromebook, I still get decent transfer speeds that beat most of my other options. For my multi-TB backups though, I just use a USB3 case with a regular HDD and just rotate between ones on premises and those in a different building.
On the Android phones, transferring to/from a PC, I am using the "FTP Server" APK, connecting to the Ipswitch FTP client on the PC side (very old software; still works on W95 thru W7), using wireless interfaces. I use my own private intranet to avoid conflicts with anything else. It's slow, but it eventually gets there.
"Private Intranet": Sounds fancy, but mine is just an old wireless access point that is not plugged into the internet. It still supports multiple wireless devices logging onto it and lets them see each other.
Does the equivalent of a wired connection between the RJ45 ports.
Phone-to-Phone via Wireless: Trebleshot (F-Droid). Via my private intranet.
On the Android phone, the "Asus File Manager" allows you to transfer files on a LAN over TCP/IP to a web browser at the other end.
-- "Prove all things; hold fast that which is good." [KJV: I Thessalonians 5:21]
I can't root my phone otherwise all my work and banking apps cease to function
Not strictly on-topic, but how crazy is this? In 2025 we are not allowed to use a device that we own to access our own bank accounts. "You may not access your own money unless you use our tracking device." Oh, and we also have to pay for the tracking device; the banks don't even give it to us for free.
I have fond memories of delivering tape when I was a teenager, looking forward to the time when I would be in the office doing cool computer stuff instead of just delivering computer stuff. At some point I worked at a place that had tape backup; but by then it was no longer the huge reels. It was a little cartridge that probably held a lot more data.
-- Appended to the end of comments you post. Max: 120 chars.
(Score: 0) by Anonymous Coward on Tuesday September 16, @03:53AM
(1 child)
by Anonymous Coward
on Tuesday September 16, @03:53AM (#1417358)
I've been considering getting an LTO system as I've got like 12TB of stuff, and having the 2 or 3 HDDs it takes to rotate the backups offsite properly is rather annoying. It would be nice to be able to rotate through 3 tapes. But the price tag is high enough that I'll probably just buy a couple more HDDs, back up to a mirrored ZFS setup, and use a 3rd disk mirrored to it to rotate to and from storage off site.
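A rough sketch of that rotation with ZFS (pool and device names are invented; wait for the resilver to finish before detaching):

    zpool create backup mirror /dev/sda /dev/sdb    # the two disks that stay on site
    zpool attach backup /dev/sda /dev/sdc           # add the rotation disk as a third mirror
    zpool status backup                             # wait until resilvering completes
    zpool detach backup /dev/sdc                    # pull it and take it off site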
You can buy a second-hand LTO3 drive for under £100 (LTO4 might be a better choice for you) and buy a pack of five tapes. (See eBay for prices.)
You need to have four tapes in active use: Grandfather, father, son, NEXT, and also a spare in case one dies. *
I keep the father tape offsite (take the new father offsite, return with the former father as the new grandfather). Every six months the grandfather tape is archived, the spare becomes NEXT, and a couple of new tapes are bought.
Note that you should ONLY EVER use tar to write tapes, and ABSOLUTELY NEVER use proprietary backup software, or there is a good chance your old tapes will be unreadable when you need them.
The choice of GTAR or BSD tar is up to you (unless forced by your employer). You can probably be quite selective in what you choose to tar. Do not archive your OS - it is faster to reinstall a fresh one. Keep your immutable data (eg porn) on a separate set - you don't need to keep writing it to new tapes.
Obviously - do not use Windows if you value your data. I recommend OpenBSD for servers, but YMMV.
I have been doing this since LTO was invented, and before that, same procedure, going back to 556bpi 7-track tape on ICL1905 and CDC7600.
None of this tape stuff removes the need for (offsite?) database write logging if financial data is involved.
* Remember: if you do have a tape wreck, it might be because the drive is screwed, and discovering this might take out another tape! (You can use the "NEXT" tape for a dry run to test your restore process).
Test your tape restore procedure at least annually - some software upgrade, somewhere, may shoot you in the foot!
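A bare-bones sketch of that tar-to-tape cycle on Linux (device and paths assumed; /dev/nst0 is the non-rewinding tape device):

    mt -f /dev/nst0 rewind
    tar -cvf /dev/nst0 /home /etc /srv/data       # write the backup
    mt -f /dev/nst0 rewind
    tar -tvf /dev/nst0                            # verify: list what actually landed on the tape
    mt -f /dev/nst0 rewind
    mkdir -p /tmp/restore-test
    tar -xvf /dev/nst0 -C /tmp/restore-test       # dry-run restore into a scratch directory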
-- Warning: Opening your mouth may invalidate your brain!
I have a wired home LAN or two or so with gigabit ethernet and Netgear switches. I am now backing up multi-hundred gigabyte disk images and VM images and using such things as rsync, dd and various compression tools.
Last year sometime I was working on some firmware to do SERDES and I was absolutely blown away by how fast it can go these days. 10Gbit ethernet really isn't that fast, and none of my home machines have anything faster than 1Gig. However, there's USB 3.x and I have a couple of external 2.5" USB 3.x 5TB drives for doing backups.
Even USB 3.2 is faster (allegedly) than 10Gbit/s ethernet. It's also cheap and everything has it these days. So the obvious question is, why can't I take one USB cable from one PeeCee and plug it into another and transfer data? Because it's Not That Simple(TM).
OK, so there's this thing called an Ethernet switch, so why can't I get a "USB switch"? I can get a USB hub, but it's a master/slave design. You can get things to plug into your phone to turn it from a slave/client into a master/controller.
So why can't I buy a thing with multiple USB 3.2 ports that I can plug several machines into and get them to speak at 10+ Gbit/s? I mean 20, 30, 40+ should be quite simple nowadays.
(Score: 4, Interesting) by Anonymous Coward on Wednesday September 10, @01:34AM
(2 children)
by Anonymous Coward
on Wednesday September 10, @01:34AM (#1416725)
You can connect two computers and transfer files using USB if you know what you are doing, but it isn't necessarily the easiest thing to do. If you are using USB-A or USB-B ports that are USB 2 or certain 3.0 controllers, there are two mutually exclusive ways to connect them. If they are 3.0 compliant, there are two more options in addition to those.
No matter what, you can use a protocol bridge. This device appears as a client peripheral to both hosts. The software can then be used to transfer between devices with the bridge acting as a relay, essentially. The speed there is limited by the device's capability.
The easiest is to use a special transfer cable that has the pinout correctly crossed to allow two host systems to connect without frying either. Then software allows transfers by treating each as the host and ignoring certain errors caused by two host devices sending out-of-spec data. Those errors do result in a slower transfer speed.
The second USB-A and USB-B approach is to use OTG (USB On-The-Go). There, a special pinout is used by standard cables to set which is the host (A device) and which is the client (B device). The computers can negotiate proper roles and send data that way in spec and at full speed.
If you are USB 3.0 and above, you could potentially use Dual Role Devices. If at least one controller is dual role, you can connect the two computers and then use software to communicate between them. This is much more likely to succeed using type C connectors, since those controllers are supposed to be dual role. If using an A or B connector you will be limited to 10 Gbps. If using type C and USB 3.2, then you can get the full 20+ Gbps.
Finally, if your computers are thunderbolt capable, you can use thunderbolt with a type C connector. This would give you around 32 Gbps (Thunderbolt 3) of speed at the expense of more complicated software stack.
Aren't the mixed standards of USB fun? Then you add in that not every piece of hardware is necessarily 100% compatible with what it should be. But with newer controllers and the right software and drivers, you can wade your way through the mud.
(Score: 4, Informative) by Anonymous Coward on Wednesday September 10, @10:47PM
(1 child)
by Anonymous Coward
on Wednesday September 10, @10:47PM (#1416806)
I looked and found the documentation I didn't find earlier for the software side. In Linux, you can use the Mass Storage [kernel.org] driver to make one device look like a USB mass storage device. There are other drivers that can be used to act like all kinds of gadgets. [kernel.org] You can even write your own using the gadget API. [kernel.org] There are similar drivers available for Windows, MacOS, and most *BSDs. So be careful with those USB sticks you plug in; they might actually be something else when plugged in.
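As an illustration, on a machine whose USB controller can act as a device (typical of single-board computers, not a normal PC host port), the mass-storage gadget can be set up roughly like this (backing file name and size are invented):

    dd if=/dev/zero of=/root/gadget.img bs=1M count=2048        # file the other computer will see as a disk
    modprobe g_mass_storage file=/root/gadget.img removable=1
    # plug the gadget port into the other computer; it enumerates as a USB drive
    modprobe -r g_mass_storage                                  # detach when finished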
(Score: 0) by Anonymous Coward on Sunday September 14, @06:17AM
by Anonymous Coward
on Sunday September 14, @06:17AM (#1417106)
This is completely off topic but I wanted to answer the question in your journal. The "size_t dsize;" part of the function declaration is the alternate syntax for parameter forward declaration. It is a compiler extension (GNU C) rather than standard C, but it is understood by most compilers that implement GNU extensions.
First, it works. Not just with the computers, but tablets and phones. Choices:
USB memory stick, SD card, or similar: I have an old very large USB drive that I used to use for backing up my data, but it ran out of room so I bought a bigger one. Now I'm putting all my DVDs on it; it stays plugged into the TV's USB unless I'm adding a new movie.
External hard drive: See "or similar," above.
Optical media (CD/DVD/Blu-ray): Obsolete, we have far bigger thumb drives now.
Network app (rsync, scp, etc.): File Manager, whatever the app developer calls it? I'm rarely at a text prompt any more.
Network file system (nfs, samba, etc.): A little redundant, but I guess you wanted to be thorough.
The "cloud" (Dropbox, Cloud, Google Drive, etc.): I guess if push came to shove I could use some of my hosted drive space, but if you use other people's servers, be sure to back your data up on your own servers! There's nothing on any of my web sites that isn't mirrored on my private network. Forty-three years of computing has taught me to never trust any device, especially someone else's device!
Email: Sure, if I'm moving a file to my daughter and the file is small enough.
Other (specify in comments): Subetheric transport modules (not yet invented)
-- Mad at your neighbors? Join ICE, $50,000 signing bonus and a LICENSE TO MURDER!
Flagged Comment by Anonymous Coward
on Wednesday September 10, @10:59PM (#1416808)
Flagged Comment by Anonymous Coward
on Thursday September 11, @10:15AM (#1416848)
(Score: 3, Insightful) by janrinok on Thursday September 11, @10:32AM
(40 children)
No, I didn't. You can't pass responsibility for something on to somebody else. The flagging is in response to the spamming. I am not responsible for the spamming. But you do know who is, don't you?
Flagged Comment by Anonymous Coward
on Thursday September 11, @07:39PM (#1416893)
Flagged Comment by Anonymous Coward
on Friday September 12, @07:50AM (#1416931)
Flagged Comment by Anonymous Coward
on Friday September 12, @09:48PM (#1416995)
Flagged Comment by Anonymous Coward
on Sunday September 14, @10:57AM (#1417115)
Flagged Comment by Anonymous Coward
on Sunday September 14, @11:01PM (#1417196)
Flagged Comment by Anonymous Coward
on Wednesday September 17, @08:06PM (#1417544)
(Score: -1, Spam) by Anonymous Coward on Thursday September 18, @12:45PM
(8 children)
by Anonymous Coward
on Thursday September 18, @12:45PM (#1417611)
The abuser censoring information he does not like. QED. Hope you live long enough for fascism to retake France, maybe then you'll regret silencing those speaking against it.
Your comment is marked as Spam because this is a poll about "When transferring multiple 100+ MB files between computers or devices...", and your comment is completely irrelevant. Once again you are spamming a discussion that does not require your childish comments.
I do not mind you speaking against fascism in the appropriate journal, where it would be on-topic. However, there are no journals wanting to discuss it. Get an account, create a journal, and off you go. If you insist on acting like an idiot, you will continue to be treated like an idiot, whether you are one or not.
Flagged Comment by Anonymous Coward
on Friday September 19, @12:21AM (#1417661)
Flagged Comment by Anonymous Coward
on Friday September 19, @08:33AM (#1417684)
Flagged Comment by Anonymous Coward
on Tuesday September 23, @12:46AM (#1418216)
(Score: 1, Insightful) by Anonymous Coward on Tuesday September 23, @03:12AM
(2 children)
by Anonymous Coward
on Tuesday September 23, @03:12AM (#1418230)
WAAAAAH! WAAAAAH! It's SO UNFAIR that I'm being FORCED to create an account and post my own journals to discuss arbitrary topics! MEANIE JANRINOK just won't STOP BULLYING ME! I'm SPECIAL and should be able to post OFF-TOPIC comments wherever I damn well please, and YOU ALL MUST STFU and ACCEPT IT! Forcing me to create an account where I can freely express my views in my own journal is CENSORSHIP! SO UNFAIR! WAAAAAH! WAAAAAH!
So I chose the 1st option, which said "something similar" to a USB drive. One could argue an external NVMe SSD is a fancy (albeit typically larger) USB drive. It connects via USB 3.0 and is a nice quick way to transfer big files. You don't need some special cloud service or any annoying kind of setup. You just plug it in and you're good. Even if your target doesn't have USB 3.0 (less and less likely), it's still an easy plug and play. It would just take longer at that point.
-- Joshua 1:9 "Be strong and of a good courage; be not afraid, neither be thou dismayed: for the Lord thy God is with thee"
(Score: 2) by krishnoid on Friday September 12, @03:01AM
(4 children)
I don't think that's even "arguably" a USB drive. A 5-bay enclosure for 3.5in rotational disks that connects via USB ... likely more of an argument.
Also, you could copy everything over from the USB drive, then use network rsync with its checksum option to double-check it, sync up any deltas, and fix up metadata. Works great.
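Something like this, assuming the bulk copy already happened via the USB drive (paths and host are invented):

    rsync -avhc --itemize-changes /data/source/ user@newbox:/data/    # -c re-checksums every file, resends any that differ, and fixes up metadata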
(Score: 1, Interesting) by Anonymous Coward on Tuesday September 16, @03:56AM
by Anonymous Coward
on Tuesday September 16, @03:56AM (#1417360)
I've got a funky 5-bay external SATA enclosure with 5 SATA cables that connect to it. I'll need to get some more ports for my computer, but it was under $100, and I'll just make a giant RAIDZ out of it. Given the state of software RAID these days, it seems like the way to go for me. I saw the USB ones, and I'm kind of skeptical that the performance would be there for that many disks.
An external NVMe enclosure with a USB 3.0 interface is essentially a fancy "USB Drive". (Sure, you also need an NVMe drive to put in the enclosure, but it's certainly much more similar to a USB Flash Drive than a 5-bay 3.5" monstrosity.) Example: https://www.amazon.com/UGREEN-Enclosure-Tool-Free-Thunderbolt-Compatible/dp/B09T97Z7DM [amazon.com]
An external NVMe USB Drive is more like 'ye olde USB Flash Drive than any rotational disk storage device. At least the external NVMe storage device is using similar technology for storage as the super slow old school USB Flash Drive.
-- Joshua 1:9 "Be strong and of a good courage; be not afraid, neither be thou dismayed: for the Lord thy God is with thee"
That said, I do have some external spinning-disk storage that would definitely fall into the "External hard drive" option. However, I use that as more of a "cold storage" backup. Whereas the external NVMe SSD enclosure that doesn't need an extra power brick to run is what I typically use to swap large files or many files from one device to another. That includes my phone to my computer.
-- Joshua 1:9 "Be strong and of a good courage; be not afraid, neither be thou dismayed: for the Lord thy God is with thee"
(Score: 0, Troll) by Anonymous Coward on Tuesday September 16, @04:54AM
(6 children)
by Anonymous Coward
on Tuesday September 16, @04:54AM (#1417362)
If we're going to be doing that, then perhaps we should be punishing the geniuses that came up with SMB. What sort of a file transfer protocol doesn't have provisions for reassembling the packets in order? I remember my MP3 collection getting badly corrupted during transfer back in the day just by transferring too much over SMB.
Flagged Comment by Anonymous Coward
on Tuesday September 16, @05:23AM (#1417363)
(Score: 0) by Anonymous Coward on Thursday September 18, @04:26AM
(4 children)
by Anonymous Coward
on Thursday September 18, @04:26AM (#1417585)
The answer should be all of them, since that is the concern of lower layers in the network stack. However, SMB, just like most of them, does support packet reassembly and reordering. If your network card is giving your computer bad data, or the driver is mangling data from the card (Ethernet card not verifying checksums?), that is on your card or driver manufacturer, not SMB. If your disk or memory isn't reliable, that isn't SMB's fault either. And the reason I'm pointing at bad hardware is that even if SMB gets packets out of order, it can still reassemble the data in the proper order because each packet is either individually sequenced or idempotent. Long story short, garbage data either came to SMB as garbage or was turned into garbage later by something not SMB related, precisely because SMB (NFS, AFP, etc.) was designed when networks were less reliable.
(Score: -1, Troll) by Anonymous Coward on Thursday September 18, @05:33PM
(3 children)
by Anonymous Coward
on Thursday September 18, @05:33PM (#1417635)
I don't agree; this was like 25 years ago, and by that point there had been like 5 or 6 major revisions of consumer Windows on top of the various NT releases since 1983 that could have come with a properly functioning software system. They did wind up releasing SMB2, but from what I can tell that didn't come until Vista, and I had no real reason to use or trust SMB by that point. Meanwhile, by the time that XP came out, they already had robocopy (going back to sometime in 1996) that they could very easily have used to work around the wonkiness of the SMB protocol they had at the time. It was just something that you're not likely to ever have heard about unless you specifically went looking for it.
It's a bit of a moot point whether you want to blame File Explorer for not having proper data integrity checks in place to verify copies, or SMB for being this stupid, but either way, SMB is just not something that I'd recommend anybody trust.
(Score: 0) by Anonymous Coward on Friday September 19, @05:44AM
(2 children)
by Anonymous Coward
on Friday September 19, @05:44AM (#1417675)
You disagree, and yet none of that affects how SMB, even back to version 1, works under the hood. And robocopy worked for all their network sharing protocols, not just SMB. In fact, I more often had to use robocopy with WebDAV than with SMB, thanks to the shitty phone lines I used to use. Oh well. It doesn't affect anyone else if you don't use the most widely available and used network file sharing protocol on the planet to share files with yourself.
(Score: -1, Troll) by Anonymous Coward on Saturday September 20, @02:32AM
(1 child)
by Anonymous Coward
on Saturday September 20, @02:32AM (#1417789)
It's still a broken protocol if it can't detect whether or not the pieces were assembled correctly. That may be acceptable in the enterprise space where you have professionals that are there managing things and can implement things on top to verify that, but in a home environment, it's a ludicrous assumption. Especially since MS had been colluding with Intel to release those god awful Wintel modems where they charged more for hardware that had fewer chips in it.
As far as this goes, it was an OK protocol 40 years ago, but even by the late '90s, budget hardware was being released and it was the option for sharing files between machines if they couldn't fit on floppies. They should have done the right thing and just abandoned SMB in favor of something like sFTP or at least properly updated it with security features and the ability to verify that the file on the other end is the one that was sent.
(Score: 0) by Anonymous Coward on Saturday September 20, @07:18AM
by Anonymous Coward
on Saturday September 20, @07:18AM (#1417801)
Then you are in a world of hurt, because almost all protocols have that problem out of the box, including HTTP, FTP, NFS, SFTP, and the list goes on. The only one that doesn't is rsync, which is part of the reason it is relatively slow. Ironically, SMB did have the ability to verify files sent to it, even in the 90s, just like you requested, but you probably had signing turned off.
Flagged Comment by Anonymous Coward
on Tuesday September 30, @07:59PM (#1419110)
Flagged Comment by Anonymous Coward
on Monday September 15, @07:58PM (#1417310)
Flagged Comment by Anonymous Coward
on Monday September 15, @09:49PM (#1417325)
(Score: 2, Insightful) by Anonymous Coward on Wednesday September 17, @03:39PM
by Anonymous Coward
on Wednesday September 17, @03:39PM (#1417509)
Sometimes I transfer from Computer A to my phone then from my phone to Computer B.
This is when USB stuff is blocked due to policies etc.
Of course, in theory transferring to/from phones should be blocked too (and in some places it is blocked). But most people usually don't bother to tell the relevant parties to get phones blocked because they want to actually get their jobs done.
After all, if phones get blocked, the various committees, teams, etc might not have yet prepared a practical way to actually get vendor updates or other required files to the systems.
Flagged Comment by Anonymous Coward
on Wednesday September 17, @09:59PM (#1417556)
(Score: -1, Troll) by Anonymous Coward on Sunday September 21, @08:05AM
(12 children)
by Anonymous Coward
on Sunday September 21, @08:05AM (#1417930)
Why have I been blocked?
This website is using a security service to protect itself from online attacks. The action you just performed triggered the security solution. There are several actions that could trigger this block including submitting a certain word or phrase, a SQL command or malformed data.
What can I do to resolve this?
You can email the site owner to let them know you were blocked. Please include what you were doing when this page came up and the Cloudflare Ray ID found at the bottom of this page.
(Score: 2, Insightful) by janrinok on Sunday September 21, @10:02AM
(4 children)
(Score: 1, Informative) by Anonymous Coward on Tuesday September 23, @04:45AM
(3 children)
by Anonymous Coward
on Tuesday September 23, @04:45AM (#1418234)
I know the grandparent comment is a troll either way, but I thought I'd add this for anyone else (including you) who may find it helpful. That error message occurs if you use Cloudflare as a proxy. Now, SoylentNews doesn't use Cloudflare as a reverse proxy, which is easy to verify by checking the routing of their traffic. However, that doesn't mean that the user isn't using them as a forward proxy. So if anyone here runs a website and sees spurious reports from users like that, it could be Cloudflare blocking the user because their WARP proxy is detecting the abuse. Which leads to an amusing conundrum: how much do you really want to help a user access your site when their actions are so bad or so obvious that they have been identified as abusive based on the minimum amount of data a forward proxy would have?
yet when I point out the issues regarding censorship on this site without calling you out you get angry
07:41Z
Fuck Runaway! Fuck him in his Christian ass!!! Why hasn't he been banned from this site, for doxxing and promoting genocide?
You wrote both of those comments, in different threads. You are getting your alter-egos confused. This is not the first time you have forgotten who you are pretending to be.
What do either of your comments have to do with the subject of the poll or journal where you have made them? What does Runaway's religion have to do with anything? Where is your evidence that he has doxxed somebody? Which of the site rules has he broken which would justify him being banned?
Get a grip of yourself - you are losing it. Better still, please go away and stop bothering people on this site.
I've got a home network with 2 NAS devices, so ext4 is probably the cumulative total, albeit for a lot of small files I don't even notice are moving from one filesystem to another.
But my camera, while it advertises wireless, wants me to use a Canon app. So I have to haul out an old USB 2.0 cable to transfer my files.
-- It was a once in a lifetime experience. Which means I'll never do it again.
Flagged Comment by Anonymous Coward
on Monday September 22, @06:47PM (#1418173)
Flagged Comment by Anonymous Coward
on Wednesday September 24, @08:16PM (#1418432)
Flagged Comment by Anonymous Coward
on Friday September 26, @02:57AM (#1418547)
Flagged Comment by Anonymous Coward
on Sunday September 28, @08:23PM (#1418897)
Flagged Comment by Anonymous Coward
on Monday September 29, @10:38AM (#1418948)
(Score: 2) by jasassin on Tuesday September 30, @12:13PM
(1 child)
(Score: 0) by Anonymous Coward on Wednesday October 01, @08:50AM
by Anonymous Coward
on Wednesday October 01, @08:50AM (#1419174)
I know the sleepers on semi trucks are larger than my first dorm room nowadays, but I find that there is usually much more room for our backup tapes in the semi-trailer. /s
Flagged Comment by Anonymous Coward
on Wednesday October 01, @07:36AM (#1419169)
Flagged Comment by Anonymous Coward
on Wednesday October 01, @10:30PM (#1419246)
Flagged Comment by Anonymous Coward
on Saturday October 04, @09:26PM (#1419516)
(Score: 0) by Anonymous Coward on Sunday October 05, @02:03AM
(3 children)
by Anonymous Coward
on Sunday October 05, @02:03AM (#1419524)
When I worked in academia and needed to move really large data sets from one file system to another -- and I mean distributed file systems with multiple petabyte storage capacities -- I generally used Globus. It was simpler and more robust than using something like sftp. Globus says it's cloud-based, but it doesn't really fit the cloud storage described in the poll. I wasn't uploading the data to cloud storage and then downloading it elsewhere, so it wasn't like Dropbox or Box. So where does Globus fit in the options?
(Score: 2) by janrinok on Sunday October 05, @02:24AM
(2 children)
(Score: 0) by Anonymous Coward on Sunday October 05, @03:10AM
by Anonymous Coward
on Sunday October 05, @03:10AM (#1419526)
I often try to think of clever responses, usually jokes, that don't quite fit the poll options. In this case, it's not a joke, but I'm not sure it really fits any of the options (except "other"). I know it's commonly used in academia in the US, and I used it when I worked in academia.
I can tell you how I used it. Let's say you have a cluster with a few petabytes of scratch space. There's a lot of storage, it's distributed, and it's going to have some redundancy built in. If you're running a job on the cluster, you'll generally read data from and write data to the scratch space. It's very fast and there's lots of space, but reliability isn't guaranteed, and it's really not feasible to make backups of the scratch space. Even with the redundancy, there are points of failure in the system, and data loss can and sometimes does happen. You want to transfer your data in and out of near-line storage that actually does get backed up frequently.
You could submit a job to the cluster and use scp, rsync, or something like that when your job starts running. But it's really inefficient because you don't know when your job will actually run and your job really is a bottleneck. The storage system is parallel, but the data has to be transferred to a single process running on a single node, which then moves the data to or from the near-line storage (or another cluster somewhere else, if you're using the data on another system). It's slow, and you also need to resubmit the job if your transfer fails for some reason. Prior to Globus, this was how I transferred large data sets between systems.
With Globus, you use a web interface, add the cluster's scratch space as an endpoint, add the near-line storage or another cluster as the other endpoint, select the files you want to move, and press a button to start the transfer. There are nice progress bars showing the status of the transfer. If the transfer gets interrupted, I believe it can automatically retry or restart the transfer. I believe there's some kind of hashing or checksums to verify the integrity of the files after they're transferred. Transfers also seemed to be faster, and I assume that's because Globus is utilizing the parallel capabilities of the endpoints instead of funneling everything through the bottleneck of a single process running on a single node. From a user standpoint, it's a much better experience for moving large data sets. It's faster, easier, and less prone to transfers failing.
That said, I don't really know what's happening behind the scenes to do this. I know there's Globus software running on the endpoints, but I don't know what happens between the endpoints. But it really is a very good way to transfer large data sets. There were times I needed to transfer multi-terabyte data sets, and Globus was easily the best tool for that job. But I don't know where it fits among the options.
(Score: 0) by Anonymous Coward on Sunday October 05, @06:27AM
by Anonymous Coward
on Sunday October 05, @06:27AM (#1419539)
Basically, it sidesteps the way things usually work in a large cluster with a SAN by imitating cluster design. If I were to curl data from a large public data set into a public folder on the SAN, then the flow would be:
Storage nodes <-SAN-> Node running curl <-general internet-> Node running web server <-SAN-> Storage Nodes
This is quite slow for four reasons: the node running curl will be relatively slow and limited by the number of curl processes; the SANs are going to be relatively slow because they are competing with other SAN traffic; the general internet connection is competing with other traffic and slow; and the web server is having to relay data in competition with other users and uses. All of this added together makes the whole transfer process much harder and longer than it needs to be. However, with Globus, things look very different. Instead, when you use the software, the process looks like this:
Storage nodes <-dedicated network-> transfer node <-dedicated bandwidth internet-> transfer node <-dedicated network-> Storage nodes
This ends up being faster because the traffic between storage nodes and the DTN is not competing with the general SAN traffic, and the DTN processes are multi-threaded and dedicated to data transfer; plus, it turns out they end up transferring the same data at the same time. A minimum amount of traffic is guaranteed for data transfers, so it usually ends up being faster since the minimum is faster. And the transfer node on the other end is similarly not competing with other uses. It is, as I said, sort of like a storage cluster on top of your storage cluster. You have the data nodes that hold the actual data, control nodes to control them and keep the cluster sane, the dedicated network between storage node sub-clusters, and the dedicated network for cluster communication. It is quite nice in the right circumstances, and it has a slick interface for end users and admins. But it can be massive overkill for most uses.
(Score: 5, Informative) by canopic jug on Tuesday September 09, @10:49AM (41 children)
scp has been deprecated since early 2019 with OpenSSH 8.0:
As noted, use Rsync or SFTP instead.
However, big transfers can be terribly slow over 100Mb/s lines. Using 1000Mb/s is much better, but as the data sets approach terabytes even that is too slow, and it's time to consider shipping a removable drive or two.
Money is not free speech. Elections should not be auctions.
(Score: 3, Insightful) by JoeMerchant on Tuesday September 09, @04:19PM (2 children)
I would have voted e-mail for smaller files, particularly photos and other little stuff from phones and other random places.
Big? scp.
🌻🌻🌻🌻 [google.com]
(Score: 4, Insightful) by JoeMerchant on Tuesday September 09, @04:21PM (1 child)
Amend: scp works for many cases, deprecated or not. sftp is often a more accessible option in certain multi-OS configurations.
🌻🌻🌻🌻 [google.com]
(Score: 2, Insightful) by Anonymous Coward on Tuesday September 16, @03:21AM
I mainly use sftp rather than scp because it seems marginally more convenient with FileZilla, but honestly, I'm not sure there's much of a difference. That being said, I'll also use sshfs from time to time, which seems to use sftp for the actual transfers. But, really, just about anything other than the horrible SMB BS that MS has been pushing for decades. It sucks less than it used to, but it's inconvenient, and I still remember when it completely destroyed my MP3 collection by reassembling the files out of order on a previous version, and I don't have any faith that newer revisions take more care than that. I really wish that MS would just give up on it and adopt sshfs as the way of doing filesharing on Windows.
(Score: 2) by janrinok on Tuesday September 09, @09:35PM (24 children)
Learn to read. It doesn't say that you created it. But you did use the account, as you have admitted.
[nostyle RIP 06 May 2025]
(Score: -1, Troll) by Anonymous Coward on Thursday September 11, @06:17PM (18 children)
Learn to write ;) fascist pig
(Score: 2) by janrinok on Thursday September 11, @07:36PM (17 children)
I don't recall you complaining or even offering support when I was fighting for aristarchus' right to express his views in his own journal. During the period of 2018-2022 in particular, I had lots of people saying he should be banned. But why - he had only expressed his views in his journal? That is exactly what it was for. I know he would rather have had them published on the front pages, but that is not the purpose of the front pages. This site is apolitical.
I certainly didn't agree with him. It didn't make me an ardent left-winger or a communist just because I argued for his right to be heard.
Yet, because you didn't read what I wrote, you have made an idiot of yourself. You assumed that you were being accused of creating the site. I said no such thing. Your assumption says more about you than it does about me. And now that the boot is on the other foot and the right-wingers have all got accounts and can create journals, you feel that I should be trying to silence them. You want me to moderate them on your behalf. You want them to be banned. They are using their journals just as aristarchus used his: to express their personal views. I don't agree with them either. He was never penalised for having those views. His ban was for doxxing another community member.
You have made 4 comments so far today (CET). Only one was your usual complaining which now qualifies as repetitive spam. It has been flagged. Other than this comment that I am responding to now, your comments have been accepted as being relevant (at least loosely if not constructively) as being on topic. However, it seems that the message is still not getting through to you. So in response to your insult of "fascist pig" (which I am not) I will call you an immature commie dickbutt. I guess that is what you consider to be an intelligent exchange of views?
So, I have been civil to you and I will have to see if you have the decency to respond in kind, or if you will continue with your spamming of polls and journals. The ball is in your court. I will not hold my breath in anticipation.
[nostyle RIP 06 May 2025]
(Score: 2) by Revek on Wednesday September 17, @06:50AM (2 children)
Deprecated to whom? I still use it just like I still use TCP Wrappers to secure services on some of my machines. That doesn't mean I don't also use fail2ban in some cases. Deprecated does not equal useless. I also use rsync and sftp. It depends on the case.
This page was generated by a Swarm of Roaming Elephants
(Score: 3, Informative) by canopic jug on Wednesday September 17, @08:06AM
The software's own developers consider scp obsolete; that's in the link above. One of the reasons is that there is no standard for it, or even a specification beyond the vague guidance to "do kind of what rcp did". The other is that it is broken (and kind of insecure) in ways that would require not just refactoring but changes that would fully break backwards compatibility, and backwards compatibility is the only reason to keep scp around.
Thus newer versions of OpenSSH make scp a wrapper around SFTP instead of using the old SCP protocol. The -O option can force scp to use the legacy protocol. However, the new default is to run SFTP underneath.
Money is not free speech. Elections should not be auctions.
(Score: 0) by Anonymous Coward on Monday September 22, @04:13PM
The protocol is deprecated, not the utility.
(Score: 3, Interesting) by bmimatt on Thursday October 02, @11:25PM (3 children)
Flags:
-a: a combination flag that preserves permissions, ownership, modification times, and symbolic links - standard for making an exact copy.
-v: show detailed output, listing which files are being transferred.
-z: compress file data during the transfer. This is a major optimization, but see the note below. It uses CPU resources on both ends to save network bandwidth.
-h: print numbers in a human-friendly format.
-P: crucial for large files - combines --progress and --partial.
(--progress: shows a progress bar for each file, so you know the transfer isn't stalled.
--partial: if the connection is interrupted, this keeps the partially transferred file on the destination. The next time you run the command, rsync will resume the transfer from where it left off instead of starting over.)
-e 'ssh ...': specifies the remote shell to use and allows passing optimized parameters to SSH itself.
'ssh -T': disables pseudo-terminal allocation, which can slightly reduce overhead for file transfers.
-c aes128-gcm@openssh.com: tells SSH to use a faster, modern encryption cipher. Older default ciphers can be a bottleneck. This cipher provides a great balance of security and speed.
-o Compression=no: explicitly disables SSH's built-in compression. Using both rsync's compression (-z) and SSH's compression is redundant and inefficient. It's better to let rsync handle it, as it can do so more intelligently.
(Score: 0) by Anonymous Coward on Saturday October 04, @03:27AM (2 children)
If you are going to use ssh parameters like that repeatedly, you are probably better off either setting a permanent configuration for ssh to use automatically for that host or setting up rsync to communicate directly over another transparent transport like stunnel.
(Score: 2) by bmimatt on Thursday October 09, @11:32AM (1 child)
I script my rsyncs; most of them are mostly copy/paste.
(Score: 0) by Anonymous Coward on Thursday October 09, @10:41PM
That was a generic you, but scripting it works too. Although, if you reach the same server from multiple clients, I still think people are better off setting it up once on the daemon side.
(Score: 2) by gnuman on Tuesday October 07, @12:34PM
You should read what you type later. They are NOT the same thing....
SCP protocol is obsolete. `man scp` tells you
scp, the utility, is NOT deprecated.
(Score: 0, Insightful) by Anonymous Coward on Tuesday September 09, @11:05AM (3 children)
I, too, use Soylent News for transferring data. First, I encode the data with base64 and, if needed, split it into multiple sections so that it can fit within the length limits established by Rehash. Then I post the encoded data as comments to Soylent News. When I'm ready to decode them on another computer, I access the same comments and decode them to get my original data back. In fact, I have a simple proof of concept.
QXJpc3RhcmNodXMgaXMgYSBiaWdvdCwgbG9zZXIsIHByb2xpZmljIHNwYW1tZXIsIGFuZCBhbiBh
YnNvbHV0ZSBtb3Jvbi4gSGUgaGFzIG5vdGhpbmcgdXNlZnVsIHRvIHNheSBhbmQgc2hvdWxkIG5l
dmVyIHBvc3Qgb24gdGhpcyBzaXRlIChvciBhbnl3aGVyZSBlbHNlIG9uIHRoZSBpbnRlcm5ldCkg
ZXZlciBhZ2Fpbi4K
(Score: 0) by Anonymous Coward on Friday September 12, @12:44PM
(Score: 0) by Anonymous Coward on Tuesday September 16, @03:24AM
You're supposed to use GSR (goatseman separated rot13) for spreading information via Soylentnews, didn't you see the RFC on that?
(Score: 4, Interesting) by crm114 on Tuesday September 09, @01:42PM (4 children)
If transferring files on the local LAN, rsync.
If transferring the files locally, but not network connected, external hard disk
If transferring the files across the continent, multi-TB external SSD
(Score: 2) by Unixnut on Sunday September 14, @03:04PM (3 children)
Yeah I was going to come here and say the same thing. I assumed they meant "between computers or devices in the same household", so I put down rsync (but I use it over NFS rather than SSH, so a bit of a mix), but even in my home network there is variety.
For example, while pretty much everything is rsync over NFS at home, that does not include my mobile phone because Android is a PITA (I can't root my phone, otherwise all my work and banking apps cease to function), so for those I used to transfer via SD card, but now I have set up Nextcloud on my local server for syncing. Some of my machines only have 100Mbit/s connections, or are wifi only, in which case for high-data-transfer rates I just use an external HDD/SSD.
So just in my home network I can pick five of the seven options.
I have to say I was surprised to see some people said they still use optical media. It's been so long for me that I actually had to re-learn how to burn a CD recently in order to make a bootable installer (USB stick booting still seems a bit hit-and-miss on some systems), and I still have a pile of blank CD-Rs (and CD-RWs) from a good 20 years ago, with little idea what to do with them (I am loath to chuck away stuff, especially if it is new and never used, which is why I still have new-in-wrapping coloured minidiscs from the 90s sitting on a shelf).
(Score: 1, Insightful) by Anonymous Coward on Tuesday September 16, @03:45AM
I recently upgraded my NVME drive, so the old one went into a cheap orico case, and the new one went in the new computer. Even with crappy USB2 from my old chromebook, I still get decent transfer speeds that beat most of my other options. For my multi-TB backups though, I just use a USB3 case with a regular HDD and just rotate between ones on premises and those in a different building.
(Score: 1) by anubi on Thursday September 18, @03:37AM
On the Android phones, transferring to/from a PC, I am using the "FTP Server" APK, connecting to the Ipswitch FTP client on the PC side (very old software; still works on W95 thru W7), using wireless interfaces. I use my own private intranet to avoid conflicts with anything else. It's slow, but it eventually gets there.
"Private Intranet": Sounds fancy, but mine is just an old wireless access point that is not plugged into the internet. It still supports multiple wireless devices logging onto it and lets them see each other.
Does the equivalent of a wired connection between the RJ45 ports.
Phone-to-Phone via Wireless: Trebleshot (F-Droid). Via my private intranet.
On the Android phone, the "Asus File Manager" allows you to transfer files on a LAN over TCP/IP to a web browser at the other end.
"Prove all things; hold fast that which is good." [KJV: I Thessalonians 5:21]
(Score: 2) by BeaverCleaver on Sunday September 28, @07:55AM
Not strictly on-topic, but how crazy is this? In 2025 we are not allowed to use a device that we own to access our own bank accounts. "You may not access your own money unless you use our tracking device." Oh, and we also have to pay for the tracking device; the banks don't even give it to us for free.
(Score: 5, Funny) by DannyB on Tuesday September 09, @03:20PM (1 child)
RFC 1149 [wikipedia.org]
For some odd reason all scientific instruments searching for intelligent life are pointed away from Earth.
(Score: 3, Insightful) by Dr Spin on Tuesday September 09, @07:59PM (4 children)
... even if station wagons are unavailable.
Depending on the amount of data, DAT or LTO, as I no longer own a 1/2" tape drive.
Warning: Opening your mouth may invalidate your brain!
(Score: 3, Interesting) by istartedi on Wednesday September 10, @06:58PM (2 children)
I have fond memories of delivering tape when I was a teenager, looking forward to the time when I would be in the office doing cool computer stuff instead of just delivering computer stuff. At some point I worked at a place that had tape backup; but by then it was no longer the huge reels. It was a little cartridge that probably held a lot more data.
Appended to the end of comments you post. Max: 120 chars.
(Score: 0) by Anonymous Coward on Tuesday September 16, @03:53AM (1 child)
I've been considering getting an LTO system as I've got like 12TB of stuff, and having the 2 or 3 HDDs it takes to rotate the backups offsite properly is rather annoying. It would be nice to be able to rotate through 3 tapes. But the price tag is high enough that I'll probably just buy a couple more HDDs, back up to a mirrored ZFS setup, and use a 3rd disk mirrored to it to rotate to and from storage off site.
(Score: 2) by Dr Spin on Monday October 06, @03:08PM
You can buy a second-hand LTO3 drive for under £100 (LTO4 might be a better choice for you) and buy a pack of five tapes. (See eBay for prices.)
You need to have four tapes in active use: Grandfather, father, son, NEXT, and also a spare in case one dies. *
I keep the father tape offsite (take the new father offsite, return with the former father as the new grandfather). Every six months the grandfather tape is archived, the spare becomes NEXT, and a couple of new tapes are bought.
Note that you should ONLY EVER use tar to write tapes, and ABSOLUTELY NEVER use proprietary backup software, or there is a good chance your old tapes will be unreadable when you need them.
The choice of GTAR or BSD tar is up to you (unless forced by your employer). You can probably be quite selective in what you choose to tar. Do not archive your OS - it is faster to reinstall a fresh one. Keep your immutable data (eg porn) on a separate set - you don't need to keep writing it to new tapes.
Obviously - do not use Windows if you value your data. I recommend OpenBSD for servers, but YMMV.
I have been doing this since LTO was invented, and before that, same procedure, going back to 556bpi 7-track tape on ICL1905 and CDC7600.
None of this tape stuff removes the need for (offsite?) database write logging if financial data is involved.
* Remember: if you do have a tape wreck, it might be because the drive is screwed, and discovering this might take out another tape! (You can use the "NEXT" tape for a dry run to test your restore process).
Test your tape restore procedure at least annually - some software upgrade, somewhere, may shoot you in the foot!
Warning: Opening your mouth may invalidate your brain!
(Score: 4, Interesting) by turgid on Tuesday September 09, @09:05PM (4 children)
I have a wired home LAN or two with gigabit ethernet and Netgear switches. I am now backing up multi-hundred-gigabyte disk images and VM images, using such things as rsync, dd and various compression tools.
Last year sometime I was working on some firmware to do SERDES and I was absolutely blown away by how fast it can go these days. 10Gbit ethernet really isn't that fast, and none of my home machines have anything faster than 1Gig. However, there's USB 3.x and I have a couple of external 2.5" USB 3.x 5TB drives for doing backups.
Even USB 3.2 is faster (allegedly) than 10Gbit/s ethernet. It's also cheap and everything has it these days. So the obvious question is, why can't I take one USB cable from one PeeCee and plug it into another and transfer data? Because it's Not That Simple(TM).
OK, so there's this thing called an Ethernet switch, so why can't I get a "USB switch"? I can get a USB hub. It's this master/slave design. You can get things to plug into your phone to turn it from a slave/client into a master/controller.
So why can't I buy a thing with multiple USB 3.2 ports that I can plug several machines into and get them to speak at 10+ Gbit/s? I mean 20, 30, 40+ should be quite simple nowadays.
I refuse to engage in a battle of wits with an unarmed opponent [wikipedia.org].
(Score: 4, Interesting) by Anonymous Coward on Wednesday September 10, @01:34AM (2 children)
You can connect two computers and transfer files using USB if you know what you are doing, but it isn't necessarily the easiest thing to do. If you are using USB-A or USB-B ports on USB 2 (or certain 3.0) controllers, there are two mutually exclusive ways to connect them. If they are fully 3.0 compliant, there are two further options in addition to those.
No matter what, you can use a protocol bridge. This device appears as a client peripheral to both hosts. The software can then be used to transfer between devices with the bridge acting as a relay, essentially. The speed there is limited by the device's capability.
The easiest is to use a special transfer cable that has the pinout correctly crossed to allow two host systems to connect without frying either. Then software allows transfers by treating each as the host and ignoring certain errors caused by two host devices sending out-of-spec data. Those errors do result in a slower transfer speed.
The second USB-A and USB-B approach is to use OTG (USB On-The-Go). There, a special pinout is used by standard cables to set which is the host (A device) and which is the client (B device). The computers can negotiate proper roles and send data that way in spec and at full speed.
If you are USB 3.0 and above, you could potentially use Dual Role Devices. If at least one controller is dual role, you can connect the two computers and then use software to communicate between them. This is much more likely to succeed using type C connectors, since those controllers are supposed to be dual role. If using an A or B connector you will be limited to 10 Gbps. If using type C and USB 3.2, then you can get the full 20+ Gbps.
Finally, if your computers are Thunderbolt capable, you can use Thunderbolt with a type C connector. This would give you around 32 Gbps (Thunderbolt 3) of speed at the expense of a more complicated software stack.
Isn't the mix of USB standards fun? Then you add in that not every piece of hardware is necessarily 100% compatible with what it should be. But with newer controllers and the right software and drivers, you can wade your way through the mud.
(Score: 4, Informative) by Anonymous Coward on Wednesday September 10, @10:47PM (1 child)
I looked and found the documentation I couldn't find earlier for the software side. In Linux, you can use the Mass Storage [kernel.org] driver to make one device look like a USB mass storage device. There are other drivers that can be used to act like all kinds of gadgets. [kernel.org] You can even write your own using the gadget API. [kernel.org] There are similar drivers available for Windows, MacOS, and most *BSDs. So be careful with those USB sticks you plug in - they might actually be something else entirely.
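If anyone wants to see what driving that looks like from userspace, here is a rough, untested sketch in C of setting up the mass-storage gadget through configfs. It only applies to hardware that actually has a USB device controller (a phone, a Pi Zero, that sort of thing), needs root, and it assumes configfs is mounted at /sys/kernel/config, that a backing image already exists at /root/backing.img, and a made-up UDC name - treat all of those as assumptions and check the kernel gadget docs for your kernel's exact layout.

    /* Rough sketch: configure the Linux mass-storage gadget via configfs. */
    #include <stdio.h>
    #include <stdlib.h>
    #include <sys/stat.h>
    #include <sys/types.h>
    #include <unistd.h>

    #define G "/sys/kernel/config/usb_gadget/g1"

    static void write_file(const char *path, const char *value)
    {
        FILE *f = fopen(path, "w");
        if (!f) { perror(path); exit(1); }
        fputs(value, f);
        fclose(f);
    }

    int main(void)
    {
        /* Create the gadget and give it vendor/product IDs (Linux Foundation
         * test IDs here; a real deployment should use its own). */
        mkdir(G, 0755);
        write_file(G "/idVendor",  "0x1d6b");
        write_file(G "/idProduct", "0x0104");

        /* One configuration, one mass-storage function backed by a file image
         * (path is an assumption - create it beforehand, e.g. with dd). */
        mkdir(G "/configs/c.1", 0755);
        mkdir(G "/functions/mass_storage.0", 0755);
        write_file(G "/functions/mass_storage.0/lun.0/file", "/root/backing.img");

        /* Bind the function into the configuration. */
        symlink(G "/functions/mass_storage.0", G "/configs/c.1/mass_storage.0");

        /* Finally attach the gadget to the UDC; the controller name comes from
         * /sys/class/udc and differs per board, so this one is a placeholder. */
        write_file(G "/UDC", "musb-hdrc.0");

        return 0;
    }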
(Score: 2) by turgid on Friday September 12, @08:32AM
This is really good information thanks. I'll have to try something out in my copious spare time...
I refuse to engage in a battle of wits with an unarmed opponent [wikipedia.org].
(Score: 0) by Anonymous Coward on Sunday September 14, @06:17AM
This is completely off topic but I wanted to answer the question in your journal. The "size_t dsize;" part of the function declaration is the syntax for a parameter forward declaration. It is not part of ISO C, but it is understood as a GNU extension by GCC and compatible compilers.
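If it helps, here is a tiny compilable example of what that looks like (the function and names are made up for illustration, and it's GCC-specific):

    #include <stddef.h>
    #include <stdio.h>

    /* GNU C extension: "size_t dsize;" before the semicolon forward-declares
     * the parameter so the VLA parameter that follows can refer to it, even
     * though dsize only appears later in the real parameter list. GCC accepts
     * this; -pedantic will warn that it is not ISO C. */
    static int sum_bytes(size_t dsize; const unsigned char data[dsize], size_t dsize)
    {
        int total = 0;
        for (size_t i = 0; i < dsize; i++)
            total += data[i];
        return total;
    }

    int main(void)
    {
        const unsigned char buf[] = { 1, 2, 3, 4 };
        printf("%d\n", sum_bytes(buf, sizeof buf)); /* prints 10 */
        return 0;
    }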
(Score: 3, Insightful) by Gaaark on Wednesday September 10, @02:59AM
I use a combination, depending on the sit-ey-ation, so i voted 'Other'.
Not cloud.
--- Please remind me if I haven't been civil to you: I'm channeling MDC. I have always been here. ---Gaaark 2.0 --
(Score: 3, Informative) by mcgrew on Wednesday September 10, @03:16PM (1 child)
Reasons:
First, it works. Not just with the computers, but tablets and phones.
Choices:
USB memory stick, SD card, or similar
I have an old very large USB drive that I used to use for backing up my data, but it ran out of room so I bought a bigger one. Now I'm putting all my DVDs on it; it stays plugged into the TV's USB unless I'm adding a new movie.
External hard drive
See "or similar," above
Optical media (CD/DVD/Blu-ray)
Obsolete, we have far bigger thumb drives now
Network app (rsync, scp, etc.)
File Manager, whatever the app developer calls it? I'm rarely at a text prompt any more.
Network file system (nfs, samba, etc.)
A little redundant but I guess you wanted to be thorough
The "cloud" (Dropbox, Cloud, Google Drive, etc.)
I guess if push came to shove I could use some of my hosted drive space, but if you use other people's servers, be sure to back your data up on your own servers! There's nothing on any of my web sites that isn't mirrored on my private network. Forty-three years of computing has taught me to never trust any device, especially someone else's device!
Email
Sure, if I'm moving a file to my daughter and the file is small enough.
Other (specify in comments)
Subetheric transport modules (not yet invented)
Mad at your neighbors? Join ICE, $50,000 signing bonus and a LICENSE TO MURDER!
(Score: 3, Insightful) by janrinok on Thursday September 11, @10:32AM (40 children)
1. It is not a journal.
2. You are banned. Your continued spamming doesn't change that fact.
[nostyle RIP 06 May 2025]
(Score: -1, Troll) by Anonymous Coward on Thursday September 11, @03:31PM (36 children)
"There is some positive eugenic effect"
You. You did this.
(Score: 2) by janrinok on Thursday September 11, @03:57PM (35 children)
No, I didn't. You can't pass responsibility for something on to somebody else. The flagging is in response to the spamming. I am not responsible for the spamming. But you do know who is, don't you?
[nostyle RIP 06 May 2025]
(Score: -1, Spam) by Anonymous Coward on Thursday September 18, @12:45PM (8 children)
The abuser censoring information he does not like. QED. Hope you live long enough for fascism to retake France, maybe then you'll regret silencing those speaking against it.
(Score: 1, Offtopic) by janrinok on Thursday September 18, @02:14PM (6 children)
Your comment is marked as Spam because this is a poll about "When transferring multiple 100+ MB files between computers or devices...", and your comment is completely irrelevant. Once again you are spamming a discussion that does not require your childish comments.
I do not mind you speaking against fascism in the appropriate journal, where it would be on-topic. However, there are no journals wanting to discuss it. Get an account, create a journal, and off you go. If you insist on acting like an idiot, you will continue to be treated like an idiot, whether you are one or not.
[nostyle RIP 06 May 2025]
(Score: 1, Insightful) by Anonymous Coward on Tuesday September 23, @03:12AM (2 children)
WAAAAAH! WAAAAAH! It's SO UNFAIR that I'm being FORCED to create an account and post my own journals to discuss arbitrary topics! MEANIE JANRINOK just won't STOP BULLYING ME! I'm SPECIAL and should be able to post OFF-TOPIC comments wherever I damn well please, and YOU ALL MUST STFU and ACCEPT IT! Forcing me to create an account where I can freely express my views in my own journal is CENSORSHIP! SO UNFAIR! WAAAAAH! WAAAAAH!
(Score: 1, Offtopic) by janrinok on Tuesday September 23, @09:55AM (11 children)
[nostyle RIP 06 May 2025]
(Score: 3, Interesting) by Freeman on Thursday September 11, @03:38PM (5 children)
So I chose the 1st option, which said "or similar" after USB memory stick and SD card. One could argue an external NVMe SSD is a fancy (albeit typically larger) USB drive. It connects via USB 3.0 and is a nice quick way to transfer big files. You don't need some special cloud service or any annoying kind of setup. You just plug it in and you're good. Even if your target doesn't have USB 3.0 (less and less likely), it's still easy plug-and-play; it would just take longer at that point.
Joshua 1:9 "Be strong and of a good courage; be not afraid, neither be thou dismayed: for the Lord thy God is with thee"
(Score: 2) by krishnoid on Friday September 12, @03:01AM (4 children)
I don't think that's even "arguably" a USB Drive. A 5-bay enclosure for 3.5in rotational disk that connects via USB ... likely more of an argument.
Also you could copy everything over from the USB drive, then use network rsync with its checksum option to doublecheck it and sync up any deltas and fix up metadata. Works great.
(Score: 3, Touché) by krishnoid on Friday September 12, @03:02AM
But it *is* "arguably" a USB memory "stick", whoops.
(Score: 1, Interesting) by Anonymous Coward on Tuesday September 16, @03:56AM
I've got a funky 5-bay external SATA enclosure with 5 SATA cables that connect to it. I'll need to get some more ports for my computer, but it was under $100, and I'll just make a giant RAIDZ out of it. Given the state of software RAID these days, it seems like the way to go for me. I saw the USB ones, and I'm kind of skeptical that the performance would be there for that many disks.
(Score: 3, Informative) by Freeman on Wednesday September 17, @04:33PM (1 child)
This is a USB "Drive".
https://en.wikipedia.org/wiki/USB_flash_drive [wikipedia.org]
An external NVMe enclosure with a USB 3.0 interface is essentially a fancy "USB Drive". (Sure, you also need an NVMe drive to put in the enclosure, but it's certainly much more similar to a USB Flash Drive than a 5-bay 3.5" monstrosity.)
Example: https://www.amazon.com/UGREEN-Enclosure-Tool-Free-Thunderbolt-Compatible/dp/B09T97Z7DM [amazon.com]
An external NVMe USB drive is more like ye olde USB flash drive than any rotational disk storage device. At least the external NVMe storage device is using similar technology for storage as the super slow old-school USB flash drive.
Joshua 1:9 "Be strong and of a good courage; be not afraid, neither be thou dismayed: for the Lord thy God is with thee"
(Score: 2) by Freeman on Wednesday September 17, @04:36PM
That said, I do have some external spinning-disk storage that would definitely fall into the "External hard drive" option. However, I use that as more of a "cold storage" backup, whereas the external NVMe SSD enclosure that doesn't need an extra power brick to run is what I typically use to swap large files or many files from one device to another. That includes my phone to my computer.
Joshua 1:9 "Be strong and of a good courage; be not afraid, neither be thou dismayed: for the Lord thy God is with thee"
(Score: 4, Touché) by The Vocal Minority on Sunday September 14, @04:16AM (11 children)
Am I doing it wrong???
(Score: 3, Informative) by krishnoid on Monday September 15, @05:24PM (7 children)
Well, they're written by the same guy [wikipedia.org] and are both really reliable, so I'm voting no.
(Score: 0, Troll) by Anonymous Coward on Tuesday September 16, @04:54AM (6 children)
If we're going to be doing that, then perhaps we should be punishing the geniuses that came up with SMB. What sort of a file transfer protocol doesn't have provisions for reassembling the packets in order? I remember my MP3 collection getting badly corrupted during transfer back in the day just by transferring too much over SMB.
(Score: 0) by Anonymous Coward on Thursday September 18, @04:26AM (4 children)
The answer should be all of them, since that is the concern of lower layers in the network stack. However, SMB, just like most of them, does support packet reassembly and reordering. If your network card is giving your computer bad data, or the driver is mangling data from the card (Ethernet card not verifying checksums?), that is on your card or driver manufacturer, not SMB. If your disk or memory isn't reliable, that isn't SMB's fault either. And the reason I'm pointing at bad hardware is that even if SMB gets packets out of order, it can still reassemble the proper data order, because each packet is either individually sequenced or idempotent. Long story short, garbage data came to SMB as garbage or was turned to garbage later by something not SMB related, precisely because SMB (NFS, AFP, etc.) was designed when networks were less reliable.
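To make the "individually sequenced" point concrete, here is a toy sketch - nothing to do with SMB's actual wire format, just the general idea that once every chunk carries the offset it belongs at, arrival order stops mattering:

    #include <stdio.h>
    #include <stdlib.h>
    #include <string.h>

    /* Toy illustration: each received chunk carries the file offset it belongs
     * at, so the receiver can sort by offset (or write at the stated offset)
     * and the reassembled data comes out correct regardless of arrival order. */
    struct chunk {
        long offset;           /* where this chunk goes in the file */
        size_t len;            /* how many payload bytes             */
        const char *payload;
    };

    static int by_offset(const void *a, const void *b)
    {
        const struct chunk *x = a, *y = b;
        return (x->offset > y->offset) - (x->offset < y->offset);
    }

    int main(void)
    {
        /* Chunks "arrive" out of order. */
        struct chunk rx[] = {
            { 6, 6, "world\n" },
            { 0, 6, "hello " },
        };
        size_t n = sizeof rx / sizeof rx[0];

        qsort(rx, n, sizeof rx[0], by_offset);   /* reorder by offset */

        char file[64] = { 0 };
        for (size_t i = 0; i < n; i++)
            memcpy(file + rx[i].offset, rx[i].payload, rx[i].len);

        fputs(file, stdout);                     /* prints "hello world" */
        return 0;
    }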
(Score: -1, Troll) by Anonymous Coward on Thursday September 18, @05:33PM (3 children)
I don't agree. This was like 25 years ago, and by that point there had been like 5 or 6 major revisions of consumer Windows on top of the various NT releases since 1983 that could have come with a properly functioning software system. They did wind up releasing SMB2, but from what I can tell that didn't come until Vista, and I had no real reason to use or trust SMB by that point. Meanwhile, by the time that XP came out they already had robocopy, going back to sometime in 1996, with which they could very easily have worked around the wonkiness of the SMB protocol they had at the time. It was just something that you're not likely to ever have heard about unless you specifically went looking for it.
It's a bit of a moot point whether you want to blame File Explorer for not having proper data integrity checks in place to verify copies, or SMB for being this stupid, but either way SMB is just not something that I'd recommend anybody trust.
(Score: 0) by Anonymous Coward on Friday September 19, @05:44AM (2 children)
You disagree, and yet none of that affects how SMB, even back to version 1, works under the hood. And robocopy worked for all their network sharing protocols, not just SMB. In fact, I more often had to use robocopy with WebDAV than SMB, thanks to the shitty phone lines I used to use. Oh well. It doesn't affect anyone else if you don't use the most widely available and used network file sharing protocol on the planet to share files with yourself.
(Score: -1, Troll) by Anonymous Coward on Saturday September 20, @02:32AM (1 child)
It's still a broken protocol if it can't detect whether or not the pieces were assembled correctly. That may be acceptable in the enterprise space where you have professionals that are there managing things and can implement things on top to verify that, but in a home environment, it's a ludicrous assumption. Especially since MS had been colluding with Intel to release those god awful Wintel modems where they charged more for hardware that had fewer chips in it.
As far as this goes, it was an OK protocol 40 years ago, but even by the late '90s, budget hardware was being released and it was the option for sharing files between machines if they couldn't fit on floppies. They should have done the right thing and just abandoned SMB in favor of something like sFTP or at least properly updated it with security features and the ability to verify that the file on the other end is the one that was sent.
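Either way, nothing stops you from verifying a copy yourself after the fact. A quick-and-dirty sketch of the sort of thing I mean - a plain byte-for-byte compare, file names made up:

    #include <stdio.h>
    #include <string.h>

    /* Compare two files byte-for-byte after a transfer; exits 0 if identical.
     * Usage: ./verify original.mp3 /mnt/share/copy.mp3 */
    int main(int argc, char **argv)
    {
        if (argc != 3) {
            fprintf(stderr, "usage: %s file1 file2\n", argv[0]);
            return 2;
        }
        FILE *a = fopen(argv[1], "rb");
        FILE *b = fopen(argv[2], "rb");
        if (!a || !b) { perror("fopen"); return 2; }

        unsigned char bufa[65536], bufb[65536];
        long offset = 0;
        for (;;) {
            size_t na = fread(bufa, 1, sizeof bufa, a);
            size_t nb = fread(bufb, 1, sizeof bufb, b);
            if (na != nb || memcmp(bufa, bufb, na) != 0) {
                printf("files differ near byte %ld\n", offset);
                return 1;
            }
            if (na == 0)        /* both hit EOF at the same point */
                break;
            offset += (long)na;
        }
        puts("files match");
        return 0;
    }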
(Score: 0) by Anonymous Coward on Saturday September 20, @07:18AM
Then you are in a world of hurt, because almost all protocols have that problem out of the box, including HTTP, FTP, NFS, SFTP, and the list goes on. The only one that doesn't is rsync, which is part of the reason it is relatively slow. Ironically, SMB did have the ability to verify files sent to it, even in the '90s, just like you requested - but you probably had signing turned off.
(Score: 2, Insightful) by Anonymous Coward on Wednesday September 17, @03:39PM
This is when USB stuff is blocked due to policies etc.
Of course, in theory transferring to/from phones should be blocked too (and in some places it is blocked). But most people usually don't bother to tell the relevant parties to get phones blocked, because they want to actually get their jobs done.
After all, if phones get blocked, the various committees, teams, etc might not have yet prepared a practical way to actually get vendor updates or other required files to the systems.
(Score: -1, Troll) by Anonymous Coward on Sunday September 21, @08:05AM (12 children)
Why have I been blocked?
This website is using a security service to protect itself from online attacks. The action you just performed triggered the security solution. There are several actions that could trigger this block including submitting a certain word or phrase, a SQL command or malformed data.
What can I do to resolve this?
You can email the site owner to let them know you were blocked. Please include what you were doing when this page came up and the Cloudflare Ray ID found at the bottom of this page.
(Score: 2, Insightful) by janrinok on Sunday September 21, @10:02AM (4 children)
Have you sent an email? Why not?
We do not use a security service - try a different connection. It is not a problem that we can fix.
[nostyle RIP 06 May 2025]
(Score: 1, Informative) by Anonymous Coward on Tuesday September 23, @04:45AM (3 children)
I know the grandparent comment is a troll either way, but I thought I'd add this for anyone else (including you) who may find it helpful. That error message occurs if you use Cloudflare as a proxy. Now, SoylentNews doesn't use Cloudflare as a reverse proxy, which is easy to verify by checking the routing of their traffic. However, that doesn't mean that the user isn't using them as a forward proxy. So if anyone here runs a website and sees spurious reports from users like that, it could be Cloudflare blocking the user because their WARP proxy is detecting the abuse. Which leads to an amusing conundrum: how much do you really want to help a user access your site when their actions are so bad or so obvious that they have been identified as abusive based on the minimum amount of data a forward proxy would have?
(Score: 2) by janrinok on Wednesday October 01, @08:30AM (4 children)
07:41Z
You wrote both of those comments, in different threads. You are getting your alter-egos confused. This is not the first time you have forgotten who you are pretending to be.
What do either of your comments have to do with the subject of the poll or journal where you have made them? What does Runaway's religion have to do with anything? Where is your evidence that he has doxxed somebody? Which of the site rules has he broken which would justify him being banned?
Get a grip of yourself - you are losing it. Better still, please go away and stop bothering people on this site.
[nostyle RIP 06 May 2025]
(Score: 2) by Snotnose on Sunday September 21, @07:27PM
I've got a home network with 2 NAS devices, so ext4 is probably the cumulative total, albeit a lot of it is small files I don't even notice moving from one filesystem to another.
But my camera, while it advertises wireless, wants me to use a Canon app to use it. So I have to haul out an old USB 2.0 cable to transfer my files.
It was a once in a lifetime experience. Which means I'll never do it again.
(Score: 2) by jasassin on Tuesday September 30, @12:13PM (1 child)
I use a Semi-Truck filled with backup tapes.
jasassin@gmail.com GPG Key ID: 0xE6462C68A9A3DB5A
(Score: 0) by Anonymous Coward on Wednesday October 01, @08:50AM
I know the sleepers on semi trucks are larger than my first dorm room nowadays, but I find that there is usually much more room for our backup tapes in the semi-trailer. /s
(Score: 0) by Anonymous Coward on Sunday October 05, @02:03AM (3 children)
When I worked in academia and needed to move really large data sets from one file system to another -- and I mean distributed file systems with multiple petabyte storage capacities -- I generally used Globus. It was simpler and more robust than using something like sftp. Globus says it's cloud-based, but it doesn't really fit the cloud storage described in the poll. I wasn't uploading the data to cloud storage and then downloading it elsewhere, so it wasn't like Dropbox or Box. So where does Globus fit in the options?
(Score: 2) by janrinok on Sunday October 05, @02:24AM (2 children)
https://www.globus.org/data-transfer [globus.org]
It is the first time that I have heard of this system. I am still not sure how it works.
[nostyle RIP 06 May 2025]
(Score: 0) by Anonymous Coward on Sunday October 05, @03:10AM
I often try to think of clever responses, usually jokes, that don't quite fit the poll options. In this case, it's not a joke, but I'm not sure it really fits any of the options (except "other"). I know it's commonly used in academia in the US, and I used it when I worked in academia.
I can tell you how I used it. Let's say you have a cluster with a few petabytes of scratch space. There's a lot of storage, it's distributed, and it's going to have some redundancy built in. If you're running a job on the cluster, you'll generally read data from and write data to the scratch space. It's very fast and there's lots of space, but reliability isn't guaranteed, and it's really not feasible to make backups of the scratch space. Even with the redundancy, there are points of failure in the system, and data loss can and sometimes does happen. You want to transfer your data in and out of near-line storage that actually does get backed up frequently.
You could submit a job to the cluster and use scp, rsync, or something like that when your job starts running. But it's really inefficient because you don't know when your job will actually run and your job really is a bottleneck. The storage system is parallel, but the data has to be transferred to a single process running on a single node, which then moves the data to or from the near-line storage (or another cluster somewhere else, if you're using the data on another system). It's slow, and you also need to resubmit the job if your transfer fails for some reason. Prior to Globus, this was how I transferred large data sets between systems.
With Globus, you use a web interface, add the cluster's scratch space as an endpoint, add the near-line storage or another cluster as the other endpoint, select the files you want to move, and press a button to start the transfer. There are nice progress bars showing the status of the transfer. If the transfer gets interrupted, I believe it can automatically retry or restart the transfer. I believe there's some kind of hashing or checksums to verify the integrity of the files after they're transferred. Transfers also seemed to be faster, and I assume that's because Globus is utilizing the parallel capabilities of the endpoints instead of funneling everything through the bottleneck of a single process running on a single node. From a user standpoint, it's a much better experience for moving large data sets. It's faster, easier, and less prone to transfers failing.
That said, I don't really know what's happening behind the scenes to do this. I know there's Globus software running on the endpoints, but I don't know what happens between the endpoints. But it really is a very good way to transfer large data sets. There were times I needed to transfer multi-terabyte data sets, and Globus was easily the best tool for that job. But I don't know where it fits among the options.
(Score: 0) by Anonymous Coward on Sunday October 05, @06:27AM
Basically, it sidesteps the way things usually work in a large cluster with a SAN by imitating cluster design. If I were to curl data from a large public data set into a public folder on the SAN, the flow would be roughly: remote web server -> general internet -> the single node running curl -> the SAN.
This is quite slow for four reasons: the node running curl is relatively slow and limited by the number of curl processes; the SAN is relatively slow because the transfer is competing with other SAN traffic; the general internet connection is competing with other traffic and slow; and the web server is having to relay data in competition with other users and uses. All of this added together makes the whole transfer process much harder and longer than it needs to be.
However, with Globus, things look very different. Instead, the Globus software on each endpoint moves the data through dedicated data transfer nodes (DTNs), so the process looks roughly like this: storage nodes -> local DTN -> dedicated transfer network -> DTN at the far end -> storage nodes there.
This ends up being faster because the traffic between the storage nodes and the DTN is not competing with the general SAN traffic, the DTN processes are multi-threaded and dedicated to data transfer, and several transfer processes can work on the same data at the same time. A minimum amount of bandwidth is guaranteed for data transfers, and that guaranteed minimum is usually faster than what you would otherwise get. And the transfer node on the other end is similarly not competing with other uses.
It is, as I said, sort of like a storage cluster on top of your storage cluster. You have the data nodes that hold the actual data, control nodes to control them and keep the cluster sane, the dedicated network between storage node sub-clusters, and the dedicated network for cluster communication. It is quite nice in the right circumstances, and it has a slick interface for end users and admins. But it can be massive overkill for most uses.
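If it helps to picture the pattern, here is a toy local sketch of the chunked, multi-threaded transfer idea. This is emphatically not Globus or GridFTP - just one big file split into ranges, with each range copied by its own thread; the file names and thread count are made up. Compile with -pthread.

    #include <fcntl.h>
    #include <pthread.h>
    #include <stdio.h>
    #include <sys/stat.h>
    #include <sys/types.h>
    #include <unistd.h>

    /* Copy one big file by splitting it into N ranges and copying each range in
     * its own thread with pread/pwrite. Locally this mostly demonstrates the
     * pattern; the real benefit shows up when each range moves over its own
     * network stream to a dedicated transfer node. */
    #define NTHREADS 4

    struct range { int in, out; off_t start, len; };

    static void *copy_range(void *arg)
    {
        struct range *r = arg;
        char buf[1 << 18];                      /* 256 KiB per read */
        off_t done = 0;
        while (done < r->len) {
            size_t want = (size_t)(r->len - done) < sizeof buf
                            ? (size_t)(r->len - done) : sizeof buf;
            ssize_t n = pread(r->in, buf, want, r->start + done);
            if (n <= 0) break;
            if (pwrite(r->out, buf, (size_t)n, r->start + done) != n) break;
            done += n;
        }
        return NULL;
    }

    int main(void)
    {
        int in  = open("big_input.dat", O_RDONLY);                /* assumption */
        int out = open("big_copy.dat", O_WRONLY | O_CREAT, 0644); /* assumption */
        if (in < 0 || out < 0) { perror("open"); return 1; }

        struct stat st;
        fstat(in, &st);
        if (ftruncate(out, st.st_size) != 0) { perror("ftruncate"); return 1; }

        pthread_t tid[NTHREADS];
        struct range r[NTHREADS];
        off_t chunk = st.st_size / NTHREADS;

        for (int i = 0; i < NTHREADS; i++) {
            r[i].in = in; r[i].out = out;
            r[i].start = (off_t)i * chunk;
            r[i].len   = (i == NTHREADS - 1) ? st.st_size - r[i].start : chunk;
            pthread_create(&tid[i], NULL, copy_range, &r[i]);
        }
        for (int i = 0; i < NTHREADS; i++)
            pthread_join(tid[i], NULL);

        close(in); close(out);
        return 0;
    }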