Daniel Stenberg lets the world know that cURL, the little command-line utility that lets you download stuff off the internet via HTTP along with a bunch of other protocols, has turned 17 today (March 20). Considering that it is also available to all of us for use in our programs as a nifty little library called 'libcurl', and that PHP, the most common web development language, depends on libcurl for handling HTTP requests, we can be happy that cURL exists. I personally cannot count the number of times it has saved me and the machines I administer.
Related Stories
On March 20th, 2018, Daniel Stenberg notes twenty years of his flexible, multi-protocol, text-based utility, curl. It is a very common client-side file transfer utility. The associated development library, libcurl, is a couple of years younger.
curl itself and components from libcurl are found nearly everywhere these days. Due to such widespread use, it is hard to be precise with usage numbers, but conservative estimates suggest billions of people every day are using it, though mostly under the hood, several layers down inside devices they own. It is the Internet transfer utility of choice for thousands of software applications. It is found in cars, television sets, routers, printers, audio equipment, mobile phones, tablets, set-top boxes, and media players, for starters.
A detailed, free-of-charge ebook, Everything curl, covers basically everything there is to know about curl, libcurl, and the associated project.
Earlier on SN:
Daniel Stenberg, author of the ubiquitous URL fetcher cURL and the libcurl multiprotocol file transfer library, and recipient of the 2017 Polhem Prize, has been blocked again from attending a US-based conference. Daniel has written a post in his blog about his two-year odyssey through the byzantine US bureaucracy to try to get permission to attend a work-related conference in California. He has been in the US nine times previously, but despite pages of paperwork, hundreds of dollars in fees, and personal visits to the embassy, no dice. As a result, the conference will have to move outside the US, and probably Canada too, if it wants to stay open to the world's top talent.
Earlier on SN:
US Visa Applications May Soon Require Five Years of Social Media Info (2018)
Reducing Year 2038 Problems in curl (2018)
cURL turns Seventeen Today (2015)
(Score: 5, Funny) by wantkitteh on Saturday March 21 2015, @12:00AM
As you're now 17, we expect to see you hiding out the back of your birthday party with a furtive beer and sneaking out after dark with the cute gURL next door.
(Score: 2) by davester666 on Saturday March 21 2015, @08:42AM
Thank you for not mentioning the golden girls.
(Score: -1, Redundant) by Anonymous Coward on Saturday March 21 2015, @12:24AM
wget http://example.org/file.ext
Is all I want to know.
----
Something is badly wrong with ecode parsing and URLs in plain-text and HTML modes!
(Score: 4, Informative) by wonkey_monkey on Saturday March 21 2015, @12:35AM
Now it's grown up, will it come with a no-nonsense CLI program like its older sibling, wget?
curl http://example.org/file.ext >file.ext
Sorry if that's not "no-nonsense" enough for you.
systemd is Roko's Basilisk
(Score: 4, Informative) by hemocyanin on Saturday March 21 2015, @12:57AM
also -o or -O
Name the file whatever you want:
curl http://example.org/longfilename.ext -o file.ext
Use the remote file name:
curl http://example.org/longfilename.ext -O
(Score: 0) by Anonymous Coward on Saturday March 21 2015, @01:23AM
So how is it better than wget?
(Score: 4, Informative) by hemocyanin on Saturday March 21 2015, @01:36AM
curl and wget are different, but they also share a lot of functionality. Where curl really stands out is that you don't need to go through the intermediary of a file to process data -- the default behavior is to dump the text of a page into your terminal, and you can then pipe it through whatever you want to manipulate it from there. It's a great tool to scrape sites for info, saving to file only what you want.
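A minimal sketch of that pipe-friendly workflow (the local throwaway file and the href pattern are just illustrations; file:// keeps the demo offline):

```shell
# curl dumps the page to stdout, so the pipeline does the filtering;
# no intermediate file is ever saved.
printf '<a href="/a">x</a>\n<a href="/b">y</a>\n' > /tmp/demo.html
curl -s file:///tmp/demo.html | grep -o 'href="[^"]*"'
# prints: href="/a" then href="/b"
```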
(Score: 2) by hemocyanin on Saturday March 21 2015, @01:37AM
Lame self-reply -- I believe wget has better mirroring capabilities than curl if you want to mirror a site.
So you see, overlap with different competencies. You should learn both.
(Score: 2) by tynin on Saturday March 21 2015, @02:09AM
I think we both agree this would be better if it were framed as a vi vs. emacs debate. Come on, man, the flames don't spread themselves!
Besides, wget is the original... curl didn't show up until a whole year later. Anything you can do with curl, I can do with wget better.
(Score: 2) by hemocyanin on Saturday March 21 2015, @04:02PM
wget the fuck out of here before i curl you out.
OK -- not so great, my heart isn't in this flame war.
(Score: 2) by Marand on Saturday March 21 2015, @05:53AM
You basically covered it there. They have similarities, but curl's more useful inside scripts or command chains with pipes, and wget's one of the most useful/flexible mirroring tools.
Another nice option is aria2c. It doesn't do recursive downloading like wget, but it's fast, has support for things wget doesn't (metalinks, bittorrent) and has some interesting options like file splitting, settings for multiple connections per server, and some other things. I first found it a while back when I needed metalink support for something* and neither curl nor wget had it (curl has it now I think), and ended up liking it.
Now I mostly only use wget for mirroring and use aria2c or curl for everything else.
* I don't even remember what I needed it for, now; I vaguely remember it being something on an open source project's page or discussion board, and being annoyed that they had to use some obscure link format that I (correctly) predicted I wouldn't see again for years.
(Score: 2) by wonkey_monkey on Saturday March 21 2015, @08:46AM
I didn't say it was.
systemd is Roko's Basilisk
(Score: -1, Spam) by Anonymous Coward on Saturday March 21 2015, @01:50AM
yo mama is so fat she invades foreign countries from her living room
(Score: -1, Spam) by Anonymous Coward on Saturday March 21 2015, @02:10AM
...is the butt
(Score: -1, Spam) by Anonymous Coward on Saturday March 21 2015, @02:22AM
ciri: tama lama ding dong ping pong
(Score: 2) by CirclesInSand on Saturday March 21 2015, @01:26AM
Yeah, I thought all the cool people used wget. What's the difference between that and cURL?
(Score: 0) by Anonymous Coward on Saturday March 21 2015, @01:54AM
real men use fsockopen
/me cracks open the popcorn :p
(Score: 0) by Anonymous Coward on Saturday March 21 2015, @02:01AM
Can I be hip with netcat
(Score: 0, Troll) by Anonymous Coward on Saturday March 21 2015, @02:29AM
I see several "how is this different from wget?" comments and fairly tepid replies.
The biggest difference is that curl can be used to POST things too.
(Score: 2) by http on Saturday March 21 2015, @03:16AM
wget also has post capability.
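For the record, the two invocations look like `curl -d 'name=value' URL` and `wget --post-data='name=value' URL`. Here's a sketch that actually runs offline by posting to a throwaway local echo server (the port and form body are made up for the demo):

```shell
# curl: -d sets the request body and implies POST.
# wget's equivalent flag is --post-data='name=value'.
# A one-shot Python echo server stands in for a real endpoint.
python3 - <<'EOF' &
from http.server import BaseHTTPRequestHandler, HTTPServer

class Echo(BaseHTTPRequestHandler):
    def do_POST(self):
        # Read the POST body and send it straight back.
        body = self.rfile.read(int(self.headers['Content-Length']))
        self.send_response(200)
        self.end_headers()
        self.wfile.write(body)
    def log_message(self, *args):
        pass  # keep the demo quiet

HTTPServer(('127.0.0.1', 8037), Echo).handle_request()
EOF
sleep 1
curl -s -d 'name=value' http://127.0.0.1:8037/   # prints: name=value
wait
```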
I browse at -1 when I have mod points. It's unsettling.
(Score: 2) by Subsentient on Saturday March 21 2015, @05:18AM
I've used libcurl for some important stuff, and I still do for my bot, aqu4. libcurl is wonderful.
"It is no measure of health to be well adjusted to a profoundly sick society." -Jiddu Krishnamurti
(Score: 2) by darkfeline on Saturday March 21 2015, @07:02PM
I tend to stick with Python and BeautifulSoup for my web scraping purposes, so I don't really use curl. Sometimes I use wget for fetching sources and other files from the command line.
I can see how curl might be useful combined with, say, Perl, but you really should use a dedicated HTML parser instead of regexes to scrape pages.
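A sketch of the parser-over-regex point, using only Python's standard library (html.parser) since BeautifulSoup may not be installed here; bs4's soup.find_all('a') expresses the same idea more tersely:

```shell
# A real parser tracks tags and attributes for you, which regexes
# on raw HTML famously do not.
python3 - <<'EOF'
from html.parser import HTMLParser

class LinkGrabber(HTMLParser):
    """Collect href attributes from <a> tags, ignoring everything else."""
    def handle_starttag(self, tag, attrs):
        if tag == 'a':
            for name, value in attrs:
                if name == 'href':
                    print(value)

LinkGrabber().feed('<p><a href="/a">x</a> <a href="/b">y</a></p>')
# prints: /a then /b
EOF
```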
I see that curl also supports protocols like Kerberos and IMAP, which does sound useful, but I haven't needed it.
Join the SDF Public Access UNIX System today!
(Score: 2) by Yog-Yogguth on Sunday April 05 2015, @12:13PM
Thanks for mentioning Beautiful Soup; I hadn't heard of it before!
If anyone is looking for a Python challenge, they could try using it with rules of their own making (stripping out tags based on tag content? stripping away non-text? adding required features? detecting structure/layout from heading tags, i.e. <H1> etc.? smiley/emote/emoticon detection and translation?) aimed at working around non-ADA-compliant web pages by whipping them into faked ADA compliance before piping them to a screen reader or other assistive technology. Maybe someone will get inspired :)
Even bigger challenge: parsing through the javascript to see if there's anything sensible there, or functionality that can somehow be duplicated into a link, or if it's just fluff to be discarded.
[Ideas? Check. Time and knowledge? Sorry I haven't got enough (not yet at least but likely never).]
Bite harder Ouroboros, bite! tails.boum.org/ linux USB CD secure desktop IRC *crypt tor (not endorsements (XKeyScore))