The operator of a website that accepts subscriber logins only over unencrypted HTTP pages has taken to Mozilla's Bugzilla bug-reporting service to complain that the Firefox browser is warning that the page isn't suitable for the transmission of passwords.
"Your notice of insecure password and/or log-in automatically appearing on the log-in for my website, Oil and Gas International, is not wanted and was put there without our permission," a person with the user name dgeorge wrote here (the link was made private shortly after this post went live). "Please remove it immediately. We have our own security system, and it has never been breached in more than 15 years. Your notice is causing concern by our subscribers and is detrimental to our business."
Around the same time this post was going live, participants in this Reddit thread claimed to have hacked the site using what's known as a SQL injection exploit. Multiple people claimed that passwords were stored in plaintext rather than the standard practice of using cryptographic hashes. A few minutes after the insecurity first came up in the online discussion, a user reported the database was deleted. Ars has contacted the site operator for comment on the claims, but currently Ars can't confirm them. The site, http://www.oilandgasinternational.com, was displaying content as it had earlier at the time this post was being updated.
As a member of the Mozilla developer team pointed out in reply to the complaint, both Firefox and Chrome routinely issue warnings whenever users encounter a login page that's not protected by HTTPS encryption. The warnings became standard earlier this year.
The site in question appears to be completely offline at this time.
Source: ArsTechnica
(Score: 5, Insightful) by Rosco P. Coltrane on Wednesday March 22 2017, @05:18AM (5 children)
The site wasn't hacked in 15 years because nobody gave a shit about it for 15 years. Not anymore...
(Score: 1, Insightful) by Anonymous Coward on Wednesday March 22 2017, @05:25AM (3 children)
Yeah that was pretty much 'challenge accepted'. Then they showed him his entrails before blowing away his db.
(Score: 0) by Anonymous Coward on Wednesday March 22 2017, @05:40AM (2 children)
Don't worry, Trump will bail out his business.
(Score: 1, Touché) by Anonymous Coward on Wednesday March 22 2017, @05:53AM
They'll pay Hillary 20bn to route their pipe through Syria instead.
(Score: 0) by Anonymous Coward on Thursday March 23 2017, @03:51AM
Don't think Trump as a business partner would bail him out. He strikes me as the 'you're fired' sort of guy....
(Score: 2, Insightful) by Soylentbob on Wednesday March 22 2017, @07:03AM
Rather: The site administrators didn't notice any hacks for 15 years.
Because not everyone is kind enough to announce their presence. (Ok, not sure if querying user-data via SQL injection counts as hacking the site, or if it counts as just using the offered access.)
(Score: 4, Informative) by Soylentbob on Wednesday March 22 2017, @05:41AM (17 children)
Here [reddit.com] are some details on how the site was taken offline. Summary:
Precondition:
Site admin claims his [Windows 2003 based, IIS powered] site is secure, states on site "All credit card information is encrypted using our Secure Transaction Server."
Action:
- Developer (let's call him Bobby [xkcd.com]) calls admin (number was on website), tells him site is insecure
- Admin hangs up call
- Bobby tries to log in, "accidentally" drops table with userdata
- Bobby calls admin again, tells him to try to log in to his website
Result:
Game over, site taken offline
Personally, I usually don't like to take delight in the suffering of others. In this case, though, I think the disruption of the site was a public service, and probably even the admin himself is better off this way. From what I read, the accounts were stored with passwords in cleartext in the DB. Obviously, credit card data was not encrypted on the way to the website. In case of stolen userdata, that might have been construed as criminal negligence? Bobby just dropped the table.
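For the curious, the difference between an injectable query and a safe one is a one-line change. A minimal sketch using Python's sqlite3 (the table, credentials, and injection string are made up for illustration):

```python
import sqlite3

# Toy in-memory database; table and column names are illustrative only
db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE users (name TEXT, password TEXT)")
db.execute("INSERT INTO users VALUES ('alice', 'hunter2')")

def login_vulnerable(name, password):
    # DON'T: string formatting lets user input rewrite the query itself
    query = "SELECT * FROM users WHERE name = '%s' AND password = '%s'" % (name, password)
    return db.execute(query).fetchone() is not None

def login_safe(name, password):
    # DO: placeholders keep input as data, never as SQL
    query = "SELECT * FROM users WHERE name = ? AND password = ?"
    return db.execute(query, (name, password)).fetchone() is not None

# Classic injection: the always-true predicate bypasses the password check
print(login_vulnerable("alice", "' OR '1'='1"))  # True: logged in without the password
print(login_safe("alice", "' OR '1'='1"))        # False: treated as a (wrong) password
```

In the vulnerable version the query becomes `... AND password = '' OR '1'='1'`, which matches every row; the parameterized version never lets the quote characters escape into the SQL.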
(Score: 2) by sgleysti on Wednesday March 22 2017, @05:47AM (16 children)
What really surprised me about this article:
both Firefox and Chrome routinely issue warnings whenever users encounter a login page that's not protected by HTTPS encryption. The warnings became standard earlier this year.
It's crazy they're only warning about this now.
And plaintext password db is utter amateur. Even as a self-taught php script kiddie in high school, I knew to salt and hash passwords before storage; it's in the docs.
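It really is only a few lines in any language. A sketch using nothing but the Python standard library (PBKDF2 with a random per-user salt; the iteration count and salt length are illustrative, not a tuning recommendation):

```python
import hashlib, hmac, os

def hash_password(password: str) -> bytes:
    # A fresh random salt per user defeats precomputed rainbow tables
    salt = os.urandom(16)
    digest = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 100_000)
    return salt + digest  # store the salt alongside the hash

def verify_password(password: str, stored: bytes) -> bool:
    salt, digest = stored[:16], stored[16:]
    candidate = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 100_000)
    return hmac.compare_digest(candidate, digest)  # constant-time comparison

stored = hash_password("correct horse battery staple")
print(verify_password("correct horse battery staple", stored))  # True
print(verify_password("hunter2", stored))                       # False
```

The database only ever sees the salt and the derived hash; even a full dump of the table doesn't hand out anyone's actual password.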
(Score: 0) by Anonymous Coward on Wednesday March 22 2017, @06:01AM (11 children)
Passwords are so amateur. The only security I use is webknocking and my sites have never been breached.
(Score: 1) by Soylentbob on Wednesday March 22 2017, @06:43AM (10 children)
The only security I use is webknocking
Yes, I think soylentnews is using something similar. To get any content, or, more fundamentally, any connection at all, you first have to knock using a so-called "syn"-request.
(Score: 0) by Anonymous Coward on Wednesday March 22 2017, @07:10AM (9 children)
That's port knocking, numbskull. Web knocking is where you request a certain secret 404 page and quietly get admin access granted without a password.
(Score: 2, Informative) by Soylentbob on Wednesday March 22 2017, @07:26AM (8 children)
Your irony detector is broken. And either mine is damaged as well or you actually think that security by obscurity is a good idea. (With HTTPS, the URL would still be in browser history etc.)
(Score: 2, Touché) by Anonymous Coward on Wednesday March 22 2017, @07:56AM (6 children)
You mock security through obscurity yet HTTPS provides security only for as long as a secret key remains obscured.
(Score: 0) by Anonymous Coward on Wednesday March 22 2017, @09:52AM
Actually, the path of that secret 404 page could indeed be considered a password, as it is doubtless easy to change it. Of course it still is a bad idea because the "password" will likely end up in your browser history (and your browser might "helpfully" provide it to a number of anti-phishing sites as well).
(Score: 1) by Soylentbob on Wednesday March 22 2017, @10:37AM (3 children)
Actually, I was also wondering whether the URL could be considered as safe as a password. As the AC above already mentioned, there are some reasons why it might not be comparable. (I already mentioned the browser history as a potential problem in my previous post.) On top of his arguments, the secret URL is only secret if it is used together with HTTPS (afaik everything but the domain name, including the remainder of the URL, is encrypted with HTTPS). With plain HTTP, the full URL including the secret part would be transferred unencrypted.
(Score: 0) by Anonymous Coward on Wednesday March 22 2017, @11:32AM (2 children)
Of course, with HTTP the password in your POST form would be transferred unencrypted as well, so there is no difference.
BTW, how are passwords from .htaccess (how passwords were originally meant to be implemented) transmitted when using HTTP?
(Score: 1) by Soylentbob on Wednesday March 22 2017, @11:42AM (1 child)
Of course, with HTTP the password in your POST form would be transferred unencrypted as well, so there is no difference.
The difference is still the entry in the browser history, which might be accessible to malicious JavaScript or plugins.
BTW, how are passwords from .htaccess (how passwords were originally meant to be implemented) transmitted when using HTTP?
Seems to be [stackoverflow.com] plain-text when not using https.
(Score: 0) by Anonymous Coward on Wednesday March 22 2017, @01:42PM
Helpfully (not), as that link mentions, every modern browser supports HTTP Digest Authentication [wikipedia.org], which actually is a well-designed password authentication scheme (it does use MD5; I'm unsure whether that is a security problem now). But no browser UI I'm aware of has ever distinguished between the secure digest auth method and the "send the password in plaintext" method (which isn't entirely stupid if the connection is HTTPS), pretty much completely nullifying the security of digest auth. The "modern browser" part, with IE6 not supporting it, is what killed it years ago.
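For reference, the digest response is just nested MD5s over the credentials and the request, so the password itself never crosses the wire. A sketch of the RFC 2617 qop=auth computation in Python (all credential and nonce values below are made up):

```python
import hashlib

def md5_hex(s: str) -> str:
    return hashlib.md5(s.encode()).hexdigest()

def digest_response(user, realm, password, method, uri, nonce, nc, cnonce, qop="auth"):
    # RFC 2617, qop="auth": response = MD5(HA1:nonce:nc:cnonce:qop:HA2)
    ha1 = md5_hex("%s:%s:%s" % (user, realm, password))  # secret half; never sent
    ha2 = md5_hex("%s:%s" % (method, uri))               # binds response to the request
    return md5_hex(":".join([ha1, nonce, nc, cnonce, qop, ha2]))

# Illustrative values only
resp = digest_response("alice", "example.org", "hunter2",
                       "GET", "/index.html",
                       "dcd98b7102dd2f0e8b11d0f600bfb0c0",
                       "00000001", "0a4f113b")
print(resp)  # a 32-char hex digest; only this, not the password, goes on the wire
```

The server, which knows the same password, recomputes the same digest and compares; an eavesdropper only ever sees hashes tied to a one-time nonce.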
(Score: 2) by tangomargarine on Wednesday March 22 2017, @03:49PM
By that logic, all security is security through obscurity if it relies on select people knowing how to unlock the system. I guess securing something such that *nobody* can access it would fall outside that definition.
"Is that really true?" "I just spent the last hour telling you to think for yourself! Didn't you hear anything I said?"
(Score: 2) by HiThere on Thursday March 23 2017, @12:04AM
Security by obscurity is a valid way to buy time while trying to come up with something better. But try to be really obscure, like the prime factors of a large number or something.
Javascript is what you use to allow unknown third parties to run software you have no idea about on your computer.
(Score: 0) by Anonymous Coward on Wednesday March 22 2017, @06:26AM
I think what happened "earlier this year" is they went from a little yes/no dialog warning to a big "get me out of here" full-screen scare warning. I'm pretty sure firefox has warned about this sort of thing since it was firebird.
(Score: 0) by Anonymous Coward on Wednesday March 22 2017, @06:28AM
I think you hit the nail on the head there, the people that design these clusterfucks were never script kiddies in high school.
(Score: 0) by Anonymous Coward on Wednesday March 22 2017, @08:32AM
It's crazy they're only warning about this now.
The crazy thing is that the warning arrived just as people finally started to understand that HTTPS is completely broken.
(Score: 4, Insightful) by driverless on Wednesday March 22 2017, @10:36AM
It's crazy they're only warning about this now.
Hey, it's only taken them twenty years to get this far. They'll still report a site claiming to be Bank of America that's hosted in Kazakhstan on a Windows 7 Home Premium laptop as legit, provided it has a $5.99 GoDaddy certificate.
(Score: 2) by Arik on Wednesday March 22 2017, @05:59AM (8 children)
HTTP was unfit for anything more than testing transport from the beginning. Less than a year ago browser makers finally decided to pretend they care.
If laughter is the best medicine, who are the best doctors?
(Score: 5, Informative) by maxwell demon on Wednesday March 22 2017, @06:26AM (7 children)
No, http was perfectly fit for the original purpose, to distribute public scientific information to fellow scientists.
The Tao of math: The numbers you can count are not the real numbers.
(Score: 2, Informative) by Arik on Wednesday March 22 2017, @06:34AM (5 children)
If laughter is the best medicine, who are the best doctors?
(Score: 4, Informative) by Zyx Abacab on Wednesday March 22 2017, @07:23AM (1 child)
Editors routinely change authors' text without consent, or with forced consent. This sometimes leads to errors and false information. Are books insufficient in conveying linguistic messages?
HTTP works perfectly well for its original purpose: transferring static hypertext documents, as well as a limited set of auxiliary documents, from server to heterogeneous clients.
The problem, if any, is that people keep using HTTP for stuff it can't do well; like authentication across an insecure network, or transferring sensitive information.
And just because the solutions to those particular problems lie above, or below, HTTP doesn't mean that HTTP itself sucks—it just means that those particular solutions have been implemented on another layer.
(Score: 0) by Anonymous Coward on Wednesday March 22 2017, @03:11PM
Blame commercialization of the Internet.
When it was mostly LANs with interconnects between numerous trusted administrative domains, and not "foreign" entities, user IDs and passwords were privacy locks, not security locks. One bad apple placed amongst good ones can still ruin them; jerks could still mess that up, but scientific communities had more jerk egos than jerk trolls looking to cause trouble.
It isn't true, but I blame AOL for it. At least dial-up was reasonable security: wire-tapped modem traffic, recorded and played back later for review, was not as easily deciphered as modern traffic is.
(Score: 1, Interesting) by Anonymous Coward on Wednesday March 22 2017, @06:29PM
How is this a troll? It seems the mods on crack from the green site have found their way over here...
(Score: 0) by Anonymous Coward on Thursday March 23 2017, @03:54AM (1 child)
It is fine for my at-home use, for trawling through imdb for interesting info, or other 'pictures'. But it is not designed for 'trusted' communications. That is what https helps mitigate (note: mitigate, not solve).
(Score: 2) by Arik on Friday March 24 2017, @12:08AM
Sounds good, but doesn't work well in practice. First off, only encrypting the interesting bits just makes them easier to pick out, and the metadata that cannot be encrypted is immediately collectible. If anyone on the network is ever going to need security, then it actually needs to be baked in from the beginning and applied as the default, not an oddball option 'normal' people shouldn't think about.
If laughter is the best medicine, who are the best doctors?
(Score: 1, Insightful) by Anonymous Coward on Wednesday March 22 2017, @06:37AM
The purpose of the World Wide Web has been porn since 1992 [wikipedia.org].
(Score: 5, Insightful) by Shimitar on Wednesday March 22 2017, @07:16AM (33 children)
Ok, this is why i don't like HTTPS in general: the cert system is fundamentally broken and flawed. Not from a technical point of view (well, maybe that too, but it is not my point), but because it's basically a scam to rob money.
If i run my website (i run a few of them, none for profit) and i need to go HTTPS because, well, because it's better that way of course, what can i do? Pay up quite some money to some company which has the privilege of having its root certs pre-installed in users' browsers.
Yes, there are a few free solutions, but VERY few are accepted by common browsers, and no, asking the users to install a root certificate in the browser OR accept self-signed certs is not good or viable in my opinion.
I run self-signed on many of my web pages, but often i need real certs, for example for any service which connects to an Android device, where installing/accepting self-signed certs requires a lock screen with password to be enabled... Or for any service which needs to be accessed by users who are not part of my family (and can be convinced to accept self-signed warnings).
This mozilla move to show very annoying reminders on password forms is another one i don't like. Now people have started to complain to me too, for no reason, since it's a private network with no internet access and two-factor authentication embedded, so that passwords are actually almost unneeded, but still required due to internal regulation.
Certificates should be freely available from some certified organization (more than one)...
... am i being too much IoT-socialist here?
Coding is an art. No, java is not coding. Yes, i am biased, i know, sorry if this bothers you.
(Score: 5, Informative) by bradley13 on Wednesday March 22 2017, @07:26AM (25 children)
I agree with your point of view: the concept of centralized CAs is fundamentally broken. More so because any CA can issue certs for any domain - how dumb is that?
However, the rest of your post has been wrong for at least two years, and it's called "LetsEncrypt". Free certs, accepted by all major browsers, and dead easy to install and renew.
You may disagree with the system - heck, I disagree with the system - but for the moment, we have to live with it. LetsEncrypt at least makes it painless.
Everyone is somebody else's weirdo.
(Score: 4, Informative) by Anonymous Coward on Wednesday March 22 2017, @08:49AM (15 children)
We've been looking at letsencrypt at work and went with Verisign, because Verisign is cheaper. The thing is, with Verisign you get a certificate and can forget about it for a year or two, whereas letsencrypt certificates expire so fast that you either have to hire someone just to update certificates, or use their horrible, root-requiring software that would make anyone who cares about security run away screaming.
And the protocol behind it is just as horrible as the software, making it pretty much impossible to create a KISS letsencrypt client.
(Score: 5, Informative) by The Mighty Buzzard on Wednesday March 22 2017, @10:43AM (9 children)
Yeah, that's an exceedingly foolish statement. It's astoundingly simple to write your own cron job that can be run as any user to update your cert. If you are incapable of writing half a dozen lines of shell script, you should not be administering a server.
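Something like the following, sketched here in Python rather than shell for the non-bash crowd (the certbot invocation and crontab schedule are assumptions about a typical setup, not a drop-in solution):

```python
#!/usr/bin/env python3
# Tiny cron-driven cert renewal job. Assumes certbot is installed and the
# domain's validation method is already configured; certbot itself decides
# whether a cert is close enough to expiry to actually renew.
import subprocess
import sys

def renew(cmd=("certbot", "renew", "--quiet")):
    """Run the renewal command; return True on success, echoing stderr on failure."""
    result = subprocess.run(list(cmd), capture_output=True, text=True)
    if result.returncode != 0:
        # Failures should be loud so cron mails you about them
        sys.stderr.write(result.stderr)
    return result.returncode == 0

if __name__ == "__main__":
    # e.g. crontab entry: 17 3 * * * /usr/local/bin/renew-certs.py
    sys.exit(0 if renew() else 1)
```

Run daily; certbot is a no-op until the cert is within its renewal window, so the 90-day expiry never needs a human in the loop.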
My rights don't end where your fear begins.
(Score: 0) by Anonymous Coward on Wednesday March 22 2017, @03:16PM (6 children)
please post an example, for those of us that do not use your OS of choice.
(Score: 2) by tangomargarine on Wednesday March 22 2017, @03:43PM
If you are incapable of writing half a dozen lines of shell script, you should not be administering a server.
please post an example that is incompatible with my OS of choice.
FTFY ;)
Although now Bash On Ubuntu On Windows is somehow a thing. And Cygwin has long been around.
"Is that really true?" "I just spent the last hour telling you to think for yourself! Didn't you hear anything I said?"
(Score: 3, Touché) by Anonymous Coward on Wednesday March 22 2017, @03:48PM
if you're not using linux or bsd on your server you need to get off the fucking internet with your bullshit.
(Score: 0) by Anonymous Coward on Wednesday March 22 2017, @06:21PM
Windows Scheduler
powershell script
cygwin shell script
new linux subsystem with ubuntu flavored bash script
bat file script
OSX Timed Jobs
shell script
(Score: 2) by Justin Case on Thursday March 23 2017, @01:12AM (2 children)
Also, I know your request is not serious, but if it were, accepting a script from some random stranger on the Internet for maintaining https on your website would be a sure sign of gross incompetence, the type that should get you fired and lifetime-blacklisted.
(Score: 2) by The Mighty Buzzard on Thursday March 23 2017, @10:22AM (1 child)
You need to blacklist yourself then because I guarantee you have done this for your system init jobs. Unless you think sending in a pull request to $distro somehow makes them not a random stranger anymore.
Being able to read the script makes its origin irrelevant. Not being able to read a dozen or two lines of straight-forward shell scripting, now that should get you blacklisted from ever admining anything.
My rights don't end where your fear begins.
(Score: 2) by Justin Case on Thursday March 23 2017, @03:01PM
I hear your point but I don't consider a well known, long established, peer vetted signed code repository "some random stranger".
Yes, sometimes they do screw up, and it is widely discussed, and those who are paying attention know what to watch out for.
It is like buying a sandwich from the sandwich shop vs. eating one you found lying on the sidewalk.
(Score: 0) by Anonymous Coward on Wednesday March 22 2017, @08:27PM (1 child)
That only works if your service is one of a small handful of common services that listen on the standard ports, or if your DNS provider offers a remote-access API and you're stupid enough to leave the keys to the kingdom on a remotely accessible server so that your script can access that API.
How would I use LetsEncrypt for the SSL cert on an IRC server listening on a non-standard port (there are no standard ports for an SSL IRC connection) when my DNS host provides no API for automated access to add/modify a TXT record? (If your answer includes something like "get a better DNS host" I'll take that as a "It can't be done and I'm full of shit" answer from you. Note the "stupid enough" section above.) I can find absolutely no way to make this scenario automated to work with LetsEncrypt. It will require manual verification of the host through a manually created DNS TXT record every three months, whereas I could pay for a cert and only have to worry about it every 3 years. One of these is a production-ready solution, the other is a toy useful for little more than testing use.
LetsEncrypt looks really good on paper. It would actually work if everything could be automated. In practice it's a bad joke. Perhaps in a few more years they will have dropped their 3 month expiration stupidity and it will be an actual workable solution. (I know, I know, don't hold your breath as stupidity tends to be forever and the LetsEncrypt administration got a 2for1 deal on their supply.) As it stands right now there are a number of things that just can't be automated.
(Score: 2) by The Mighty Buzzard on Thursday March 23 2017, @10:36AM
In fact, no. You don't even have to have a working service at all to get a cert. You only need access to run a script on the box the cert is for.
You seem to think the port of the service matters. It does not. The certbot script, for instance, will run its own tiny web daemon as necessary during installation/renewal if you don't have a web server already running on the box. The retrieved cert doesn't give a happy damn what service it is for.
As for adding records to DNS, it may be desirable but it is absolutely not necessary to get an IRC server up and running with a valid and verifiable cert.
My rights don't end where your fear begins.
(Score: 0) by Anonymous Coward on Wednesday March 22 2017, @01:47PM
LetsEncrypt's certs are intentionally short-lived to discourage manual setups. It's not that difficult to set up a user that only has permissions to communicate with the LetsEncrypt servers to verify ownership of the domain and update the keys. Then you just set that up and forget about it.
(Score: 2) by theluggage on Wednesday March 22 2017, @01:54PM (3 children)
whereas letsencrypt certificates expire so fast that you either have to hire someone just to update certificates, or use their horrible, root-requiring software that would make anyone who cares about security run away screaming.
The whole point of LetsEncrypt is to make it usable by people who maybe don't know much about security, and certainly couldn't set up groups and permissions on their server that would enable LetsEncrypt to do what it needed without root (chmod -R o+rwx /var/www anybody?). Having the client run as root is one of those trade-offs that happen in real life. In the age of virtualisation and containers, "root" on your webserver shouldn't be able to do much more than delete your website, anyhow.
Anyway, the "official" LetsEncrypt client is more of a proof-of-concept. The real target audience of LE is people who run their wordpress sites on point-and-drool webserver managers like Plesk, cPanel, etc., the latest versions of which already have add-ons implementing the LetsEncrypt protocol, making turning on https a check-the-box option.
If you're smart enough to run your own server without crutches, then you're probably smart enough to tweak one of the available LetsEncrypt client scripts to work with your security model. Or, as you say, pay for a 2-year "traditional" cert... People really seem to be expecting to get something for nothing when it comes to SSL certs...
In the past, I've used free, 1-year StartCom certs - frankly, even using LetsEncrypt manually is easier than that. For one thing, you don't need to be able to receive mail on "webmaster@yourdomain.com" and fish out the verification email from the spam trap...
(Score: 2) by urza9814 on Wednesday March 22 2017, @10:08PM (2 children)
I'd much prefer if it was by email actually, at least as an option. The problem I have with LE is that so far I haven't found a way to get it working that doesn't require reconfiguring my NAT, reverse proxy, firewall, and DNS resolver. Of course, I haven't actually sat down determined to solve that yet; it's more of a "What do I need to do to get it renewed this time...?"

But the problem I have is that, for example, there's no way to get the mail server approved unless there's also a web server on the same subdomain. I don't WANT web servers running on the mail server; I don't want port 80 open at all. And the scripts to renew the certs seem to throw an error if they can't access the server locally, even if it can be accessed remotely. So while external requests have to hit the firewall box and therefore could be routed through the reverse proxy to my main webserver for approval, the internal clients don't use that reverse proxy; they use the local DNS settings, so I have to reconfigure my DNS resolver to point the mail subdomain to my webserver or I get an error.

I've considered configuring EVERYTHING to use the reverse proxy, but the pfsense docs advise against it. I could maybe use a hosts file, but I hate using those... I'd rather configure the software than the system it's running on, and I want to keep all routing logic at the routing layer rather than spewing it across various host files...
(Score: 2) by theluggage on Thursday March 23 2017, @01:41PM
But the problem I have is that, for example, there's no way to get the mail server approved... I don't WANT web servers running on the mail server, I don't want port 80 open at all.
Well, there's also the option to validate by adding a token to the DNS record for your server, which doesn't require a web server. However, you have to remember the primary purpose of Let's Encrypt:
From the Let'sEncrypt website:
The objective of Let’s Encrypt and the ACME protocol is to make it possible to set up an HTTPS server and have it automatically obtain a browser-trusted certificate, without any human intervention
...i.e. the goal is the point-n-drool "enable https" checkbox on your web service control panel (probably next to the "create Wordpress site" button, so moderate your expectations of security!) so if you're not using it to enable HTTPS on a webserver then it's not surprising that it doesn't suit your purposes. Other certificate providers are available.
(Score: 0) by Anonymous Coward on Friday March 24 2017, @06:38PM
You can automate the whole process (opening the port, running the web server, firing up the certbot script, and finally closing the port again) with a cron job.
This way, you minimize the time that port is open, so the risk is taken only for that period of time (seconds?).
(Score: 4, Interesting) by Pino P on Wednesday March 22 2017, @01:08PM (8 children)
Let's Encrypt is the same as all other widely trusted CAs in that it issues certificates only for fully qualified domain names, not names within made-up TLDs (commonly .local or .internal) or private IP addresses (10/8, 172.16/12, or 192.168/16). This means everyone with an Internet gateway, printer, NAS, or other web server at home will have to buy a domain in order to avoid the "insecure" warning when logging in.
(Score: 2) by tibman on Wednesday March 22 2017, @01:56PM (7 children)
If it's internal then you can just install your self-signed cert anyways.
SN won't survive on lurkers alone. Write comments.
(Score: 0) by Anonymous Coward on Wednesday March 22 2017, @03:18PM (1 child)
did you not read what the problem was?
Ignorance. People claim the SSL cert is not valid because the self-signed cert warning appears by default, and as users they do not understand that it is OK on a private network, for devices not on the internet.
That ignorance is widespread.
(Score: 2) by tibman on Wednesday March 22 2017, @04:58PM
A CA-signed local cert is useless because a mitm is as easy as with self-signed. Many people would be able to buy a "gmail.com" cert, for example. Even though all those certs are different, they point to the exact same domain. Just as secure as self-signed, only more misleading. If you have air-gapped machines and you still want ssl then just install the self-signed certificate on the client machines. You obviously would only have a limited number because the ssl server is private.
You say it's okay for an unknown cert to be presented to you for a local domain. That's like, your opinion, man : ) But you could easily mitm anyone on a private network with your idea. You could self-sign and resolve gmail.com to a local machine. Nobody on the network would get any warning. That's bad.
SN won't survive on lurkers alone. Write comments.
(Score: 2) by Pino P on Wednesday March 22 2017, @03:20PM (4 children)
If it's internal then you can just install your self-signed cert anyways.
As far as I can tell, that has stopped working as of Android 7. From "Add & remove certificates" [google.com]:
And it turns out that the developers of Google Chrome have not "cho[sen] to let [it] work with manually added CA certificates." From "User certificate no usable on Android Nougat" [strongswan.org]:
There are two additional complications if you want to support friends and family who are bringing their own devices, such as to stream videos from your NAS. First, you end up having to walk said friends and family through installing your internal CA's root certificate on each device. Second, installing your internal CA's root certificate on an Android device that still usefully supports user CAs, namely one running Android 6 or earlier, causes the device to start requiring a PIN or pattern to unlock it. From "Add & remove certificates" [google.com]:
Is it reasonable to require a visiting friend or family member to go through these steps?
(Score: 2) by tibman on Wednesday March 22 2017, @05:03PM
I'm assuming they already ask you for your wifi password. Showing them how to get that annoying insecure message to go away when they visit your private (local) website surely isn't that big of a deal. On most browsers it's like three clicks.
You also only pointed out one particular OS (a phone OS). An OS that doesn't even let users have full control of itself. Even windows gives you more control over your security.
SN won't survive on lurkers alone. Write comments.
(Score: 2) by tibman on Wednesday March 22 2017, @05:05PM (1 child)
Wish i could edit. Streaming videos from your NAS over https? I'm doubting that, no offense.
SN won't survive on lurkers alone. Write comments.
(Score: 2) by Pino P on Wednesday March 22 2017, @06:29PM
Streaming videos from a NAS will start requiring HTTPS once browsers start restricting the Fullscreen API to secure contexts [w3.org] in order to deter phishing attacks that spoof the entire window manager [feross.org].
(Score: 1) by Arik on Friday March 24 2017, @12:14AM
If Android was truly a Free OS you could simply comment a couple lines and recompile to fix such brain damage, but I bet you can't do that.
"Is it reasonable to require a visiting friend or family member to go through these steps? "
It's not reasonable at all, it sounds to me like a system cleverly designed to give the appearance of offering security, while ensuring that it's such a PITA to actually use that no one will use it. Even the geeks that can figure it out won't use it, for interoperability and support issues.
If laughter is the best medicine, who are the best doctors?
(Score: 3, Interesting) by Zyx Abacab on Wednesday March 22 2017, @07:29AM
No, you're not. Restricting access to these technologies to only those with money, or connections to "trusted" companies, is in fundamental conflict with the open nature of the Internet.
(Score: 1, Funny) by Anonymous Coward on Wednesday March 22 2017, @07:41AM (3 children)
You do know there are CAs driven by non-profit institutions that issue free certificates right?
Also, certificates are about *trust*. The CA entity behind your certificate is supposed to do some level of checking to ensure that you are the person/company you claim to be (depending on the cert level and of course the $$ paid).
(Score: 2, Informative) by Anonymous Coward on Wednesday March 22 2017, @08:42AM (1 child)
Also Certificates are about *trust*
Which is why nobody who cares about security should use a system that allows companies like Verisign and McAfee* to create a new certificate for your site without your permission.
* People who are fans of the capitalist idea that companies operate outside the rule of law can replace these examples with the Turkish and Chinese governments.
(Score: 0) by Anonymous Coward on Wednesday March 22 2017, @10:02AM
Yeah, if China wanted to MITM e.g. soylentnews.org, they could just issue a certificate themselves, and every browser in its default configuration would happily tell you that the site you're connecting to is the real thing and your data is secure.
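In toy form, that's because the default trust model accepts a chain anchored at *any* root in the trust store. A minimal sketch (the "certificates" and CA names here are made up; real X.509 validation also checks signatures, validity dates, and extensions, but the trust decision has the same shape):

```python
# Toy model: a "certificate" is just (subject, issuer), and a trust store is a
# set of root names the browser ships with. The point: every root is equally
# trusted, so any CA can vouch for any domain.
def chain_trusted(chain, trust_store):
    """chain: list of (subject, issuer) tuples, ordered leaf to root."""
    # Each cert must be issued by the next one up the chain...
    for (_, issuer), (next_subject, _) in zip(chain, chain[1:]):
        if issuer != next_subject:
            return False
    # ...and the final issuer merely has to be ANY root in the store.
    return chain[-1][1] in trust_store

store = {"Honest CA", "Hypothetical State CA"}  # both equally trusted
legit = [("soylentnews.org", "Honest CA"),
         ("Honest CA", "Honest CA")]
mitm = [("soylentnews.org", "Hypothetical State CA"),
        ("Hypothetical State CA", "Hypothetical State CA")]
print(chain_trusted(legit, store), chain_trusted(mitm, store))  # True True
```

Both chains validate identically; nothing in the model lets the browser know that the site operator never asked the second CA for anything.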
(Score: 0) by Anonymous Coward on Wednesday March 22 2017, @08:36PM
Please, tell us where we might find these non-profit institutions (note the plural) where we can get these free certificates.
LetsEncrypt? 3-month expiration good for testing use only unless you use it on one of a few select services running on a standard port. Save money on the cert, spend more on administration.
StartCom? BWAHAHAHAHAHAHAHAHA!!!!!! No. Just, no.
Those are the only two I can find when I search for free SSL certs. Most of the hits are for sites that simply redirect to LetsEncrypt.
You, sir/madam, are full of shit.
(Score: 2) by theluggage on Wednesday March 22 2017, @01:03PM (1 child)
Ok, this is why I don't like HTTPS in general: the cert system is fundamentally broken and flawed. Not from a technical point of view (well, maybe that too, but it is not my point),
No, the requirements of the cert system are fundamentally flawed: Joe public wants to connect securely to a website they found on the interwebs. Joe public doesn't want to have to visit the website operator, verify that it really is them and collect a copy of their public key in person, then install it in their browser. Nor do banks want the cost of managing that and educating users.
Or - put simply - doing the job properly isn't a practical option.
Public key cryptography sounds wonderful - until you realise that unless you personally collect "Bob's public key" from Bob himself it still all comes down to trust.
because it's basically a scam to rob money.
Who do you expect to pay for the process of reliably confirming your identity (as a website operator)? There's a reason that the free options have very short validity (e.g. LetsEncrypt) - they have to use cheap, cheerful and, therefore, less than watertight methods of validating identity.
(Score: 0) by Anonymous Coward on Wednesday March 22 2017, @06:20PM
Ok, this is why I don't like HTTPS in general: the cert system is fundamentally broken and flawed. Not from a technical point of view (well, maybe that too, but it is not my point),
No, the requirements of the cert system are fundamentally flawed: Joe public wants to connect securely to a website they found on the interwebs. Joe public doesn't want to have to visit the website operator, verify that it really is them and collect a copy of their public key in person, then install it in their browser.
Or - put simply - doing the job properly isn't a practical option.
Granted, for some value of properly.
But you can do it less improperly than allowing any CA to issue certificates for any domain. You can allow a site to present multiple certificates, so we can remove untrustworthy CAs without throwing up "unknown issuer" warnings for a substantial fraction of the internet. That's what GP was talking about -- it's fundamentally broken in ways that aren't required by practicality.
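Key pinning is one concrete way to express that idea: the client remembers hashes of several acceptable public keys, so one pin can be retired when its CA turns out to be untrustworthy while a backup pin keeps the site reachable without warnings. A rough sketch in that spirit (key bytes are placeholders, not real SPKI data):

```python
import hashlib

def spki_pin(pubkey_bytes):
    # Real pinning hashes the DER-encoded SubjectPublicKeyInfo; bytes here
    # are stand-ins to show the mechanism.
    return hashlib.sha256(pubkey_bytes).hexdigest()

def connection_ok(presented_keys, pinned):
    # Accept if ANY presented key matches ANY remembered pin.
    return any(spki_pin(k) in pinned for k in presented_keys)

old_key, new_key = b"key-from-old-ca", b"key-from-new-ca"
pins = {spki_pin(old_key), spki_pin(new_key)}  # site publishes both pins
pins.discard(spki_pin(old_key))                # old CA distrusted: retire pin
print(connection_ok([new_key], pins))  # True: backup pin still works
print(connection_ok([old_key], pins))  # False: retired key is rejected
```

Because two pins were published up front, dropping the bad one never produced an "unknown issuer" moment for visitors.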
There's a reason that the free options have very short validity (e.g. LetsEncrypt) - they have to use cheap, cheerful and, therefore, less than watertight methods of validating identity.
No, LetsEncrypt has a short lifetime to encourage automation and discourage people from managing their certificates manually, because people who manage certificates manually sooner or later let one expire.
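The check an automated renewer performs is trivial, which is exactly the point: a daily cron job doing this never lets a cert slip, unlike a human with a calendar reminder. A sketch using Python's stdlib `ssl` helper (the 30-day threshold mirrors common practice for 90-day certs; the exact number is illustrative, not any tool's actual configuration):

```python
import ssl

RENEW_THRESHOLD_DAYS = 30  # illustrative; renew well before expiry

def should_renew(not_after, now):
    # not_after: the cert's notAfter field, e.g. "Jun 30 12:00:00 2025 GMT"
    expiry = ssl.cert_time_to_seconds(not_after)
    days_left = (expiry - now) / 86400
    return days_left < RENEW_THRESHOLD_DAYS

now = ssl.cert_time_to_seconds("Jun 01 00:00:00 2025 GMT")
print(should_renew("Jun 30 12:00:00 2025 GMT", now))  # True: ~29 days left
print(should_renew("Sep 01 00:00:00 2025 GMT", now))  # False: ~92 days left
```

Wire that into a daily job that triggers the actual ACME renewal when it returns True, and the 90-day lifetime stops being an administrative cost at all.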
Paid services offer longer validity despite what I consider equivalent levels of validation; the ubiquitous "can receive email at webmaster@example.com" is no more proof of legitimate control of example.com than LetsEncrypt's "can make arbitrary files appear at specified URLs on example.com".
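Concretely, "can make arbitrary files appear at specified URLs" is ACME's HTTP-01 challenge (RFC 8555): the CA hands out a random token, and the server must publish the token joined with a hash of the account key at a well-known path. A sketch of constructing that response (token and key bytes below are made-up placeholders):

```python
import base64
import hashlib

def b64url(data):
    # ACME uses base64url without padding
    return base64.urlsafe_b64encode(data).rstrip(b"=").decode()

def http01_challenge(token, account_key_jwk_bytes):
    # Real clients hash the canonical JSON form of the account key (JWK);
    # the bytes passed in here stand in for that.
    thumbprint = b64url(hashlib.sha256(account_key_jwk_bytes).digest())
    path = "/.well-known/acme-challenge/" + token
    body = token + "." + thumbprint  # the "key authorization"
    return path, body

path, body = http01_challenge("evaGxfADs6pSRb2LAv9IZ", b'{"kty":"EC",...}')
print(path)  # /.well-known/acme-challenge/evaGxfADs6pSRb2LAv9IZ
```

The CA then fetches that path over plain HTTP on port 80 and checks the body, which is why it proves roughly the same thing as the webmaster@ email check: current control of the domain's serving infrastructure, nothing more.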