The TrueCrypt website has been changed; it now has a big red warning stating "WARNING: Using TrueCrypt is not secure as it may contain unfixed security issues". They recommend using BitLocker for Windows 7/8, FileVault for OS X, or (whatever) for Linux.
So, what happened? The TrueCrypt site says:
This page exists only to help migrate existing data encrypted by TrueCrypt.
The development of TrueCrypt was ended in 5/2014 after Microsoft terminated support of Windows XP. Windows 8/7/Vista and later offer integrated support for encrypted disks and virtual disk images. Such integrated support is also available on other platforms (click here for more information). You should migrate any data encrypted by TrueCrypt to encrypted disks or virtual disk images supported on your platform.
Did the TrueCrypt devs (or SourceForge?) get an NSL? They are offering a "new" version (7.2), but apparently the signing key has changed, and a source code diff seems to indicate a lot of the functionality has been stripped out. What's up?
Red Flag! Run away!!!
"It's only a bunny."
"But that rodent has a mean streak a mile wide! Look at all the bones!"
"You're a looney."
So how could Microsoft's end of support for XP have anything to do with an encrypted file format? Only if that file format was actually a caterpillar pretending to be a snake on an iPhone held hostage in Oz. I swear, the world just keeps getting stranger each and every day. And I am more and more smug that I only use Linux, Free Software encryption, the only t oahtthalhglkhfk;ngkm,njhovahv,anv,!!
I swear, the world just keeps getting stranger each and every day.
It's overpopulation, man! There are too many people! With too many ideas in their stupid bulbous heads!
Who choose to spout those ideas anonymously on websites. OMG I'm doing it too! All is lost.
Lazy people can't be arsed to create an account!
Hey, all you lazy AC's! I have an account! I may be hiding behind a fake email that is proxied and tor'ed to the max, but at least I take a stand! (what were we talking about?)
If you're hiding you'd be posting AC, you karma whore, karma whore, karma whore.
Point taken. (Hanging head in shame, made worse by the fact that I have an account, I have karma to lose, a reputation to preserve, a kingdom to rule, and, wait, nothing really matters. (Don't you really hate it, when you are of a certain age, that you can be trapped between Bohemian Rhapsody à la Queen and Disney's latest Princess film? The flaming never bothered me, anyway! (. . . I'm just a little silhouette of a man, Scaramouche. . . . Let it go!! Let it go!!!)
There you go.
It's not paranoia if they really are out to get you.
Seems like the best way to communicate with users in the event of getting shut down with an NSL involved would be to give a reason that is obviously bullshit. They can compel you to not mention the NSL, but they can't compel you to say anything else, right? ("can't"...)
And anyone who uses 7.2 without someone doing a comprehensive code review is obviously a fool.
Maybe they found bugs they can't fix. Sometimes inherent design flaws cause entire projects to be abandoned. At least there's an announcement.
Ya, they should have just abandoned it and let it rot.
Right. And then they go on to recommend fucking BitLocker, proprietary software that is quite probably backdoored by the NSA given how Microsoft seems so cosy with them. Inherent design flaws my ass.
If you don't trust Microsoft, you don't access your secret data under Windows. Because no matter whether you use BitLocker, TrueCrypt or anything else, as soon as you access the data under Windows, Windows will have access to it. So given that Windows and BitLocker are both made by Microsoft, there's no security difference between BitLocker under Windows and TrueCrypt under Windows. Indeed, you could argue that BitLocker under Windows is more secure, since you only have to trust Microsoft, while with TrueCrypt under Windows you have to trust both Microsoft and the TrueCrypt developers.
And no, that TrueCrypt's source code is available doesn't help you in this case, since Windows' source code isn't.
since Windows' source code isn't.
Not to you or me certainly, but it is to some people [windowsitpro.com]. I don't really know how talented their security reviewers are or what NDAs they're bound by, but it isn't fair to say that nobody has access to it.
I see the Wayback Machine for truecrypt.org says:
This URL has been excluded from the Wayback Machine.
Sounds like more than just an abandoned project to me. I might have to go find that tinfoil hat again.
Archive.org retroactively respects robots.txt; anybody could take over a domain and get it excluded from archive.org more or less on the fly.
Huh, I wasn't aware of that. That seems like a misfeature to me, but oh well. Might not need the hat after all then!
It seems like a misfeature, but it's the way they've decided to do takedown requests for people who left sensitive information secured only by obscurity.
I've done this. I think I also had to e-mail them to delete the older versions before I changed robots.txt to deny all.
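For anyone curious, the exclusion mechanism at the time was just robots.txt: if the Wayback Machine's crawler (user-agent ia_archiver) was disallowed, archive.org would retroactively hide the domain's snapshots. A deny-all file for it looks like this:

```
# Block the Internet Archive crawler; archive.org then hides past snapshots too
User-agent: ia_archiver
Disallow: /
```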
We don't know enough for any firm conclusions. That said, my guess is 'warrant canary'.
http://en.wikipedia.org/wiki/Warrant_canary [wikipedia.org] (In case anyone isn't already familiar with the term.)
The tone is pretty odd, eh? You may be right.
One would expect truecrypt to attract more notice now that it is being audited. They just raised $16,000 to put it through serious auditing. This is a crazy time to nuke the project, unless they got an NSL.
The audit revealed the true encryption algorithm was ROT13. Further development was deemed impossible without breaking backward compatibility.
The audit revealed the true encryption algorithm was ROT13. Further development was deemed impossible without breaking backward compatibility.
Those were security people. They certainly knew that these days you need triple-rot13 to be truly secure. The problem the audit found was that the second step wasn't a decryption step, as triple-rot13 requires, but an additional encryption step.
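For the humour-impaired: rot13 is its own inverse, so "triple-rot13" buys you exactly nothing more than single rot13, which is the joke. A quick Python check:

```python
import codecs

def rot13(s: str) -> str:
    # rot13 is an involution: applying it twice returns the original text
    return codecs.encode(s, "rot13")

msg = "attack at dawn"
once = rot13(msg)
assert rot13(once) == msg                # the second application decrypts
assert rot13(rot13(rot13(msg))) == once  # so triple-rot13 == single rot13
```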
The "U.S." to "United States" change is most probably an artifact of changing Visual Studio versions.
You can see the same happening here, for example: https://chromium.googlesource.com/webm/webmdshow/+/74379b419a791c5d81f1120c0f23e28d19cf03eb%5E!/ [googlesource.com]
I was actually going to mention the U.S. -> United States thing, but thought I would come off as being overly silly.
So here's something else: towards the beginning of the diff, some options of a dialog box got changed (that didn't seem obviously related to the neutering), and the option to choose 'No' was removed.
Secretly communicating what happened via the missing 'No' option seems like something they would do. Bothersome, because I really liked TrueCrypt. :(
Technicality only: it's not a "warrant canary" (which, if not updated, means something went wrong) but rather a "scorched-earth trap" (step on it and everything blows; nobody gets anything, not even the attacker).
The "warrant canary" is effective because, to send the signal, you just obey an order to do nothing (I suspect that, for the US, there may be an amendment which protects innocent citizens against forced labor - e.g. working to introduce a backdoor against my will).
The TrueCrypt crippling is a destructive step that requires an action, so there may be some "contempt of court" issues.
In any case, one cannot dismiss a Lavabit 2.0 scenario in progress.
I have to agree - it seems to me that the most likely scenario is LavaBit all over again:
Truecrypt has been a hugely valuable tool for millions of people. It is cross-platform and it is absolutely easy to use. I've tried other solutions out there, and no other platform independent solution is nearly as good on the usability front - and usability is critical to security applications or else people won't bother with them...
We need Truecrypt, or an equivalent replacement...
What is stopping the government's secret, already illegal, orders from including the requirement to keep the "canary" in place. They just show up, root your servers, and tell you to act like nothing happened and do NOT take down the canary notice.
The point of the canary is that if it isn't updated frequently one should assume that something is wrong.
Hmmm, stripped down site warning users of security, recommended use of proprietary software long known to be at the beck and call of various interests... High five for the red flag and ridiculous alternative methods! And after a recent report about skyrocketing encryption over chat networks. Gee, the people found out the tin hatters were RIGHT on the whole privacy thing and are looking for ways to secure themselves. It will be interesting to see the reaction when people get told they're not allowed to safeguard their own information.
If they're tricky they'll do it through a central validation committee / agency / whatever. Certified, for your protection. Maybe encryption becomes a thought crime and we can fully settle in to our dystopian nightmare.
** Funny side note, dystopian is apparently not a word my browser knows. Suggested change: utopian
Suggested change: utopian
Double-plus good spellchecker.
Mozilla's spellchecker has a low SAT score, it doesn't even recognize "spellchecker" :P
At least I wish the rumour / media mill had a TILT mode like old pinball machines - it should be invoked when the amount of speculation around a particular topic exceeds the availability of genuine attributable information by a certain factor (or in this case exponential factor).
Let's bring some rationality to proceedings.
What's the history?
TrueCrypt v7.1a has been around since early / mid 2012 (to my knowledge). It provides a tool for creating encrypted volumes as files as well as plausibly deniable hidden filesystems.
What's Plausible Deniability?
In this case it's an encrypted filesystem or volume that an adversary cannot prove exists. i.e. without the correct tool to access it and the correct passphrase to decrypt it, it is indistinguishable from random guff (or noise) on a hard drive. i.e. even in bastard countries (looking at you England) where failing to provide keys to unlock an encrypted volume is reason for jail time you have a way to secure your data.
Why Have TrueCrypt's Developers Remained Anonymous - That's Got To Be Dodgy Right??!
Think about this for a while. If you were developing a tool that enabled the ability to create entire computer filesystems that could be hidden from view from even the most well funded national intelligence agencies, what would you do?
If you make yourself known you open yourself to all manner of coercion from both state and criminal actors. If you coalesce as an incorporated business / not-for-profit / charity then you are subject to the whims of the jurisdiction(s) that (a) your corporation is formed in and (b) your developers and all associated reside in.
Why The Brouhaha now?
The TrueCrypt site www.truecrypt.org has been redirected to their project pages on SourceForge at http://sourceforge.net/projects/truecrypt/ [sourceforge.net], which carries all manner of red text warning that TrueCrypt 'may' contain unfixed security issues, along with a massively cut-down release of TrueCrypt v7.2 (source and binary) that only offers the ability to decrypt volumes.
What Does It All Mean?
In true QI fashion, I have to raise the table-tennis bat labelled "Nobody Knows".
OK - So Sum This Up Rather Than Wave Your "Nobody Knows" Card Around
If I must. This is speculation all the way down from here on out.
Scenario A
----------
Developers of TrueCrypt have had enough and want to give it up. Regardless of how mathematically correct their choice of cryptography or how perfect their implementation, the ever-growing processing power available to brute force encrypted volumes means that anything encrypted today has only a limited lifetime before it is revealed. This is not anything new or stunning: encryption has never been a forever solution, always a "keep stuff safe until past the point it becomes useful" solution.
Leaving the scene with a source and binary that can decrypt all previously encrypted volumes while stripping out all encryption routines is laudable. It's only a matter of time before those encryption routines become brute-forceable in minimal time. Better not to offer encryption at all than to offer something easily broken.
Scenario B
----------
One or more developers received a secret court order (I'll use the US-centric NSL abbreviation henceforth) to allow access or to subvert TrueCrypt to be more compliant to an agency's probing. Rather than take that NSL at face value, the developer(s) in question have taken the project-suicide route in whatever fashion was open to them that could alert the userbase whilst keeping them safe from a lifetime in some hell-hole gaol for contravening non-disclosure of an NSL.
Scenario C
----------
A TrueCrypt developer was sloppy and an infiltrator managed to subvert every domain associated with TrueCrypt, along with code signing keys etc., to be able to pull this off. Even their mailservers are bouncing all messages with "recipient unknown" responses.
Given the relatively long history of TrueCrypt I am inclined to A > B > C at this point.
One thing is sure, TrueCrypt has passed a point of no return with this. As the developers are anonymous, there is no way for them to reclaim their ground if this was indeed an audacious hack.
The following are points of reference:
Linux Users
LUKS and EncFS - even EncFS within LUKS devices for some layering.
Windows Users
BitLocker (version restrictions apply) if you trust MS - which you must if you run Windows.
Apple Users
Not my thing, so can't help.
I'm sorry that post is so ugly. It was formatted better but I fell foul of slashcode's "Junk Characters" filter on submitting and was too tired to redo it all, so I stripped out my lazy underlining.
But the take-away from your post is that Windows users are hosed. But I repeat myself.
Everyone is hosed if that is your take-away. Not just Windows users, Linux users, Apple users, Android users, ChromeOS users, ~everyone~.
Run encryption atop your OS, well you have to trust the OS provider and the encryption software provider.
Run OS provided encryption - now you only have one party to trust.
If the OS is complicit then it matters not a jot what the encryption layers on top of it do, you're hosed.
The actual take-away is: file(system) encryption is fine for preventing casual thieves who purloin your mobile devices from gaining access to your files, but it's not to be relied on for defeating snooping.
But then it was never supposed to be a panacea, it does what it says on the tin - encrypts your shit while at rest (for now).
Which makes me think - say you went to the bother of one of these TrueCrypt encrypted hidden partitions then installed OS of choice on it... what prevents the OS of choice giving away whatever crown jewels? Or the underlying hardware for that matter?
Maybe this is the take-away... TrueCrypt was over a decade old and pre-dated mass Internet usage, having a deniable OS then was useful. Now not so much, it is our online footprint that betrays us.
I honestly don't know.
You are largely right. The most important thing to grasp is that TILT is the new default, courtesy first and foremost of the NSA: if you are running COTS hardware (as nearly everyone does) you cannot trust your hardware. This is not because the NSA and others have hardware implants (which they have plenty of), and it is not because they could weaken specific logic gates in whatever chips you are using (they could; it's not science fiction), nor is it because of the efforts of the NSA's TAO and their software (which is likely best of the best and amazingly brilliant). It runs deeper: it has become more than apparent, and verified, that the NSA (and any other such organization we might not even know about) has no apprehension whatsoever against using its unlimited clout to sift and/or record all data in existence using any means possible.
Sure, in exceptional cases that does mean they'll use the aforementioned. It has also been shown that they will use secret courts and secret court orders and "national security letters" and any "legal device" (even if illegal), and influence industry standards (it doesn't matter all that much whether they strengthen or weaken said standards, since it is clear that they'll do whatever suits what they think is in their own interest).
There is no reason to assume that their efforts stop there! Social hacking is always easier. If one can manipulate the foundations of academic research or industry-wide best practices or practical technical solutions, it is worth far more than millions of later weaknesses. If you think you own the rabbit hole you want to make it as deep as possible: all the way down, forever, because it becomes tremendously more efficient the deeper it goes, and they have the resources to do just that.
One should by now recognize that the NSA was and will always be a "bad faith" [wikipedia.org] actor (personally this is what hurts the most). This very fact is the negative ramification of the Snowden leaks that is almost suspiciously absent from the reports on the damages caused to the US: the leaks have removed any possibility of "good faith" status for the NSA (and also the US government) and this very "good faith" status was one of the most if not the most useful property/tool/attribute they had.
Why isn't this being spelled out by the reports on the consequences of the leaks? Because they're hoping people won't notice; they're trying to avoid the Streisand effect because they would like to cling on to the incorrect "good faith" status and have as many people as possible continue to assume that they are "good faith" actors.
They can't broadcast one of the most immediate and profound damages because it would exacerbate the troubling truth: they are not the "good guys", they are evil, and they represent the doom of humanity just like the Nazis and the Commies did only with vastly improved technology.
The classic solution to the problem of a needle in a haystack is to set fire to the haystack. It won't matter that the initial motivation was to find the needles to remove and/or destroy them in order to save the hay: the solution remains the same.
Which lazy underlining? Tags should "just work". We do at least know what the underlying issue is with UTF-8 support, though we haven't managed to fix it as of yet.
I guess he didn't use proper HTML tags, but used lots of "-" or "=" for "underlining", causing a repetition filter to trigger
The entire commenting engine needs a rework. It's on the TODO list, but ENOTIME. The problem is slashcode basically uses HTML::Validater as its comment engine, and that wasn't really meant to be used in the way we're using it. Ideally, I'd love to rewrite it to use some bbcode-based system (something we've gotten requests for), and make it a bit less stupid.
Still, it does work for 95% of comments but bleh.
I don't get the advantage of bbcode. So you write [ and ] instead of < and >? I don't see the big difference (except that the bbcode keys are harder to type on my QWERTZ keyboard :-)).
Now if you supported Markdown for comments, that would IMHO be a true improvement. I have no idea whether it would be more work than bbcode, though.
I feel like an ID10T for asking, but what is Markdown?
It's a way to format texts. Unlike bbcode, it doesn't rely on tags. Some of the syntax will be familiar to people previously on Usenet (e.g. quoting by starting lines with > or emphasizing by enclosing in *asterisks*), others will be familiar to people used to Mediawiki (e.g. preformatted code a la <ecode> through indentation).
See https://en.wikipedia.org/wiki/Markdown [wikipedia.org] for details.
One site I know using Markdown (with a few extensions) is Stackexchange.
I seem to remember something about plans of offering SoylentNews over NNTP; in that case, the fact that some of the Markdown syntax matches the syntax traditionally used in email and Usenet posts for the same purpose (as well as Markdown text being very readable by itself) might prove useful.
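For a concrete taste, a minimal sample of the syntax described above (quoting, emphasis, and indentation-based code blocks):

```
> a quoted line, Usenet-style
Some *emphasized* text.

    an indented line renders as preformatted code, a la <ecode>
```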
SN over NNTP is a reach goal; definitely something I want to do, but no idea when it might happen. I'll look more into Markdown; thanks for the link.
Exactly right, I used oodles of === for underlining. It was my own laziness that caused the problem rather than a Soylent code issue.
Your Scenario A is complete bullshit. If it were true that TrueCrypt volumes could be brute forced, then that means that EVERYTHING can be brute forced. This is known to be false. Brute forcing 256-bit symmetric keys requires energy equivalent to that of an exploding star to do in a reasonable amount of time. Brute forcing a single 128-bit key will still require at least several hundred terawatt-hours of energy, assuming you had perfectly efficient computers capable of doing operations at the von Neumann-Landauer limit, and probably no one has those. In any case I'm pretty sure that everyone would notice if that data centre in Utah was gobbling hundreds of terawatt-hours of energy. It seems that the US consumes about 13,000 kWh of electricity per capita per year, so 100 TWh is the equivalent of a city with seven million inhabitants. It's like the energy consumption of a city almost the size of New York, and that much energy being consumed cannot go unnoticed.
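The back-of-the-envelope behind those terawatt-hour figures is easy to reproduce (a rough sketch; the 300 K room-temperature assumption and the 13,000 kWh per-capita figure are taken as given):

```python
import math

k = 1.380649e-23                  # Boltzmann constant, J/K
T = 300.0                         # assume room temperature, K
landauer = k * T * math.log(2)    # minimum energy per bit operation, ~2.9e-21 J

ops = 2 ** 128                    # worst-case brute force of a 128-bit key
joules = ops * landauer
twh = joules / 3.6e15             # 1 TWh = 3.6e15 J
print(f"~{twh:.0f} TWh")          # ~271 TWh: "several hundred terawatt-hours"

# per-capita comparison at 13,000 kWh/person/year (1 TWh = 1e9 kWh)
people_m = twh * 1e9 / 13_000 / 1e6
print(f"~{people_m:.0f} million people's annual electricity use")
```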
I have argued on the other site why it seems unlikely that the NSA possesses some classified cryptanalytic result on AES/Rijndael (the most widely used block cipher in TrueCrypt), or that they modified the algorithm so it incorporates kleptography [wikipedia.org]. Among other arguments against the cryptanalytic breakthrough theory is that the NSA itself endorses its use by the Federal Government for classified information, when used by properly certified systems (which they've certified to contain no back doors). Against the kleptographic argument is the fact that the algorithm was designed by two Belgian cryptographers who are very well known in the academic cryptography world, and that no changes were made to the algorithm prior to it becoming a FIPS. I know from my own work being done around the time that there is no difference between Rijndael the AES candidate and AES as described in FIPS 197. I personally implemented Rijndael for an 8051-type microcontroller around 1999, and when it was announced as the AES in 2001, I compared FIPS 197 against Rijmen's and Daemen's spec from 1998, on which I based my code, and found the two to be exactly the same. My old code also produced correct results against the known answer test vectors that are part of the standard. Kleptography seems rather unlikely given that.
Scenarios B or C are thus far more likely explanations than A in my mind.
EVERYTHING can be brute forced. That is not bullshit and it is not false.
NOTHING is impenetrable to brute force.
If you are a cryptographer you know the above to be true. Brute force implies throwing every possible solution at a problem - one will prove to be the key that unlocks the door.
I will leave aside your subsequent points about AES/Rijndael as I am not qualified to speak to their mathematical correctness nor TrueCrypt's implementation of them.
My point in Scenario A was that if TrueCrypt's developers had decided to cease work, then from a security perspective it would be good practice to leave available source and binaries that can decrypt previous versions' encrypted volumes. Deleting the encryption code simply serves to prevent future adopters of the software from using it to encrypt their files.
If one is walking away from supporting / developing an encryption software then it makes sense to do this. It is a delineation.
Regardless of the power / compute requirement to brute force something today, who knows what advances are around the corner. That is why I alluded to relying on encryption to keep your stuff safe only for a finite period. It doesn't actually matter what that finite period is, but it is finite. Surely it is better to get users used to the concept rather than lull them into an "encrypt now, secure forever" utopia that may or may not be the case.
So where's the supernova you can harness for the energy needed to break a 256-bit key? Or can you wait until the heat death of the universe? Based on the theoretical minimum energy for computation under the known laws of physics, brute forcing a 256-bit symmetric key requires at least that much energy. Perhaps it might be feasible for a Kardashev Type 3 civilisation, but for us puny Type 0 civilisations it is far beyond the realm of feasibility. As Bruce Schneier [schneier.com] put it:
These numbers have nothing to do with the technology of the devices; they are the maximums that thermodynamics will allow. And they strongly imply that brute-force attacks against 256-bit keys will be infeasible until computers are built from something other than matter and occupy something other than space.
Sure, anything can be brute forced. It just isn't practical to do so, which makes it practically bullshit to even try.
Sure, anything can be brute forced. It just isn't practical to do so, which makes it practically bullshit to even try.
That's just it though. You're missing the bigger point. Nobody is even trying to brute force anything.
At least not anymore. Once the permutations so strongly exceeded total processing power, attackers simply had no choice but to stop. They didn't even brute force Enigma the way you allude to. Take a deeper look at how Enigma was attacked. They had enough processing power during WWII to brute force Enigma *AFTER* they first reduced the keyspace with nifty mathematical analysis.
The real game, and real attack surfaces, are sophisticated analysis of the gestalt view of the ciphertext. It's a known process by which probabilities are understood, and the effective keyspace is reduced to a more viable level that can be brute forced in time periods acceptable to governments and LEO.
NOTHING is impenetrable to brute force
Properly implemented OTP is mathematically proven to be immune to brute force attacks. You can literally generate ANY plaintext as long as it's the same length as the OTP ciphertext, and have absolutely no way whatsoever of knowing that you guessed the correct key. The key itself is supposed to be high entropy from preferably non-deterministically generated numbers. There is no math involved other than modular addition, and even then, it's a 1:1 relationship between each and every single bit of the plaintext and key. That's it. There is NO relationship between the 2nd bit and the millionth bit. Assuming a truly random key it's impossible to state beyond a reasonable doubt you found the key.
That's the most dangerous part of OTP. Information bias can lead you to assume that a generated plaintext from your chosen key was what you are looking for.
What do you want me to have been guilty of? Child pron? Just take any CP image bump it up against the ciphertext, obtain your key, and then claim the extra stuff was padding designed to confuse analysis. Industrial espionage? Same thing. A manifesto saying you are the one responsible for the bombs? Just as easy.
OTP is perfection as far as the method (maybe a slight addition to prevent stream attacks) is concerned. What is not perfected yet is the key exchange, and the enormously ridiculous requirement that key size be exactly the same length as the plaintext.
Otherwise, yes, OTP is specifically known to be immune to infinite processing power.
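The grandparent's point about forged keys is easy to demonstrate: for any OTP ciphertext you can manufacture a "key" that decrypts to whatever plaintext you like, and nothing distinguishes it from the real key. A sketch using XOR rather than modular addition, but the property is identical:

```python
import secrets

def xor(a: bytes, b: bytes) -> bytes:
    # bitwise one-time-pad combine; works for both encrypt and decrypt
    return bytes(x ^ y for x, y in zip(a, b))

# An "intercepted" OTP ciphertext (the real key is discarded, as OTP demands)
real_plaintext = b"meet at the old mill"
real_key = secrets.token_bytes(len(real_plaintext))
ciphertext = xor(real_plaintext, real_key)

# An adversary can derive a key yielding ANY same-length plaintext...
forged_plaintext = b"i planted the bombs!"
forged_key = xor(ciphertext, forged_plaintext)

# ...and it "decrypts" perfectly, with no way to prove it isn't the real key
assert xor(ciphertext, forged_key) == forged_plaintext
```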
Brute forcing 256-bit symmetric keys requires energy equivalent to that of an exploding star to do in a reasonable amount of time. Brute forcing a single 128-bit key will still require at least several hundred terawatt-hours of energy, assuming you had perfectly efficient computers capable of doing operations at the von Neumann-Landauer limit, and probably no one has those.
That's not entirely correct. In fact, it's much worse. You're correct about the physical limitations of classical computing though.
Bruce Schneier, Applied Cryptography, pp157-8:
One of the consequences of the second law of thermodynamics is that a certain amount of energy is necessary to represent information. To record a single bit by changing the state of a system requires an amount of energy no less than kT, where T is the absolute temperature of the system and k is the Boltzman constant. (Stick with me; the physics lesson is almost over.)
Given that k = 1.38×10⁻¹⁶ erg/°Kelvin, and that the ambient temperature of the universe is 3.2°Kelvin, an ideal computer running at 3.2°K would consume 4.4×10⁻¹⁶ ergs every time it set or cleared a bit. To run a computer any colder than the cosmic background radiation would require extra energy to run a heat pump.
Now, the annual energy output of our sun is about 1.21×10⁴¹ ergs. This is enough to power about 2.7×10⁵⁶ single bit changes on our ideal computer; enough state changes to put a 187-bit counter through all its values. If we built a Dyson sphere around the sun and captured all its energy for 32 years, without any loss, we could power a computer to count up to 2¹⁹². Of course, it wouldn't have the energy left over to perform any useful calculations with this counter.
But that's just one star, and a measly one at that. A typical supernova releases something like 10⁵¹ ergs. (About a hundred times as much energy would be released in the form of neutrinos, but let them go for now.) If all of this energy could be channeled into a single orgy of computation, a 219-bit counter could be cycled through all of its states.
These numbers have nothing to do with the technology of the devices; they are the maximums that thermodynamics will allow. And they strongly imply that brute-force attacks against 256-bit keys will be infeasible until computers are built from something other than matter and occupy something other than space. (emphasis mine)
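Schneier's counter arithmetic checks out if you plug his own figures in (ergs per bit flip at 3.2 K, annual solar output, supernova output):

```python
import math

bit_cost = 4.4e-16   # ergs per bit flip at 3.2 K (Schneier's figure)
sun_year = 1.21e41   # annual solar energy output, ergs
supernova = 1e51     # typical supernova output, ergs

print(f"{math.log2(sun_year / bit_cost):.1f}")       # ~187.5: a 187-bit counter
print(f"{math.log2(32 * sun_year / bit_cost):.1f}")  # ~192.5: count to 2**192
print(f"{math.log2(supernova / bit_cost):.1f}")      # ~220.4: a 219-bit counter
```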
This was written nearly 20 years ago. Since then, quantum cryptanalysis has become a lot more feasible. A proof of concept could be less than 10 years away from being public and in the known literature.
Schneier himself also states that the whole point of cryptanalysis is to find shortcuts [schneier.com].
So relying on the brute force strength of a 256-bit symmetric key alone is hubris. It's not simply a matter of permutations (which, to be pedantic, is halved on average in practice), but involves probabilities and attack surfaces beyond encrypted data at rest.
AES-256 has already been reduced by 7 orders below its permutations. New attacks in the known literature go further than that. As the man said, "attacks only get better, not worse". We don't even begin to know what the NSA really has up its sleeve. They really do employ the world's best cryptographers.
Cryptanalysis of at-rest ciphertext is only one tool, and it usually involves working on the method itself. What about the RNG data being provided? It's almost *never* a TRNG. Since it isn't a TRNG, the CSPRNG itself becomes a point of compromise. Knowing which CSPRNG was used, or was likely used, could once again identify valuable shortcuts, shaving several orders or more off the effective keyspace.
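As a toy illustration of that point (Python; purely hypothetical numbers, and note `random.Random` is not a CSPRNG — the 16-bit seed just stands in for any generator with low-entropy hidden state): if the key is derived from such a generator, the attacker searches the seed space instead of the key space.

```python
import random

def weak_keygen(seed: int) -> bytes:
    """Derive a nominally 256-bit key from a PRNG with only a 16-bit seed."""
    rng = random.Random(seed)
    return bytes(rng.randrange(256) for _ in range(32))

victim_key = weak_keygen(0x1234)   # 32 bytes of "key material"

# The attacker ignores the 2^256 key space and brute forces the 2^16
# seed space instead -- the effective keyspace collapsed to 16 bits.
recovered_seed = next(s for s in range(2**16) if weak_keygen(s) == victim_key)
assert recovered_seed == 0x1234
```

The 2^256 keyspace never mattered; the search cost was set by the weakest source of randomness in the chain.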
Regardless of method and CSPRNG, you still have a weakness in key exchange. It matters not that your key was 256-bit. If the probabilities of the key exchange collapse that down to a 40-bit key, you're fucked.
I respect that you have implemented and coded encryption algorithms and have experience, but I don't think it's anything but hubris to say that a 256-bit symmetric-key encrypted ciphertext requires more energy than several stars to brute force. It seems reckless, really.
In order to do that, you also need to show me a really impressive CSPRNG, or a massively insanely fast TRNG, *and* a truly solid and novel key exchange.
If you really wanted to impress me with bulletproof encryption, you would either perfect OTP (which it is not), or make quantum encryption possible. Most importantly, make it possible to have quantum-encrypted data at rest and not just "cycling" in an active system. I'm betting that perfect cryptography will be solved by using quantum to provide the missing links in OTP, myself.
We've been through all this before, many times: people saying that a method was bulletproof, only to find a surprising and novel attack that somehow reduced the effort required by 99%. The most notable of them all *is* OTP, which is *mathematically* perfect. Except that, to be perfect in practice, it turned out you could never reuse the pad data. Damn... back to the drawing board...
We are discussing here the brute forcing of the key to a TrueCrypt volume, so everything else you've written about CSPRNGs and key exchange isn't really germane to the discussion, as that isn't what I'm talking about. If you can find shortcuts, well, you aren't doing brute force anymore, right? You might as well have just spoken about the infamous $5 wrench [xkcd.com] to extract the key, and that qualifies as "brute force" only to the facetious. Quantum cryptanalysis changes Schneier's argument about brute force only a little. Apparently there's been analysis showing that quantum mechanics can do no more than halve the effective keyspace, so a 256-bit keyspace becomes effectively 128-bit. Not a lot of help when you can do triple encryption and wind up at 768 bits, or effectively a 384-bit keyspace to a quantum computer.
Why do you think the NSA seems to be beefing up its ability to use exploits and put in back doors wherever they can? They cannot beat the mathematics, so they get around it by attacking the implementation rather than the mathematics. TrueCrypt may well have been on its way to becoming the sort of software that they couldn't touch in that way, and they couldn't have that.
We are discussing here the brute forcing of the key to a TrueCrypt volume, so everything else you've written about CSPRNGs and key exchange isn't really germane to the discussion, as that isn't what I'm talking about. If you can find shortcuts, well, you aren't doing brute force anymore, right?
No, we are talking about the same thing, even with the shortcuts. Brute forcing does not simply mean "trying all the keys that are possible". Brute force means that you don't possess a key, and you don't possess some mathematical insight that allows you to perform the reverse process with the same amount of effort regardless of key possession. Sufficient mathematical insight can allow you to break encryption without even finding the key; see frequency analysis, among others. Whatever shortcuts are used, they only reduce the effective keyspace. In the end, you are still brute forcing effective keyspaces. They just became much smaller.
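To make the frequency-analysis point concrete, here's a toy Python sketch (a Caesar cipher, nothing like a modern block cipher): the key is recovered from letter statistics alone, with no search of the keyspace at all.

```python
from collections import Counter

def caesar(text: str, shift: int) -> str:
    """Shift lowercase letters by `shift` positions (toy cipher)."""
    return "".join(
        chr((ord(c) - ord("a") + shift) % 26 + ord("a")) if c.isalpha() else c
        for c in text.lower()
    )

plaintext = "everyone sees these deep green trees near the serene meadow"
ciphertext = caesar(plaintext, 13)

# Insight instead of brute force: assume the most frequent ciphertext
# letter corresponds to 'e', the most frequent letter in English text.
top = Counter(c for c in ciphertext if c.isalpha()).most_common(1)[0][0]
shift = (ord(top) - ord("e")) % 26
assert caesar(ciphertext, -shift) == plaintext
```

Of course modern ciphers are specifically designed to destroy such statistics, but the principle stands: mathematical insight can shrink, or entirely bypass, the keyspace you'd otherwise have to brute force.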
CSPRNG is quite relevant to the discussion. Admittedly, key exchange is less important for TrueCrypt if we are going to strictly limit the discussion to it alone.
The CSPRNG is relevant because all encryption methods, including the ones employed by TrueCrypt, rely on random numbers that greatly influence key generation and the routine encryption operations that generate ciphertext. If a CSPRNG is compromised, that means you understand, and can better predict, the numbers that were used. A compromised CSPRNG is invaluable in ciphertext-only attacks, which is exactly the situation you have with TrueCrypt data at rest. To say otherwise is to say you did NOT use a CSPRNG during ciphertext generation.
You are, in fact, talking about generating random numbers, following a method of encryption, and then generating ciphertext. How a CSPRNG is not critical in that undertaking is not something I can understand, or reasonably believe. The NSA *did* compromise a CSPRNG and paid very well (by their standards) to have it added to at least two different national standards for that very reason.
As for quantum mechanics only reducing the effective keyspace by half, I can't say anything about that yet. I'm greatly interested in any papers that show any kind of effective limit for quantum computation. I don't know that it's true, and by all other accounts it could allow one to slice through crypto like butter, precisely because it bypasses brute force entirely in some cases.
At a very high level of abstraction, encryption is merely a mixing process in which de-mixing without some critical piece of knowledge is vastly more expensive. You're only arguing that brute force de-mixing without key possession is physically precluded, in a very limited use case where the brute force is strictly limited to the keyspace provided by the permutations alone. That's an oversimplification, and it ignores a whole host of ciphertext-only attacks, and the methods by which the keyspace to be brute forced is reduced. Many of these focus on the random numbers used by the method, not the method itself.
It's almost always about reducing effective keyspaces to be brute forced. That typically involves every step of the process, which by definition, includes random number generation and key exchange (where appropriate).
I just had to make a few simplifying assumptions. Cryptosystems are complicated beasts, as you evidently understand as well as I do, and any vulnerability in any one area can and probably will eventually be discovered and exploited if someone cares to do so. The OP I replied to was talking about brute force attacks, saying those would all eventually become feasible given advances in computing power, and speculated that that could be one reason why TrueCrypt was abandoned, just as an auditing project intended to ensure that the code is solid was getting underway. I didn't buy it, and considered the OP's other two speculations to be far more likely, because a simple brute force attack on the underlying block ciphers is infeasible with our current knowledge of mathematics and physics. That was all I was trying to argue, and you had to go and muddy the waters with all this talk about other components of cryptosystems that might be the source of vulnerability. :) I was not trying to argue about the security or lack thereof of TrueCrypt as a whole.
Note that I never disputed the assertion that there might be vulnerabilities in TrueCrypt that result in its being insecure. This is actually highly probable, as the codebase is complicated and has not been fully audited as of this writing, and the true authors are anonymous. But the fact that these vulnerabilities probably exist doesn't sound to me like a reason for the maintainers of TrueCrypt to completely abandon their work the way they have, which is the real topic of the discussion. Something smells very fishy here, and what's happened to their site makes something like an NSL or its equivalent a more likely explanation.
And you were asking for a citation for why I say quantum mechanics can probably only cut the key search space in half. Here it is: Bennett C.H., Bernstein E., Brassard G., Vazirani U., The strengths and weaknesses of quantum computation. [arxiv.org] SIAM Journal on Computing 26(5): 1510-1523 (1997).
I didn't buy it, and considered the OP's other two speculations to be far more likely, because a simple brute force attack on the underlying block ciphers is infeasible with our current knowledge of mathematics and physics. That was all I was trying to argue, and you had to go and muddy the waters with all this talk about other components of cryptosystems that might be the source of vulnerability. :)
Well, I do agree that, right now, attempting the entire keyspace without any shortcuts is very likely precluded by physics. That's what has provided all the momentum for working against the other components.
I just tend to disagree with the assertion that we should rest on our laurels, so to speak, and concentrate all of our efforts on shoring up implementations. To be completely honest, it just smells fishy and full of hubris. I'm suspicious and skeptical by nature, and any time somebody seems to be saying something is near perfect and astronomical, I tend to immediately wonder why it's being said, not just that it was said. I have to muddy things up :)
Something smells very fishy here, and what's happened to their site makes something like an NSL or its equivalent a more likely explanation.
Now on that, we agree completely. It's far more likely, IMO, that the government got involved at a low level to attack the implementation or other components and backdoor it, since we are talking about spying, logistics, and human behavior, not mathematics.
Thank you very much for the citation. I've got some reading to do...
quantum mechanics can do no more than halve the effective keyspace, so a 256-bit keyspace becomes effectively 128-bit.
2**128 != 2**256 / 2
Unless you measure keyspace in the number of bits needed to represent all possible keys, rather than the number of all possible keys.
I don't buy your energy limits (reversible computing can theoretically run on arbitrarily low energy), but I agree with your conclusion that brute-forcing is impossible, just on different grounds:
Imagine you had a million computers, each as powerful as the currently most powerful supercomputer. [top500.org] Let's further assume you can run them all at peak performance. Imagine in addition that you have a super-efficient algorithm which can test a single key as fast as multiplying two floats (so that FLOPS translate directly into keys/s). Your million supercomputers would then be able to test about 5.5*10^19 keys per second.
The average number of tests you need to brute force a key (assuming you have no quantum computer, of course) is half the number of available keys. So for 128 bit keys, it's 2^127 ≈ 1.7*10^38 tests. That is, to brute force a 128 bit key, you'd need 3.1*10^18 seconds, or about 10^11 years. That's in the order of magnitude of the age of the universe.
To brute force a 256 bit key with the same setup, you'll need 2^128 times as much time, that is, more than 10^38 times the age of the universe.
OK, so there's Moore's law. Well, let's assume that it will hold indefinitely (which it certainly won't), and let's ignore that it's actually about transistor density. So if the available speed doubles every 1.5 years, this means that in 90 years, you'll be able to crack a 128 bit key in a year (but then, whatever I encrypt, I won't care about whether it is cracked in 90 years; and anyway, nobody will consider it important enough to spend a year on decrypting it), and in 192 years you will be able to crack it in a second (still assuming you have a million of the world's most capable supercomputers of the time). Of course, at that time the 256 bit key will still need a time comparable to the age of the universe. So unless you care about whether someone will read your stuff 400 years in the future, even unrealistically optimistic assumptions about computing power will keep your 256 bit encryption safe from brute-forcing.
OK, so what if we actually get a quantum computer? Well, since we are talking about brute-forcing, the algorithm of choice would be the Grover algorithm. The Grover algorithm gives a square root improvement, so it effectively halves the key length. Now the operations per second of a quantum computer may be different from a classical computer, but I think it can be assumed that it is never faster than a classical computer in that metric (after all, you can do classical computing on a quantum computer). So at worst the quantum computer would brute force the key as fast as a classical computer would for half the key length. In other words, even with quantum computers, your 256 bit key should be safe against brute-forcing for the next 200 years.
Note that all of this refers only to brute-forcing; other methods of breaking the encryption may of course make your key unsafe much earlier (e.g., public-key encryption based on prime factorization is vulnerable to quantum computers running Shor's algorithm).
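The arithmetic above is easy to reproduce. A quick Python sketch, using the assumed figure of 5.5*10^19 key tests per second from the post:

```python
KEYS_PER_SECOND = 5.5e19    # assumed: a million top supercomputers at peak
SECONDS_PER_YEAR = 3.15e7

def brute_force_years(key_bits: int) -> float:
    """Average brute-force time: half the keyspace, i.e. 2^(bits-1) tests."""
    return 2 ** (key_bits - 1) / KEYS_PER_SECOND / SECONDS_PER_YEAR

# 128-bit key: ~1e11 years, on the order of the age of the universe.
print(f"{brute_force_years(128):.1e}")

# 256-bit key: exactly 2^128 (~3.4e38) times longer still.
print(f"{brute_force_years(256) / brute_force_years(128):.1e}")

# Grover's algorithm effectively halves the key length, so on a quantum
# computer a 256-bit key costs at most what a 128-bit key costs classically.
print(f"{brute_force_years(256 // 2):.1e}")
```

Even granting the wildly generous hardware assumptions, the 256-bit case stays out of reach; only the constants change, not the conclusion.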
A recent speech to the International Association of Privacy Professionals [privacyassociation.org], by biologist Peter Watts. He called it "A Suicide Bomber's Guide to Online Privacy" [rifters.com]. His suggestion: If you can't guarantee privacy, it's better to follow a scorched earth policy and leave nothing for surveillance to find.
Seems like an appropriate speech for the times...
Bruce Schneier commented [schneier.com] on this. The talk itself is here [rifters.com] (you linked to a blog post about the talk). And there are more comments on an SN journal entry [soylentnews.org]. There's more to the talk than just the scorched earth idea.
If Truecrypt were free software, anybody wanting to continue developing could do so.
Given that to sue you for copyright infringement, the anonymous authors would have to reveal their identity, and furthermore would have to find out your identity, I guess another anonymous group taking over TrueCrypt development would be fairly secure against such lawsuits.
Because Truecrypt is free software, anybody wanting to continue developing can do so.
FTFY. Or am I missing something?
You don't do much homework, do you?
Does anybody out here know the developers, and could they ask them? That seems the only way to find out what's happened.
Assuming someone answered your post, claimed to know the developers, and told you the real reason was X, how would you know you could trust that message? After all, anyone could just claim this, and there's no way to check, because you'd need to know the original authors to ask them whether that person is telling the truth.
Not to mention that that other person, whether posting the truth or not, would certainly also hide his/her identity, in order to avoid a visit from a three-letter agency that would like to know the identity of the original developers.
I didn't know TrueCrypt's developers hide their identities. Whatever their reason is, I don't think I'd trust a critical piece of software written by someone in disguise, not even if I can read the source code and compile it myself. It's just too easy to slip some nasty piece of code into it and have it go unnoticed (some of these are called bugs, and they may take years to find). Knowing the authors and their history is important for building trust. Lucky me, I never used TrueCrypt.
I have always found that true identity is not required for trust. I completely agree with you on history though. Knowing true identity does allow you to find the person if they betray the trust. But the trust is still betrayed.
Whatever their reason is I don't think I'd trust a critical piece of software written by someone under disguise, not even if I can read the source code and compile it myself.
So basically, you're impossible to satisfy? Or would you rather that we disassembled the end binary to look for compiler backdoors?
If you can compile it yourself using GCC, either the public official build of GCC is backdoored (in which case we might as well just give up anyway), or the program actually does indeed do what the source code says.
You *do* know how programming works, don't you? The source code is *kind of* related to how the end result behaves.
Auditing a non-trivial piece of software is difficult. Knowing the author gives some extra hints. Given the same code base, I bet we would look at TrueCrypt in a different way if it turned out that its author is
A) a well-known cryptology researcher
B) someone who works for a big company with rumored links to a three-letter agency
C) someone who works for a three-letter agency
It might not be rational (after all, the code is there for inspection), but wouldn't we? Would we perform the audit again if it turned out to be case B, just in case we missed something (we always do)? Again and again if it were C?
They already did an audit. It was clean.
If we knew who the devs were, it would make it much more likely that a TLA had infiltrated them. If nobody knows who they are, it's a lot harder to find them and force them to do anything.
Computer use is so fundamentally rooted in trust issues that nobody but a computing idiot savant can completely trust their own system. And even then, they'd have to write all their own software (including hardware drivers...) so there's plenty of room for just plain bugs.
They'd have to build their own hardware too, at least down to the chip level. Resistors are probably safe enough; you can test them easily.
The developers could sign the message with their key.
However given the sort of message they have already released I doubt they are going to do such a thing.
To be fair, this also applies to China and Russia. Just stick to made-in-EU open source encryption, communication equipment, and operating systems until the Ninth Circuit bans NSLs. Right now, even if you trust a US-based developer, he can't legally disclose backdoors or information he supplies to his government. So just don't risk it.
That was the reason OpenBSD was developed in Canada. Back then (before 1996 or so), it was illegal to export greater than 40-bit encryption outside the US, so Theo refused to use any US resources to store OpenBSD.
What is sad, is that the most tinfoil-hatted of us was finally revealed to be right!
TrueCrypt Discontinued, Alien Abduction?
TrueCrypt Discontinued, Datcontinued?
TrueCrypt Discontinued, OMG Ponies?
TrueCrypt Discontinued, Somali Pirates?
TrueCrypt Discontinued, To Be Continued?
TrueCrypt Discontinued, received SIGINT?
Even if just for Linux. BitLocker... Hmmm... No. If they are anonymous, why don't they tell all?
Is there any harm or increased risk in just continuing to use the version I have installed? It does what I want it to do in the current version... does this news just mean a stop to further updates?
By: Anon | 05/2014
Fiction: Do you remember the scene near the end of the movie Scarface where the group of criminals conspired in an attempt to remove an individual speaking out against them before he spoke at the UN? (UN - IIRC)
Reality: Do you remember the individual who died just shortly prior to speaking out about pacemakers (and possibly other technology) and how they are vulnerable to hacker attacks?
Possibility: Sn0wd3n and/or others about to deliver a speech which mentions the useful tool TrueCrypt to a wider audience - TrueCrypt project dies.
I'm interested in the results of the complete TC code audit, but give this comparison some thought.
However, I was concerned about the project when releases ceased after 7.1a. There were steady releases up until that time, and I'm curious whether 7.1a was released as low-hanging fruit with a backdoor, and the site was allowed to operate for a few years before closing shop once enough interesting people had downloaded and used TC.
TrueCrypt WTF @ Bruce Schneier's blog: https://www.schneier.com/blog/archives/2014/05/truecrypt_wtf.html [schneier.com]
Also contains TC posts: https://www.schneier.com/blog/archives/2014/05/friday_squid_bl_426.html [schneier.com]
This is an alternative, compatible implementation of TrueCrypt. I'm surprised it is not more widely known and supported.
Also, cryptsetup-LUKS can now mount TrueCrypt containers.