So, one thing I do on Linux a lot is run loopback containers with encryption - this gives me a simple way of isolating data and transferring it between machines without worrying about full disk/partition encryption.
How to do this on Linux is covered in places like this.
How to do this on BSD doesn't seem to be covered as well - so my best guess for loopback devices is below, culled from various sources.
It's a WIP, fairly untested as is, I have no idea how secure this actually is, the instructions are prone to editing, and if you actually use it for anything you're insane :)
First off, a plain loopback:
# Plain loopback device
dd if=/dev/zero of=tmp.dat bs=1024k count=1024
mdconfig -l
## The number we use for "-u" is the first number not in the list,
## using "0" here
mdconfig -a -t vnode -f tmp.dat -u 0
bsdlabel -w -B md0 auto ## Probably don't need the -B here...
newfs -m 0 md0a
mount /dev/md0a /media/
## Then unmount with
umount /media
mdconfig -d -u 0
## And remount with
mdconfig -a -t vnode -f tmp.dat -u 0
mount /dev/md0a /media/
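The attach/mount and unmount/detach steps above pair up naturally, so here's a sketch of how I might wrap them in sh functions. This is hypothetical and untested: the image name, unit number, and mount point are hardcoded assumptions, and everything needs root on FreeBSD.

```shell
#!/bin/sh
# Hypothetical helpers wrapping the plain-loopback steps above.
# Assumes tmp.dat already exists (dd'd from /dev/zero, labelled and
# newfs'd as in the steps above) and that md unit 0 is free.
IMG=tmp.dat
UNIT=0
MNT=/media

loop_up() {
  mdconfig -a -t vnode -f "$IMG" -u "$UNIT" &&
  mount "/dev/md${UNIT}a" "$MNT"
}

loop_down() {
  umount "$MNT" &&
  mdconfig -d -u "$UNIT"
}

# Call loop_up to attach and mount, loop_down to tear it back down.
```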
Now the encrypted version...
## Encrypted loopback, using geli
dd if=/dev/zero of=crypt.dat bs=1024k count=1024
mdconfig -l
## The number we use is the first number not in the list,
## using 0 here
mdconfig -a -t vnode -f crypt.dat -u 0
## Make a keyfile, passphrase it and associate with the device
dd if=/dev/random of=volume.key bs=64 count=1
geli init -s 4096 -K volume.key /dev/md0
geli attach -k volume.key /dev/md0
## Gives us md0.eli - Have a Paranoia Moment
dd if=/dev/urandom of=/dev/md0.eli bs=1m
## And make stuff
newfs /dev/md0.eli
mount /dev/md0.eli /media
## Then unmount/disconnect with
umount /media
geli detach md0.eli
mdconfig -d -u 0
## And remount with
mdconfig -a -t vnode -f crypt.dat -u 0
geli attach -k volume.key /dev/md0
mount /dev/md0.eli /media
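As with the plain loopback, the encrypted attach/detach dance can be sketched as a pair of sh functions. Again this is an untested sketch, not something from the original steps: the image, keyfile, unit number, and mount point are assumptions, geli will still prompt for the passphrase, and it all needs root on FreeBSD.

```shell
#!/bin/sh
# Hypothetical helpers wrapping the geli loopback steps above.
# Assumes crypt.dat and volume.key already exist (geli init done)
# and that md unit 0 is free.
IMG=crypt.dat
KEY=volume.key
UNIT=0
MNT=/media

crypt_up() {
  mdconfig -a -t vnode -f "$IMG" -u "$UNIT" &&
  geli attach -k "$KEY" "/dev/md${UNIT}" &&
  mount "/dev/md${UNIT}.eli" "$MNT"
}

crypt_down() {
  umount "$MNT" &&
  geli detach "md${UNIT}.eli" &&
  mdconfig -d -u "$UNIT"
}

# Call crypt_up to attach and mount, crypt_down to tear it back down.
```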
And making sure that persists:
# Testing it....
mdconfig -a -t vnode -f crypt.dat -u 0
geli attach -k volume.key /dev/md0
mount /dev/md0.eli /media
dd if=/dev/random of=/media/test.dat bs=1m count=100
md5 /media/test.dat
## MD5 (test.dat) = "whatever"
umount /media
geli detach md0.eli
mdconfig -d -u 0
reboot
## Wait for it.... Log back in and....
mdconfig -a -t vnode -f crypt.dat -u 0
geli attach -k volume.key /dev/md0
mount /dev/md0.eli /media
md5 /media/test.dat
## and check the md5sum matches
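Comparing checksums by eye works, but it can be scripted. Here's a small hedged sketch: `file_sum` is a name I've made up, and the `md5sum` branch is only there so the function also runs on Linux boxes that lack FreeBSD's md5(1).

```shell
#!/bin/sh
# Hypothetical checksum helper for the persistence test above.
# FreeBSD ships md5(1) with -q (print only the hash);
# md5sum is the usual fallback elsewhere.
file_sum() {
  if command -v md5 >/dev/null 2>&1; then
    md5 -q "$1"
  else
    md5sum "$1" | cut -d' ' -f1
  fi
}

# Usage sketch: record the sum before the reboot, then compare after
# reattaching and remounting:
#   before=$(file_sum /media/test.dat)
#   ...reboot, mdconfig -a, geli attach, mount...
#   after=$(file_sum /media/test.dat)
#   [ "$before" = "$after" ] && echo "checksum matches"
```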
Minor disclaimer: I wrote this about a year ago, and Google will have changed some of the details - you didn't use to get anything about MISRA at all from the initial searches.
You see, MISRA is symptomatic of the big problem with software development.
Go to Google. Type in MISRA; you'll see references to Misra C, links to the homepage, we'll wander over there in a moment. So far so good.
Now type in "MISRA Evidence" - see any evidence that MISRA works? You'll find [2], and we'll chase that in a second, but basically - nope. Maybe it's just buried; throw in "MISRA Evidence language" - nothing on the early pages. If you dig through you'll find a nice set of papers from Les Hatton on safe language subsets [0] which review MISRA itself effectively, but no hard data on its comparative usefulness; in fact he comes down squarely on the side of "was useless, is now actively harmful" [0a, 0b].
Now, how about "MISRA peer reviewed research"? Nope.
"MISRA language peer reviewed research"? Wait... Oh, it's a press release. No data or citations. An offer to maybe give me a white paper, if I send them my email address. Bzzzzt
"MISRA C peer reviewed research"? Citeseer? The same problems.
The MISRA site (http://www.misra.org.uk)? A lot of offers to sell me Official Specification Documents, Training Programs and Tools. But actual evidence that MISRA works? Citations to peer reviewed journals? Raw data? Not so much.
Now, by digging around you can find a couple of evaluations of MISRA as a coding guideline, and you can find some studies which imply something like MISRA might be a net win when combined with other techniques [1], but no direct cost/benefit to say "if you invest X on MISRA compliance, you will gain Y".
The persistent might go back to the MISRA bulletin boards, where somebody asked directly if there was any study to back up the effectiveness of MISRA [2]: in addition to opinions (but no evidence), one posting pushed the idea that the companies using it aren't publishing results because they "are not research places" and "are busy" making software. Imagine for a moment if your local hospital came out with that one?
"We're feeding them mercury. We don't know if it works, but these people are just busy getting better, and we don't have time to do research. And Mercury is Shiny!"
And this isn't some shiny new niche development idea; it's been in widespread use for over 15 *years*. There should be volumes of hard data here, from direct studies across multiple industries to toolset impacts to literature reviews to raw data and meta-analysis. We should be swamped with this stuff, not digging through the fifth page of Google or Citeseer and offering up our email addresses for something that's maybe vaguely relevant.
So here's a hypothesis. Studies with actual, real-world, hard published data show that:
* Language Pitfalls have a minor impact when compared to other issues[3]
* Defects are, at best, weakly correlated with specific language choices.[4][5]
* Defect rates have a curvilinear relationship to the number of lines of code, with a clear increase as program module size becomes large. [6]
So - MISRA attempts to resolve a minor issue by doing something which is not correlated with the problems it claims to solve and which results in higher SLOC and therefore pushes an increase in defect rates. (This indirectly agrees with the conclusions of [0b])
Is that true? Or could it be that MISRA actually works? Or is it ineffective either way? How much time should we spend on MISRA & associated tools? How much effort in training? How much of that time & money could be spent on other tasks & training? How effective would that time & expenditure be in comparison?
Until somebody collects actual hard data, I don't know, and you don't know. Even the people prepared to sell you tools and training don't appear to know, or at least won't say exactly how in public (but then again, they make sure to get paid either way). Right now the only real analysis I can find says avoid it, and nobody is asking for anything better.
Why? Well, managers I've spoken to go for MISRA because it's easy: you trust the claims, buy a spec, book training for a few coders, tick a box. Done. This is far, far easier than fixing the schedule, or locking down requirements, or trying to understand problems in the architecture, or recruiting better developers, or persuading HR to pay more for better developers, or resourcing adequately up-front, or any one of the vast number of other issues: they're hard to achieve, the returns are viewed as uncertain, they're politically difficult, and will take too long anyway.
So we go with the Shiny, be it MISRA or Agile or New Language Framework of The Week, and wonder why we have so much information on casualties, but nothing on how well the Shiny works.
[0] http://www.leshatton.org/index_SA.html
[0a] Hatton http://www.leshatton.org/Documents/SCSC_MISRAv2.pdf
[0b] Hatton http://www.leshatton.org/Documents/MISRA_comp_1105.pdf
[1] Hatton & Pfleeger http://www.leshatton.org/Documents/IEEEComputer1-97.pdf
[2] http://www.misra.org.uk/forum/viewtopic.php?f=56&t=710
[3] Perry, http://users.ece.utexas.edu/~perry/work/papers/1010-DP-ms25.pdf
[4] Hatton http://www.leshatton.org/Documents/FFF_IEE397.pdf
[5] Mayer http://mayerdan.com/ruby/2012/11/11/bugs-per-line-of-code-ratio/
[6] http://www.developer.com/tech/article.php/10923_3644656_2/Software-Quality-Metrics.htm