Submission Feedback

Posted by AnonTechie on Thursday May 07 2015, @08:13AM (#1203)
1 Comment
/dev/random

I was pleasantly surprised to receive feedback on why my submission was rejected. Thank you editors. You guys are swell.

Playing with Encrypted Loopback Containers on BSD (sort of)

Posted by tonyPick on Thursday November 27 2014, @10:35AM (#832)
1 Comment
Code

So, one thing I do a lot on Linux is run loopback containers with encryption - this gives me a simple way of isolating data and transferring it between machines without worrying about full disk/partition encryption.

How to do this on Linux is covered in places like this.

How to do this on BSD doesn't seem to be covered as well - so my best guess for loopback devices is below, culled from various sources.

It's a WIP and fairly untested as-is; I have no idea how secure this actually is, the instructions are prone to editing, and if you actually use it for anything you're insane :)

First off, a plain loopback:

# Plain loopback device
dd if=/dev/zero of=tmp.dat bs=1024k count=1024

mdconfig -l
## The number we use for "-u" is the first number not in the list,
## using "0" here
mdconfig -a -t vnode -f tmp.dat -u 0
bsdlabel -w -B md0 auto ## Probably don't need the -B here...
newfs -m 0 /dev/md0a
mount /dev/md0a /media/

## Then unmount with
umount /media
mdconfig -d -u 0

## And remount with
mdconfig -a -t vnode -f tmp.dat -u 0
mount /dev/md0a /media/
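The attach/mount and unmount/detach pairs above can be wrapped up in a couple of helpers. This is just a sketch - the helper names and the DRYRUN switch are my own, not standard tools - and with DRYRUN=1 it only prints the commands, so you can sanity-check the sequence without root:

```shell
#!/bin/sh
# run: print the command under DRYRUN=1, execute it otherwise.
run() { if [ "${DRYRUN:-0}" = "1" ]; then echo "$*"; else "$@"; fi; }

# loop_attach <image> <unit> <mountpoint>
loop_attach() {
    run mdconfig -a -t vnode -f "$1" -u "$2"
    run mount "/dev/md${2}a" "$3"
}

# loop_detach <unit> <mountpoint>
loop_detach() {
    run umount "$2"
    run mdconfig -d -u "$1"
}

DRYRUN=1
loop_attach tmp.dat 0 /media
loop_detach 0 /media
```

Drop the DRYRUN=1 line (and run as root) to actually attach and mount.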

Now the encrypted version...

## Encrypted loopback, using geli
dd if=/dev/zero of=crypt.dat bs=1024k count=1024

mdconfig -l
## The number we use is the first number not in the list,
## using 0 here
mdconfig -a -t vnode -f crypt.dat -u 0

# Make a keyfile, passphrase it and associate with the device
dd if=/dev/random of=volume.key bs=64 count=1
geli init -s 4096 -K volume.key /dev/md0
geli attach -k volume.key /dev/md0

## Gives us md0.eli - Have a Paranoia Moment and overwrite the new
## encrypted device with random data before making the filesystem
dd if=/dev/urandom of=/dev/md0.eli bs=1m

## And make stuff
newfs /dev/md0.eli
mount /dev/md0.eli /media

## Then unmount/disconnect with
umount /media
geli detach md0.eli
mdconfig -d -u 0

## And remount with
mdconfig -a -t vnode -f crypt.dat -u 0
geli attach -k volume.key /dev/md0
mount /dev/md0.eli /media
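Same idea for the encrypted container - the three-step attach and detach sketched as helpers (again, the names and the DRYRUN switch are mine; DRYRUN=1 prints rather than executes):

```shell
#!/bin/sh
# run: print the command under DRYRUN=1, execute it otherwise.
run() { if [ "${DRYRUN:-0}" = "1" ]; then echo "$*"; else "$@"; fi; }

# crypt_attach <image> <unit> <keyfile> <mountpoint>
crypt_attach() {
    run mdconfig -a -t vnode -f "$1" -u "$2"
    run geli attach -k "$3" "/dev/md$2"
    run mount "/dev/md${2}.eli" "$4"
}

# crypt_detach <unit> <mountpoint>
crypt_detach() {
    run umount "$2"
    run geli detach "md${1}.eli"
    run mdconfig -d -u "$1"
}

DRYRUN=1
crypt_attach crypt.dat 0 volume.key /media
crypt_detach 0 /media
```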

And making sure that persists

# Testing it....

mdconfig -a -t vnode -f crypt.dat -u 0
geli attach -k volume.key /dev/md0
mount /dev/md0.eli /media

dd if=/dev/random of=/media/test.dat bs=1m count=100
md5 /media/test.dat

## MD5 (test.dat) = "whatever"

umount /media
geli detach md0.eli
mdconfig -d -u 0
reboot

## Wait for it.... Log back in and....
mdconfig -a -t vnode -f crypt.dat -u 0
geli attach -k volume.key /dev/md0
mount /dev/md0.eli /media
md5 /media/test.dat

## and check that the md5sum matches
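Rather than eyeballing the two md5 outputs across the reboot, the comparison can be scripted. A sketch - the sum() helper falls back to md5sum so the same script also runs on Linux, and the demo below uses a throwaway file rather than /media/test.dat:

```shell
#!/bin/sh
# Print just the hash; FreeBSD has md5 -q, Linux has md5sum.
sum() { md5 -q "$1" 2>/dev/null || md5sum "$1" | cut -d' ' -f1; }

# check <file>: compare against the hash recorded in <file>.md5
check() {
    if [ "$(sum "$1")" = "$(cat "$1.md5")" ]; then echo OK; else echo MISMATCH; fi
}

# Demo on a throwaway file (the real target would be /media/test.dat,
# recording the hash before the unmount and checking after the remount):
printf 'hello\n' > /tmp/demo.dat
sum /tmp/demo.dat > /tmp/demo.dat.md5
check /tmp/demo.dat
```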

A rant about MISRA - the big problem with software

Posted by tonyPick on Tuesday October 14 2014, @07:27AM (#726)
0 Comments
Code

Minor disclaimer: I wrote this about a year ago, and Google will have changed some of the details - you didn't use to get anything about MISRA at all from the initial searches.

You see, MISRA is symptomatic of the big problem with software development.

Go to Google. Type in MISRA; you'll see references to MISRA C and links to the homepage - we'll wander over there in a moment. So far so good.

Now type in "MISRA Evidence" - see any evidence MISRA works? You'll find [2], and we'll chase that in a second, but basically - nope. Maybe it's just buried; throw in "MISRA Evidence language" - nothing on the early pages. If you dig through you'll find a nice set of papers from Les Hatton on safe language subsets[0] which review MISRA itself effectively, but no hard data on its comparative usefulness - in fact Hatton comes down squarely on the side of "was useless, is now actively harmful" [0a, 0b].

Now, how about "MISRA peer reviewed research"? Nope.
"MISRA language peer reviewed research"? Wait... oh, it's a press release. No data or citations. An offer to maybe give me a white paper if I send them my email address. Bzzzzt.
"MISRA C peer reviewed research"? CiteSeer? The same problems.

The MISRA site (http://www.misra.org.uk)? A lot of offers to sell me Official Specification Documents, Training Programs and Tools. But actual evidence that MISRA works? Citations to peer-reviewed journals? Raw data? Not so much.

Now, by digging around you can find a couple of evaluations of MISRA as a coding guideline, and you can find some studies which imply something like MISRA might be a net win when combined with other techniques [1], but no direct cost/benefit to say "if you invest X on MISRA compliance, you will gain Y".

The persistent might go back to the MISRA bulletin boards where somebody asked directly if there was any study to back up the effectiveness of MISRA[2]: In addition to opinions (but no evidence), one posting pushed the idea that the companies using it aren't publishing results because they "are not research places" and "are busy" making software. Imagine for a moment if your local Hospital came out with that one?

"We're feeding them mercury. We don't know if it works, but these people are just busy getting better, and we don't have time to do research. And Mercury is Shiny!"

And this isn't some shiny new niche development idea; it's been in widespread use for over 15 *years*. There should be volumes of hard data here, from direct studies across multiple industries to toolset impacts to literature reviews to raw data and meta-analyses. We should be swamped with this stuff, not digging through the fifth page of Google or CiteSeer and offering up our email addresses for something that's maybe vaguely relevant.

So here's a hypothesis. Studies with actual, real-world, hard published data show that:
* Language Pitfalls have a minor impact when compared to other issues[3]
* Defects are, at best, weakly correlated with specific language choices.[4][5]
* Defect rates have a curvilinear relationship to the number of lines of code, with a clear increase as program module size becomes large. [6]

So - MISRA attempts to resolve a minor issue by doing something which is not correlated with the problems it claims to solve and which results in higher SLOC and therefore pushes an increase in defect rates. (This indirectly agrees with the conclusions of [0b])

Is that true? Or could it be that MISRA actually works? Or is it ineffective either way? How much time should we spend on MISRA & associated tools? How much effort in training? How much of that time & money could be spent on other tasks & training? How effective would that time & expenditure be in comparison?

Until somebody collects actual hard data then I don't know, and you don't know. Even the people prepared to sell you tools and training don't appear to know, or at least won't say exactly how in public, (but then again, they make sure to get paid either way). Right now the only real analysis I can find says avoid it, and nobody is asking for anything better.

Why? Well, managers I've spoken to go for MISRA because it's easy: you trust the claims, buy a spec, book training for a few coders, tick a box. Done. This is far, far easier than fixing the schedule, or locking down requirements, or trying to understand problems in the architecture, or recruiting better developers, or persuading HR to pay more for better developers, or resourcing adequately up-front, or any one of the vast number of other issues: they're hard to achieve, the returns are viewed as uncertain, they're politically difficult, and will take too long anyway.

So we go with the Shiny, be it MISRA or Agile or New Language Framework of The Week, and wonder why we have so much information on casualties, but nothing on how well the Shiny works.

[0] http://www.leshatton.org/index_SA.html
[0a] Hatton http://www.leshatton.org/Documents/SCSC_MISRAv2.pdf
[0b] Hatton http://www.leshatton.org/Documents/MISRA_comp_1105.pdf
[1] Hatton & Pfleeger http://www.leshatton.org/Documents/IEEEComputer1-97.pdf
[2] http://www.misra.org.uk/forum/viewtopic.php?f=56&t=710
[3] Perry, http://users.ece.utexas.edu/~perry/work/papers/1010-DP-ms25.pdf
[4] Hatton http://www.leshatton.org/Documents/FFF_IEE397.pdf
[5] Mayer http://mayerdan.com/ruby/2012/11/11/bugs-per-line-of-code-ratio/
[6] http://www.developer.com/tech/article.php/10923_3644656_2/Software-Quality-Metrics.htm

Practice Does Not Make Perfect:

Posted by AnonTechie on Monday September 29 2014, @09:14AM (#690)
2 Comments
/dev/random

What makes someone rise to the top in music, games, sports, business, or science? This question is the subject of one of psychology’s oldest debates. In the late 1800s, Francis Galton—founder of the scientific study of intelligence and a cousin of Charles Darwin—analyzed the genealogical records of hundreds of scholars, artists, musicians, and other professionals and found that greatness tends to run in families. For example, he counted more than 20 eminent musicians in the Bach family. (Johann Sebastian was just the most famous.) Galton concluded that experts are “born.” Nearly half a century later, the behaviorist John Watson countered that experts are “made” when he famously guaranteed that he could take any infant at random and “train him to become any type of specialist [he] might select—doctor, lawyer, artist, merchant-chief and, yes, even beggar-man and thief, regardless of his talents.”

The experts-are-made view has dominated the discussion in recent decades. To test this idea, Swedish psychologist K. Anders Ericsson and colleagues recruited violinists from an elite Berlin music academy and asked them to estimate the amount of time per week they had devoted to deliberate practice for each year of their musical careers. The most accomplished violinists reported having accumulated substantially more practice over their careers than their less accomplished peers. Based on these findings, Ericsson and colleagues argued that prolonged effort, not innate talent, explained differences between experts and novices. These findings filtered their way into pop culture. They were the inspiration for what Malcolm Gladwell termed the “10,000 Hour Rule” ( http://gladwell.com/outliers/the-10000-hour-rule/ ) in his book Outliers.

However, recent research has demonstrated that deliberate practice, while undeniably important, is only one piece of the expertise puzzle—and not necessarily the biggest piece. In the first study ( http://www.ncbi.nlm.nih.gov/pubmed/17201516 ) to convincingly make this point, the cognitive psychologists Fernand Gobet and Guillermo Campitelli found that chess players differed greatly in the amount of deliberate practice they needed to reach a given skill level in chess. For example, the number of hours of deliberate practice to first reach “master” status (a very high level of skill) ranged from 728 hours to 16,120 hours. This means that one player needed 22 times more deliberate practice than another player to become a master.

In concrete terms, what this evidence means is that racking up a lot of deliberate practice is no guarantee that you’ll become an expert. Other factors matter.

http://www.slate.com/articles/health_and_science/science/2014/09/malcolm_gladwell_s_10_000_hour_rule_for_deliberate_practice_is_wrong_genes.single.html

[Related Abstract]: http://www.ncbi.nlm.nih.gov/pubmed/?term=(Macnamara+and+Hambrick)

The iCloud Flaw That Could Have Caused the Nude Celeb Leaks.

Posted by AnonTechie on Monday September 01 2014, @01:08PM (#629)
1 Comment
News

Over the weekend, there's been a slew of images released showing celebrities in varying states of undress. Now, it appears that a flaw in iCloud could be responsible for the images making their way online.

On Monday, a Python script emerged on Github (which we’re not linking to as there is evidence a fix by Apple is not fully rolled out) that appears to have allowed malicious users to ‘brute force’ a target account’s password on Apple’s iCloud, thanks to a vulnerability in the Find my iPhone service. In a brute-force attack, a malicious user uses a script to repeatedly guess passwords until it discovers the correct one.

http://thenextweb.com/apple/2014/09/01/this-could-be-the-apple-icloud-flaw-that-led-to-celebrity-photos-being-leaked/

http://www.independent.co.uk/life-style/gadgets-and-tech/is-apples-icloud-safe-after-leak-of-jennifer-lawrence-and-other-celebrities-nude-photos-9703142.html

Protecting privacy also means preserving democracy:

Posted by AnonTechie on Monday September 01 2014, @01:04PM (#628)
0 Comments
News

What impact does the proliferation of new mobile technologies have? How does the sharing of personal data over the Internet threaten our society? Interview with Professor Jean-Pierre Hubaux, a specialist in communication networks and privacy protection, a major field of IT security.
Jean-Pierre Hubaux is a professor at the EPFL's School of Computer and Communication Sciences. During the last decade, he and his team at the Laboratory for Computer Communications and Applications have focused their research efforts on privacy protection, in particular for mobile communication networks (and notably geolocation) and personal data (with genomic data as an application example).
http://actu.epfl.ch/news/protecting-privacy-also-means-preserving-democra-2/

200

Posted by lhsi on Thursday August 21 2014, @10:30AM (#594)
4 Comments
Soylent

I hit the total of 200 submitted stories yesterday. I'm not sure what the actual number is, as the submissions page only shows the last 3ish months. Whoo, arbitrary milestones!

Track who is buying US politicians with "Greenhouse" browser

Posted by AnonTechie on Thursday August 14 2014, @02:13PM (#577)
0 Comments
News

Nicholas Rubin, a 16-year-old programmer from Seattle, has created a browser add-on that makes it incredibly easy to see the influence of money in US politics. Rubin calls the add-on Greenhouse, and it does something so brilliantly simple that once you use it you'll wonder why news sites didn't think of this themselves.

Greenhouse pulls in campaign contribution data for every Senator and Representative, including the total amount of money received and a breakdown by industry and size of donation. It then combines this with a parser that finds the names of Senators and Representatives in the current page and highlights them. Hover your mouse over the highlighted names and it displays their top campaign contributors.

In this sense, Greenhouse adds another layer to the news, showing you the story behind the story. In politics, as in many other things, if you want to know the why behind the what, you need to follow the money. And somewhat depressingly, in politics it seems that it's money all the way down.

http://arstechnica.com/tech-policy/2014/08/track-whos-buying-politicians-with-greenhouse-browser-add-on/

If you want to participate or just follow along, you can install Greenhouse for Firefox, Chrome, and Safari at http://allaregreen.us/. Grab the add-on and then follow @allaregreen on Twitter.

US Man Left in DEA Holding Cell for Days

Posted by AnonTechie on Wednesday July 09 2014, @07:03AM (#532)
4 Comments
News

Four U.S. Drug Enforcement Administration employees saw or heard a handcuffed San Diego student locked in a cell for five days without food or water, but did nothing because they assumed someone else was responsible, investigators said Tuesday. The Justice Department's inspector general faulted several DEA employees for their handling of the April 2012 incident that left Daniel Chong in grave physical health, cost the agency a $4.1 million settlement and led to nationwide changes in the agency's detention policies. The employees told investigators they found nothing unusual in their encounters with Chong and assumed whoever put him in the cell would return for him shortly. Chong, then 23, ingested methamphetamine, drank his own urine to survive and cut himself with broken glasses while he was held.

http://bigstory.ap.org/article/4-dea-employees-encountered-man-forgotten-cell

This Land Is Their Land:

Posted by AnonTechie on Monday July 07 2014, @10:26AM (#528)
0 Comments
/dev/random

Between 1776 and the present, the United States dispossessed Indians of more than 1.5 billion acres, nearly an eighth of the habitable world ( http://invasionofamerica.ehistory.org/#0 ). For most of that same period, the native population was in a free fall, dropping from perhaps 1.5 million people when Thomas Jefferson wrote the Declaration of Independence to a low of 237,000 in 1900. After the native population and its land base bottomed out, American sports teams began adopting Indian-themed names.

Today, the Braves, Indians, Blackhawks, Seminoles, Chiefs, and the Washington NFL team claim to honor native peoples with iconography such as Chief Wahoo, arrowheads, and tomahawks. It is easy to assert that the name of your favorite team expresses solidarity with the survivors of the long, sordid history of Indian dispossession. But what if sports lore included the specifics of how the U.S. acquired the land below your team's home field?

http://www.slate.com/articles/sports/sports_nut/2014/07/washington_nfl_team_tribal_land_the_braves_chiefs_and_dan_snyder_s_franchise.html