
posted by martyb on Sunday September 25 2016, @10:44PM   Printer-friendly
from the changing-tide? dept.

The Washington Post reports that a police officer has been charged with "first-degree manslaughter" after the on-duty, fatal shooting of Terence Crutcher in Tulsa, Oklahoma. If found guilty, she would be imprisoned for at least four years. KJRH-TV has a transcript of the district attorney's press conference.


Original Submission

posted by janrinok on Sunday September 25 2016, @08:52PM   Printer-friendly
from the good-news-for-some-bad-news-for-others dept.

EU ministers demand complete restart of the controversial trade deal that has sparked mass protests across the continent. European Union ministers today admitted that a giant EU-US trade deal is dead in its current form, with drastic change needed to salvage any hope of a deal going ahead.

The Transatlantic Trade and Investment Partnership [TTIP] has sparked a widespread backlash and now lies in tatters in the wake of massive protests across the continent.

Austrian Economy Minister Reinhold Mitterlehner said that the pact now has "such negative connotations" that the best hope was to "completely relaunch with a new name after the US elections." Mitterlehner also demanded "more transparency and clearer objectives." Negotiations for the free-trade zone have so far been held behind closed doors.

Slovak Economy Minister Peter Ziga was similarly pessimistic, saying that a "new start or some new approach [was] needed," while EU Trade Commissioner Cecilia Malmstroem said the likelihood of a deal was "becoming smaller and smaller" as she entered the talks.

Several EU representatives blamed US intransigence for the gridlock. The deal now has "only a small chance of success unless the United States starts to give a bit of ground," Belgian Finance Minister Didier Reynders said.

Public services, especially the NHS [National Health Service], are in the firing line. One of the main aims of TTIP is to open up Europe's public health, education and water services to US companies. This could essentially mean the privatisation of the NHS. The European Commission has claimed that public services will be kept out of TTIP. However, according to The Huffington Post, UK Trade Minister Lord Livingston has admitted that talks about the NHS were still on the table.

[...] The EU has admitted that TTIP will probably cause unemployment as jobs switch to the US, where labour standards and trade union rights are lower. It has even advised EU members to draw on European support funds to compensate for the expected unemployment. Examples from other similar bilateral trade agreements around the world support the case for job losses. The North American Free Trade Agreement (NAFTA) between the US, Canada and Mexico caused the loss of one million US jobs over 12 years, instead of the hundreds of thousands of extra jobs that were promised.


Original Submission

posted by martyb on Sunday September 25 2016, @07:37PM   Printer-friendly
from the here's-looking-at-U-niverse dept.

Submitted via IRC for crutchy

The world's largest radio telescope began searching for signals from stars and galaxies and, perhaps, extraterrestrial life Sunday in a project demonstrating China's rising ambitions in space and its pursuit of international scientific prestige.

Beijing has poured billions into such ambitious scientific projects as well as its military-backed space program, which saw the launch of China's second space station earlier this month.

Measuring 500 meters in diameter, the radio telescope is nestled in a natural basin within a stunning landscape of lush green karst formations in southern Guizhou province. It took five years and $180 million to complete, and it surpasses the 300-meter Arecibo Observatory in Puerto Rico, a dish used in research on stars that led to a Nobel Prize.

The official Xinhua News Agency said hundreds of astronomers and enthusiasts watched the launch of the Five-hundred-meter Aperture Spherical Telescope, or FAST, in the county of Pingtang.

Researchers quoted by state media said FAST would search for gravitational waves, detect radio emissions from stars and galaxies and listen for signs of intelligent extraterrestrial life.

"The ultimate goal of FAST is to discover the laws of the development of the universe," Qian Lei, an associate researcher with the National Astronomical Observatories of the Chinese Academy of Sciences, told state broadcaster CCTV.

Source: http://phys.org/news/2016-09-china-world-largest-radio-telescope.html

There's also a video about it on YouTube.


Original Submission

posted by martyb on Sunday September 25 2016, @05:41PM   Printer-friendly
from the Say-"What?" dept.

A decade ago, we in the free and open-source community could build our own versions of pretty much any proprietary software system out there, and we did. Publishing, collaboration, commerce, you name it. Some apps were worse, some were better than closed alternatives, but much of it was clearly good enough to use every day.

But is this still true? For example, voice control is clearly going to be a primary way we interact with our gadgets in the future. Speaking to an Amazon Echo-like device while sitting on my couch makes a lot more sense than using a web browser. Will we ever be able to do that without going through somebody's proprietary silo like Amazon's or Apple's? Where are the free and/or open-source versions of Siri, Alexa and so forth?

The trouble, of course, is not so much the code, but in the training. The best speech recognition code isn't going to be competitive unless it has been trained with about as many millions of hours of example speech as the closed engines from Apple, Google and so forth have been. How can we do that?

[...] Who has a plan, and where can I sign up to it?

Perhaps a distributed computing project (along the lines of Folding@Home, SETI, etc.) would be a viable approach?


Original Submission

posted by martyb on Sunday September 25 2016, @03:47PM   Printer-friendly
from the room-for-further-improvement dept.

It seems that every time researchers estimate how often a medical mistake contributes to a hospital patient's death, the numbers come out worse.

[...] In 2010, the Office of Inspector General for Health and Human Services said that bad hospital care contributed to the deaths of 180,000 patients in Medicare alone in a given year.

Now comes a study in the current issue of the Journal of Patient Safety that says the numbers may be much higher: between 210,000 and 440,000 patients each year who go to the hospital for care suffer some type of preventable harm that contributes to their death.

That would make medical errors the third-leading cause of death in America, behind heart disease, which is the first, and cancer, which is second.

The new estimates were developed by John T. James, a toxicologist at NASA's space center in Houston who runs an advocacy organization called Patient Safety America. James has also written a book about the death of his 19-year-old son after what James maintains was negligent hospital care.

Asked about the higher estimates, a spokesman for the American Hospital Association said the group has more confidence in the Institute of Medicine's (IOM) estimate of 98,000 deaths. ProPublica asked three prominent patient safety researchers to review James' study, however, and all said his methods and findings were credible.

[...] Dr. David Mayer, the vice president of quality and safety at Maryland-based MedStar Health, said people can make arguments about how many patient deaths are hastened by poor hospital care, but that's not really the point. All the estimates, even on the low end, expose a crisis, he said.

"Way too many people are being harmed by unintentional medical error," Mayer said, "and it needs to be corrected."

The story describes additional studies that were performed, and solicited feedback from other doctors, who supported the view that the 98,000 figure underreports the problem and that the situation warrants further investigation, reporting, and action.

Have any Soylentils personally experienced or observed medical mistakes that had an adverse outcome? Alternatively, has anyone experienced a medical triumph in the face of very poor odds for a positive outcome? What about medical treatments in countries besides the US?


Original Submission

posted by martyb on Sunday September 25 2016, @01:51PM   Printer-friendly
from the single-point-of-failure dept.

The World Socialist Web Site reports

The entire US territory of Puerto Rico suffered a blackout beginning [September 21] after a fire caused a substation to break down. The plant had not been repaired in decades and the cause of the fire is unclear, although a lightning storm is thought to be responsible.

Puerto Rico Governor Alejandro Garcia Padilla told reporters Friday morning [September 23] that 75 percent of the island's 1.5 million homes and businesses had had electricity restored, and that the entire system would be returned to normal by Saturday, 72 hours after the power went out. During the press conference at the island's emergency management center, the lights went out briefly, prompting laughter from the assembled reporters. Padilla was forced to admit that periodic blackouts and shortages would still occur as the demand for electricity increases.

The blackout shut down the entire island of 3.5 million people.

[...] Authorities warned that tropical storms could still knock out power lines and black out areas that had power restored. An estimated 250,000 people don't have access to water.

Temperatures were recorded at 100 degrees Fahrenheit on Friday, causing many Puerto Ricans to sleep outdoors for the third night in a row. Residents formed long lines outside of grocery stores to get ice, a precious commodity, and recharge their cell phones.

Hotels in the capital San Juan offered special rates to island residents but were soon booked up. At least one person died from carbon monoxide poisoning after setting up a personal power generator in their home. An elderly man was also taken to the hospital after spending the night in a stuck elevator, and at least four police officers were hit by cars while trying to direct traffic; all are expected to recover.

While local power outages are common in Puerto Rico, an island-wide blackout is extremely rare.

[...] The Electric Power Authority, which oversees the Aguirre power plant in the southern town of Salinas, is still investigating what caused the fire. Two transmission lines were knocked down, causing circuit breakers to automatically shut down as a safety measure, affecting the broader power grid.

Additional Coverage:
CNN
NPR
USA Today


Original Submission

posted by cmn32480 on Sunday September 25 2016, @11:56AM   Printer-friendly
from the what-goes-up... dept.

China confirmed in a press conference that Tiangong-1, its first space station, put into orbit in 2011, will re-enter and burn up in the atmosphere sometime in late 2017. There seems to be some uncertainty about when it will re-enter the atmosphere, which leads one to believe that the station is not under orbital control and that it will come back to Earth in the same manner that Skylab did in 1979.


Original Submission

posted by cmn32480 on Sunday September 25 2016, @10:27AM   Printer-friendly
from the HA-HA! dept.

I always find the various authentication experiences to be more annoying than reassuring, but until now I've always managed to defeat whatever bizarre scheme a web site has created.

Yes, I'm a fan of "Reset Password."

Microsoft though has stopped me dead by refusing me access to an outlook.com [account] even though I have the email address and password.

About three years ago someone established an outlook.com email for an organization. They passed the login info on to me. I subsequently just accessed it via Gmail for the next two years.

Today I tried to log in to outlook.com to make some changes. They apparently feel that I am not who I say I am and demand some kind of "authentication."

After half an hour of repeatedly submitting "Verification Forms" (Names, Birthdate, City, Postal Code, Captchas, Previous passwords...), entering numerous PINs, and generally jumping through hoops, I have concluded that I will never, ever access this account again.

Best of all, the email quoted below offers no way to appeal this to some kind of living being.

Is this the worst authentication disaster ever? Is there any logical reason why you would make it impossible for your customers to ever recover an account?

[Continues...]

We recently received a request to recover your Microsoft account *****@outlook.com. Unfortunately, our automated system has determined that the information you provided was not sufficient for us to validate your account ownership. Microsoft takes the security and privacy of our customers very seriously, and our commitment to protecting your personal information requires that we take the utmost care in ensuring that you are the account owner.

Please submit a new account verification form

At this point, your best option is to submit a new form with as much accurate information as you can gather. The more information you can include in the form, the better the chance you'll have of regaining access to your account. We've included a few tips below to help you fill out the form as completely and accurately as possible.

> Submit a new form

Helpful tips for filling out another form:

Answer as many questions as you can.
Use the information you provided when you created the account, or last updated it.
Submit the form from a computer you frequently use.
You will be asked to list recently used email addresses and the subject lines from recent emails. Ask for help from family members, friends, or business contacts to confirm their email addresses and tell you the subject lines of the last three emails they sent you.
Make sure to use the correct domain for your account, such as hotmail.com, live.com, or outlook.com. Keep in mind that your email address may be country specific. For example, if you created your account in Sweden, your domain would be "hotmail.co.se" rather than "hotmail.com".

Ready?

> Submit a new form

Thank you,
Microsoft Support Team

Microsoft Corporation
One Microsoft Way
Redmond, WA 98052
USA


Original Submission

posted by martyb on Sunday September 25 2016, @08:37AM   Printer-friendly
from the take-this-job-and... dept.

Most everybody has been there: you've decided to quit your job and now you have to inform your employer that you're leaving. So what is the best way to resign?

Turns out, there are generally seven ways in which people quit their jobs, and there are two key factors that determine whether a person resigns in a positive way or in a way that could have damaging consequences for the business, new research from Oregon State University shows.

[...] Through a series of studies, including interviews with employees and employers, the researchers found that generally, employees quit in one of seven ways:

  • By the book: These resignations involve a face-to-face meeting with one's manager to announce the resignation, a standard notice period, and an explanation of the reason for quitting.
  • Perfunctory: These resignations are similar to "by the book" resignations, except the meeting tends to be shorter and the reason for quitting is not provided.
  • Grateful goodbye: Employees express gratitude toward their employer and often offer to help with the transition period.
  • In the loop: In these resignations, employees typically confide in their manager that they are contemplating quitting, or are looking for another job, before formally resigning.
  • Avoidant: This occurs when employees let other employees such as peers, mentors, or human resources representatives know that they plan to leave rather than giving notice to their immediate boss.
  • Bridge burning: In this resignation style, employees seek to harm the organization or its members on their way out the door, often through verbal assaults.
  • Impulsive quitting: Some employees simply walk off the job, never to return or communicate with their employer again. This can leave the organization in quite a lurch, given it is the only style in which no notice is provided.

The by the book and perfunctory resignations are the most common, but roughly one in 10 employees quits in bridge-burning style. Avoidant, bridge burning and impulsive quitting are seen as potentially harmful resignation styles for employers.

In addition, the researchers found that managers were particularly frustrated by employees who resigned using bridge burning, avoidant or perfunctory styles, so employees who want to leave on good terms should avoid those styles, Klotz said.

Have any Soylentils seen employees quit in notable or epic ways?


Original Submission

posted by martyb on Sunday September 25 2016, @07:05AM   Printer-friendly
from the car-company-namesake dept.

For fans of Nikola Tesla and historians, a link has turned up to the files the FBI compiled on the brilliant scientist and inventor. The files are in three parts of about 250 pages each. Submit a story to Soylent if you turn up any interesting nuggets!


Original Submission

posted by martyb on Sunday September 25 2016, @05:25AM   Printer-friendly

I ran across this book a month or so ago when I was reading about Illinois' upcoming bicentennial. It's a speculative fiction look at the year 2018 from a 1918 standpoint. Its author, Vachel Lindsay, was a very popular poet at the turn of his century, and this is his only novel. He started writing it in 1904, hoping to publish by Illinois' centennial, but it was finally published in 1920.

The Golden Book of Springfield is speculative fiction, but rather than science fiction, it is future fantasy fiction. The Golden Book flies into a room; it isn't a drone, as someone in our time would presume, but magic. His war is fought on horseback with swords. He has a character shouting from a platform, Lindsay being unaware that in a hundred years a thing called a "bullhorn" would be commonplace.

He does introduce a very few innovations, such as a "lens gun," "a new kind of" movie projector, and the "corn-dragon engine," which isn't an internal combustion engine running on ethanol or methanol, but a new kind of steam locomotive. A "corn-dragon" was a steam train driving past a cornfield, the "dragon" being what was called then "smokestack lightning".

It's surrealism, and going by what Wikipedia has to say, some of the first, although a college history class told me surrealism started in Germany. It's about the dreams he and his friends (whether the friends are real or fictional) had of his city a hundred years in his future, two years in my future as I write this.

It's also a ghost story, written by a ghost. It's about the 1918 Vachel Lindsay haunting our present. It reads "And my bones crumble through the century, like last year's autumn leaves. Then there is, alternating with drouth, bitter frost. And roots wrap my heart and brain. And there is sleep.

"Then a galloping and gay shrieking, away on the road, to the East of Oak Ridge! And though I am six feet beneath the ground the eyes of the soul are given me. I see wonderful young horsewomen..."

Lindsay committed suicide in 1931. If that's not a ghost story, what is?

JNCF asked "I find myself wondering how reasonable you think your depiction of 23rd century Mars is." The answer is a definite, loud, "HELL NO." Reading this book (or any old SF) will tell you why.


Original Submission

posted by janrinok on Sunday September 25 2016, @03:47AM   Printer-friendly
from the beam-me-up dept.

The Fraunhofer Institute for Organic Electronics, Electron Beam and Plasma Technology FEP now has the technological means to apply electron beams very flexibly to 3-D objects, through use of the new electron wand from the Swiss company ebeam by COMET.

Electron beams are useful in many different applications. They reliably sterilize seeds, can weld small structures precisely and reliably, and cure decorative paint. Usually this involves planar, flexible, or slightly curved surfaces. However, applying electron beams homogeneously to 3-D objects of any size or shape has not been so simple up to now.

Scientists at Fraunhofer FEP have now elegantly combined an electron wand with a six-axis robotic manipulator in order to be able to treat substrates with complex shapes as well as spherical objects, for example.

"The electron wand remains stationary in this process", explains Javier Portillo, a scientist at Fraunhofer FEP. "The manipulator rotates the objects within the irradiation zone in a way that the surface is treated homogeneously. This creates the maximum degree of freedom when applying an electron beam to a 3-D object."

Normally you need several electron-beam sources to treat 3-D objects, and even then homogeneous application is not reliably achieved everywhere. Moving the object multiaxially within the electron treatment zone can therefore offer advantages. Applying electron beams to optical components is also conceivable.


Original Submission

posted by janrinok on Sunday September 25 2016, @02:09AM   Printer-friendly
from the how-it's-done dept.

Following up on NCommander's recent "original content" series, I decided to write up a bit about a recent thesis I supervised. Full disclaimer: I supervised this thesis and I'd like to see the thesis results being used more widely.

Introduction
In 2013, the nation-wide exams for primary schools somehow leaked onto the streets and were being traded over the internet. One newspaper reporting on it showed a blurred page of the exam questions 5 days before the exam was to take place. This made me wonder: would it be possible to recover the exam questions from that blurred photo?

Fast-forward one bachelor thesis investigating this (resulting in a 3/4ths complete plugin) and a few years. A team of BSc students is interested in picking up the deblurring project. Previous experience showed that asking one student to pick this up for a BSc thesis is a lot, but at my current institute BSc theses are done in teams. Challenging, but why not?

The original assignment was to implement Cho's algorithm for deblurring [Cho et al 2013] as a GIMP plugin. The previous bachelor thesis had found this algorithm to be the best deblurring algorithm for recovering text. However, time marches on. During the literature review phase, the team came across some advances in deblurring. Moreover, the algorithm's description in the paper was incomplete, and the algorithm was patented. (Interestingly enough, the patent did not clarify the incompleteness.) There was a newer algorithm by Pan et al [Pan et al 2014] that was simpler, faster, and open source. However, the original was coded in Matlab, which is (1) proprietary, (2) not freely available, and (3) not in much use by people who want to edit pictures.

So, the team investigated both algorithms in great (and hairy) detail, and implemented Pan et al's algo as an open source GIMP plugin. This required a working understanding of the maths involved (which is not explicitly taught in the Bachelor programme). Moreover, the end result is a sleek piece of work, showcasing the team's CS creds as well.

Below, a tiny bit about how blurring works, and how to deblur. I'll skip most of the maths, promised.

[Continues ...]

Blurring
Blurring is an operation on a digital image. We can represent each pixel by a value (rgb, grayscale, etc.). For example (marking one pixel for blurring):

145 167 222 255 255 255
109 112 216 255 243 248
124 131 [143] 128 232 201
145 167 140 198 196 203
113 125 132 162 178 193

For each pixel in the output image, its value is computed as a weighted average of the neighboring pixels in the input image. In a typical Gaussian blur, the pixel under consideration is given the highest weight, and the weights for pixels further away decrease with distance from this pixel.
There are two parameters: how many neighbors to consider, and how to distribute the weights. For example, if you take into account all neighbors one pixel away (including diagonals), you could assign the central pixel a weight of 40%, the neighbors up/down/left/right weights of 10% each, and the diagonal neighbors weights of 5% each (so all weights sum to 100%):

0.05 0.10 0.05
0.10 0.40 0.10
0.05 0.10 0.05

This matrix is called the kernel. The one here is for a Gaussian blur ("a bit vagued up" if you will). If we use it to blur the marked pixel (original value: 143), we put the kernel's center on the pixel under consideration and then just multiply all weights and add the result together. Overlaying results in the following:

0.05 * 112 0.10 * 216 0.05 * 255
0.10 * 131 0.40 * 143 0.10 * 128
0.05 * 167 0.10 * 140 0.05 * 198

And adding all these together results in 155.3, which rounds to 155.
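The weighted-average step above can be sketched in a few lines of Python (plain lists, no GIMP or NumPy assumed; edge pixels are ignored for brevity):

```python
# The example image from above, with the marked pixel at row 2, column 2.
image = [
    [145, 167, 222, 255, 255, 255],
    [109, 112, 216, 255, 243, 248],
    [124, 131, 143, 128, 232, 201],
    [145, 167, 140, 198, 196, 203],
    [113, 125, 132, 162, 178, 193],
]

# The 3x3 Gaussian kernel from above.
kernel = [
    [0.05, 0.10, 0.05],
    [0.10, 0.40, 0.10],
    [0.05, 0.10, 0.05],
]

def blur_pixel(img, row, col, k):
    """Blur one pixel: center the kernel on it, multiply each weight
    by the pixel underneath, and sum the results."""
    r = len(k) // 2
    total = 0.0
    for dy in range(-r, r + 1):
        for dx in range(-r, r + 1):
            total += k[dy + r][dx + r] * img[row + dy][col + dx]
    return round(total)

print(blur_pixel(image, 2, 2, kernel))  # → 155
```

A full blur simply repeats this for every pixel of the output image.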

Of course, other kernel sizes and other distributions are possible. Another typical class of blur is motion blur - the blur you'd get in the background if you take a photo of a racecar while keeping the racecar in focus. While the weights in a Gaussian blur trail off (sort of) circularly around the pixel being blurred, in motion blur the main weights are distributed along the direction of motion - usually left to right or vice versa.
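For illustration, a minimal horizontal motion-blur kernel might spread equal weights along one row (a hypothetical 1×5 example, not taken from the thesis):

```python
# Hypothetical 1x5 horizontal motion-blur kernel: equal weights along
# the direction of motion; a longer row would model a longer streak.
motion_kernel = [[0.2, 0.2, 0.2, 0.2, 0.2]]

# Like the Gaussian kernel above, the weights sum to 1 so that overall
# brightness is preserved.
print(sum(motion_kernel[0]))
```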

How to deblur
Blurring can be expressed as: Y = X * K + n.
Here, X is the input image, Y the output image, K the kernel, n a noise factor, and '*' a mathematical operation called convolution. Deblurring is then a matter of inverting this, i.e. deconvolution. Deblurring when you don't know the kernel originally used is called blind deconvolution. There is a ton of work on this. What we're interested in is recovering text from blurred images. So perhaps there's a way to use specific properties of text to "help" the blind deconvolution?

This is what Cho et al's algorithm did. It used the fact that text has high contrast (e.g. black on white). Of course, not all such stark contrasts are text. Cho et al used an algorithm to determine the "stroke width" - the width of each stroke of the letters. If this was fairly consistent, then this was probably a letter.

Of course, if you're looking for strokes in the picture, your process slows down quite a bit. Interestingly enough, a follow-up algorithm by Pan et al does not use specific features of text, but still improves upon the algorithm of Cho et al.

Both algorithms follow roughly the same outline: first, a kernel is estimated and a preliminary deblurred image is computed. The preliminary result is then used to improve the kernel estimate, and so on. Finally, the final kernel estimate is applied to the original input image -- the preliminary images help reveal details about contrast, but they ruin the image and thus cannot be used for this step.
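The alternating outline reads roughly like this in Python. The two update steps are deliberately left as placeholders, since the real estimators in Cho's and Pan's papers are far more involved:

```python
import numpy as np

def estimate_kernel(y, x, k):
    # Placeholder: a real implementation refines k so that x * k ≈ y
    # (e.g. via least squares in the frequency domain).
    return k

def preliminary_deblur(y, k):
    # Placeholder: a real implementation deconvolves y with the
    # current kernel estimate (e.g. with regularization).
    return y

def blind_deblur(y, kernel_size=5, iterations=10):
    """Structural sketch of the alternating outline described above."""
    # Neutral starting guesses: uniform kernel, blurred input as latent image.
    k = np.full((kernel_size, kernel_size), 1.0 / kernel_size ** 2)
    x = y.copy()
    for _ in range(iterations):
        k = estimate_kernel(y, x, k)   # refine kernel from current latent image
        x = preliminary_deblur(y, k)   # preliminary deblur with refined kernel
    # The preliminary images only guide the kernel estimate; the final
    # deconvolution runs once more on the original input y.
    return preliminary_deblur(y, k), k
```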

There's a lot more to this (Fourier transforms, gradients, Laplacian deconvolution, etc), but that goes beyond an already lengthy article. If you're interested, read the thesis or see the papers.

About the plugin
You might think it's easy to convert code from Matlab to a GIMP plugin. Well... no.
You see, Matlab comes packed to the brim with mathematical functions. GIMP plugins are coded in C, C++, or Python. So you have to use libraries for the maths functions you need. Unfortunately, some implementations of the mathematical functions needed give different results than others. The team had to discover that the hard way, and then either update their calls if they could track down the problem or implement the function themselves if they couldn't.

But that's just the mathematics - how does it work, you ask?
Well, the end result is a sleek-looking, mature plugin. You select an area, run the filter, and up pops a dialog window where you can set some parameters (e.g. kernel size estimate), and even toggle a preview function. What impressed me is that the team managed to implement a meaningful progress bar. Since the algorithm loops, and the amount of processing in each loop is not universally determined, this was pretty tricky. They ended up running a lot of test cases with timing controls embedded in the code, and found out how much progress was made in each step.
Sometimes, the progress bar pauses for a bit longer than anticipated - basically when the next portion uses more time than the generic estimate allowed.

All in all, I encourage you to download the plugin and play with it. I'll try to be around in the comments when this story runs.

Final words and links
- The GIMP plugin is available from here
- Pan et al's project page

In conclusion: there is a free, open source GIMP plugin available. The plugin could be (significantly) sped up by making use of GPU processing - but this is a finished academic project, so don't look to me for that. Interesting to note is that the project took a bit of a side step: Pan's algorithm (like Cho's) focuses mainly on deblurring uniform motion blur, not on deblurring artificially induced blur.
Given that the main difference between these is the kernel used for blurring, it's possible to update the plugin to be able to inject your own specially crafted kernel estimate to be used for deblurring. Basically, instead of the algorithm estimating a blur kernel, you just supply it. That would enable it to be used for the originally intended purpose.

Be that as it may, I'm (more than) happy with the result, and hope that others will make use of this plugin.


Original Submission

posted by janrinok on Sunday September 25 2016, @12:39AM   Printer-friendly
from the can't-be-good-at-everything dept.

Every study ranking nations by health or living standards invariably offers Scandinavian social democracies a chance to show their quiet dominance. A new analysis published this week—perhaps the most comprehensive ever—is no different. But what it does reveal are the broad shortcomings of sustainable development efforts, the new shorthand for not killing ourselves or the planet, as well as the specific afflictions of a certain North American country.

Iceland and Sweden share the top slot with Singapore as world leaders when it comes to health goals set by the United Nations, according to a report published in The Lancet. Using the UN's sustainable development goals as guideposts, which measure the obvious (poverty, clean water, education) and the less obvious (societal inequality, industry innovation), more than 1,870 researchers in 124 countries compiled data on 33 different indicators of progress toward the UN goals related to health.

The massive study emerged from a decade-long collaboration focused on the worldwide distribution of disease. About a year and a half ago, the researchers involved decided their data might help measure progress on what may be the single most ambitious undertaking humans have ever committed themselves to: survival. In doing so, they came up with some disturbing findings, including that the country with the biggest economy (not to mention, if we're talking about health, multibillion-dollar health-food and fitness industries) ranks No. 28 overall, between Japan and Estonia.

[...]

The voluminous work that went into the paper may make measuring the UN goals on health seem even more daunting: The researchers were able so far to evaluate just 70 percent of the health-related indicators called for by the UN.

It may not be pretty, but "we have no chance of success if we can't agree on what's critical," said Linda Fried, dean of the Mailman School of Public Health at Columbia University.


Original Submission

posted by janrinok on Saturday September 24 2016, @11:04PM   Printer-friendly
from the if-only-I-could-do-it-over dept.

Vint Cerf is considered a father of the internet, but that doesn't mean there aren't things he would do differently if given a fresh chance to create it all over again.

"If I could have justified it, putting in a 128-bit address space would have been nice so we wouldn't have to go through this painful, 20-year process of going from IPv4 to IPv6," Cerf told an audience of journalists Thursday during a press conference at the Heidelberg Laureate Forum in Germany.

IPv4, the first publicly used version of the Internet Protocol, included an addressing system that used 32-bit numerical identifiers. It soon became apparent that this would lead to an exhaustion of addresses, spurring the creation of IPv6 as a replacement. Roughly a year ago, North America officially ran out of new IPv4-based addresses.
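The scale gap between the two address spaces is easy to check with a line or two of arithmetic:

```python
# 32-bit IPv4 vs. the 128-bit space Cerf says he would have preferred.
ipv4_addresses = 2 ** 32
ipv6_addresses = 2 ** 128

print(ipv4_addresses)                    # → 4294967296
print(ipv6_addresses // ipv4_addresses)  # 2**96 IPv6 addresses per IPv4 address
```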

For security, public key cryptography is another thing Cerf would like to have added, had it been feasible.

Trouble is, neither idea is likely to have made it into the final result at the time. "I doubt I could have gotten away with either one," said Cerf, who won a Turing Award in 2004 and is now vice president and chief internet evangelist at Google. "So today we have to retrofit."


Original Submission