SoylentNews is people

When transferring multiple 100+ MB files between computers or devices, I typically use:

  • USB memory stick, SD card, or similar
  • External hard drive
  • Optical media (CD/DVD/Blu-ray)
  • Network app (rsync, scp, etc.)
  • Network file system (nfs, samba, etc.)
  • The "cloud" (Dropbox, iCloud, Google Drive, etc.)
  • Email
  • Other (specify in comments)


posted by cmn32480 on Thursday January 04 2018, @11:42PM
from the gotta-be-hip dept.

Nvidia's updated license for NVIDIA GeForce Software bans most usage of gaming-oriented GPUs in data centers, except for the purpose of "blockchain processing":

Nvidia has banned the use of its GeForce and Titan gaming graphics cards in data centers – forcing organizations to fork out for more expensive gear, like its latest Tesla V100 chips. The chip-design giant updated its GeForce and Titan software licensing in the past few days, adding a new clause that reads: "No Datacenter Deployment. The SOFTWARE is not licensed for datacenter deployment, except that blockchain processing in a datacenter is permitted."

In other words, if you wanted to bung a bunch of GeForce GPUs into a server box and use them to accelerate math-heavy software – such as machine learning, simulations and analytics – then, well, you can't without breaking your licensing agreement with Nvidia. Unless you're doing trendy blockchain stuff.

A copy of the license in the Google cache, dated December 31, 2017, shows no mention of the data center ban. Open the page today, and, oh look, data center use is verboten. To be precise, the controversial end-user license agreement (EULA) terms cover the drivers for Nvidia's GeForce GTX and Titan graphics cards. However, without Nvidia's proprietary drivers, you can't unlock the full potential of the hardware, so Nv has you over a barrel.

It's not just a blow for people building their own servers and data centers, it's a blow for any computer manufacturer – such as HPE or Dell – that hoped to flog GPU-accelerated servers, using GTX or Titan hardware, much cheaper than Nvidia charges for, say, its expensive DGX family of GPU-accelerated servers. A DGX-1 with Tesla V100 chips costs about $150,000 from Nvidia. A GeForce or Titan-powered box would cost much less albeit with much less processing power.

NVIDIA's DGX-1 product page.

Also at DataCenter Knowledge.


Original Submission

posted by martyb on Thursday January 04 2018, @09:56PM
from the up-in-smoke dept.

U.S. Attorney General Jeff Sessions will reportedly rescind the Cole Memo (DoJ), effectively ending the moratorium on enforcing cannabis prohibition in states where it has been legalized:

Attorney General Jeff Sessions will roll back an Obama-era policy that gave states leeway to allow marijuana for recreational purposes.

Two sources with knowledge of the decision confirmed to The Hill that Sessions will rescind the so-called Cole memo, which ordered U.S. attorneys in states where marijuana has been legalized to deprioritize prosecution of marijuana-related cases.

The Associated Press first reported the decision.

Sessions, a vocal critic of marijuana legalization, has hinted for months that he would move to crack down on the growing cannabis market.

Republican Senator Cory Gardner says he will hold up the confirmation process for DoJ nominees:

Sen. Cory Gardner (R-Colo.) threatened on Thursday to start holding up the confirmation process for White House Justice Department nominees unless Attorney General Jeff Sessions reverses a decision to roll back a policy allowing legalized recreational use of marijuana in some states.

Gardner said in a series of tweets that Sessions had told him before he was confirmed by the Senate that he would not change an Obama-era policy that discouraged federal prosecutors from pursuing marijuana-related offenses in states where the substance had been legalized. Colorado is one of those states.

[...] The Justice Department's reversal of the Cole memo on Thursday came three days after California's new law allowing recreational marijuana use went into effect.

Other politicians have reacted strongly to the news.

Previously: New Attorney General Claims Legal Weed Drives Violent Crime; Statistics be Damned
4/20: The Third Time's Not the Charm
Jeff Sessions Reboots the Drug War
According to Gallup, American Support for Cannabis Legalization is at an All-Time High
Opioid Commission Drops the Ball, Demonizes Cannabis
Recreational Cannabis Goes on Sale in California

Related: Attorney General Nominee Jeff Sessions Backs Crypto Backdoors


Original Submission

posted by Fnord666 on Thursday January 04 2018, @08:23PM
from the a-little-bit-at-a-time? dept.

With the recent brouhaha about vulnerabilities in many relatively recent processors, I got to thinking back to the time when I first started programming. Back then, things seemed so much simpler and much more straightforward.

To start off the new year, I thought it might be interesting to find out how people got their start in programming.

My first exposure to programming was by means of a Teletype over a dialup line, using an acoustic coupler, to a PDP-8 computer running TSS/8 with 24 KB of RAM. At the time, Star Trek TOS was on the air, and I thought this was the new big thing. I was quickly disappointed that it didn't measure up to anything like what I saw on TV, but I could see it had promise. I started with BASIC (and FOCAL). Later I was exposed to a PDP-11 running RSTS/E and programmed in BASIC+ as well as some Pascal.

As for owning a computer, the first one I bought was an OSI[*] Challenger 4P with a whopping 4KB of RAM!

From those humble beginnings, I ate up everything I could lay my hands on and later worked for a wide variety of companies that ranged in size from major internationals to tiny startups. Even had a hand in a project for Formula 1!

So, my fellow Soylentils, how did you get started programming? Where has it taken you?

[*] One day when my girlfriend came over and saw the OSI logo on my computer her eyes got huge! You see, The Six Million Dollar Man was on television at that time, and she suddenly suspected I was connected to the "Office of Scientific Intelligence"!


Original Submission

posted by Fnord666 on Thursday January 04 2018, @06:50PM
from the surprising-to-no-one dept.

From Security Week we have a report that nearly a quarter-million people have had Personally Identifiable Information (PII) compromised by the Department of Homeland Security:

The privacy incident involved a database used by the DHS Office of the Inspector General (OIG) which was stored in the DHS OIG Case Management System.

The incident impacted approximately 247,167 current and former federal employees who were employed by DHS in 2014. The exposed personally identifiable information (PII) of these individuals includes names, Social Security numbers, birth dates, positions, grades, and duty stations.

Individuals (both DHS employees and non-DHS employees) associated with DHS OIG investigations from 2002 through 2014 (including subjects, witnesses, and complainants) were also affected by the incident, the DHS said.

The PII associated with these individuals varies depending on the documentation and evidence collected for a given case and could include names, social security numbers, alien registration numbers, dates of birth, email addresses, phone numbers, addresses, and personal information provided in interviews with DHS OIG investigative agents.

The data breach wasn’t the result of an external attack, the DHS claims. The leaked data was found in an unauthorized copy of the DHS OIG investigative case management system that was in the possession of a former DHS OIG employee.

The data breach was discovered on May 10, 2017, as part of an ongoing criminal investigation conducted by DHS OIG and the U.S. Attorney’s Office.

“The privacy incident did not stem from a cyber-attack by external actors, and the evidence indicates that affected individual’s personal information was not the primary target of the unauthorized exfiltration,” DHS explained.

No word on whether or not the copy was encrypted in any fashion. Is this a genuine issue, or just the result of an employee making a local copy of the DHS case management system to work on from home?


Original Submission

posted by martyb on Thursday January 04 2018, @05:17PM
from the Pew!-Pew! dept.

In July this year it will be 40 years since the ultra-popular video game Space Invaders first hit the arcades:

Arcade historians know that 1978 was a big year in arcade games, and Taito knows it too, since they released a game that year that put them on the map. Unfortunately, Taito hasn't done anything earth-shaking in the past few years, but they will certainly be celebrating the 40th anniversary of Space Invaders throughout 2018. They've started by launching this special website commemorating the original; if you frequent modern arcades, then you have likely come across the new Space Invaders Frenzy by Raw Thrills.

For the record, the arcade version was released in July of 1978.

Any other Soylentils remember when this first arrived in the arcades?


Original Submission

posted by martyb on Thursday January 04 2018, @03:44PM
from the C++ dept.

The US National Academy of Engineering has announced that Bjarne Stroustrup will receive the 2018 Charles Stark Draper Prize for Engineering for his creation of C++ while at Bell Labs. C++ is, to put it mildly, widely used. The prize will be formally awarded on February 20th in Washington, DC.

Here is Bjarne's home page and his Wikipedia page.


Original Submission

posted by cmn32480 on Thursday January 04 2018, @02:11PM
from the what-if-you-can't-program-your-way-out-of-a-paper-bag? dept.

Agile Development is hip. It's hot. All the cool kids are doing it.

But it doesn't work.

Before I get into why this "Agile" stuff is horrible, let's describe where Agile/Scrum can work. It can work for a time-sensitive and critical project of short duration (6 weeks max) that cross-cuts the business and has no clear manager, because it involves people from multiple departments. You can call it a "Code Red" or call it a Scrum or a "War Room" if you have a physical room for it.

Note that "Agile" comes from the consulting world. It suits the needs of a small consulting firm well: one not yet very well-established, that lands one big-ticket project and needs to deliver it quickly, despite changing requirements and other potential bad behavior from the client. It works well when you have a relatively homogeneous talent level and a staff of generalists, which might also be true of an emerging web consultancy.

As a short-term methodology when a firm faces an existential risk or a game-changing opportunity, I'm not opposed to the "Code Red"/"crunch time"/Scrum practice of ignoring people's career goals and their individual talents. I have in mind that this "Code Red" state should exist for no more than 6 weeks per year in a well-run business. Even that's less than ideal: the ideal is zero. Frequent crises reflect poorly on management.


Original Submission

posted by janrinok on Thursday January 04 2018, @12:36PM
from the innovators-or-gamblers dept.

The CBC reports: http://www.cbc.ca/news/business/bitcoin-s-gender-divide-could-be-a-bad-sign-experts-say

Bitcoin, and the world of cryptocurrency, is a boys' club, say some experts, and that should be cause for concern.

Google Analytics results put the divide at 96.57 per cent men to 3.43 per cent women: https://coin.dance/stats/gender.

That's a huge red flag to Duncan Stewart, research director of Deloitte Canada's technology division. "It isn't merely that the value has risen as far and as fast as it has; it's the fact that it's 97 per cent men — that is, in and of itself, a potential danger sign," he says. "There are studies out there that suggest men are predisposed towards bubbles in a way that women are not."

Stewart made his case in a recent online post about the subject: https://www.linkedin.com/pulse/bitcoin-bubble-gender-split-says-probably-duncan-stewart/?trackingId=LlXWi2rCxUW0itfA92%2BhSQ%3D%3D

Stewart said he "cannot think of any security, currency or asset class in history that shows that extreme a gender divide and has been sustainable."

[...] Iliana Oris Valiente is a rarity in the cryptocurrency world. She has emerged as a female leader in this space and was recently chosen to lead consulting firm Accenture's global blockchain innovation division. Oris Valiente doesn't buy into the theory that an outsized amount of male interest in a particular asset in and of itself creates a bubble. "If we have primarily men involved in building the businesses and being the early-stage investors, they're likely to share the new tidbits and the new deals with their own established networks."

But without a major catalyst, she doesn't see the gender divide in this field narrowing anytime soon.


Original Submission

posted by janrinok on Thursday January 04 2018, @10:43AM
from the mish-mash-mesh dept.

Submitted via IRC for Fnord666_

Getting WiFi to every corner of your home is made much easier these days with a mesh network, which uses a specialized router and individual nodes that can configure themselves. Companies like Netgear, Samsung and ASUS all have kits at varying prices that can help you make one in your own home, but you generally have to purchase a whole new set of devices to make it work. Now, ASUS is offering AiMesh, a system that uses your current ASUS routers to create a mesh network without pricey extra hardware.

Since you're using routers that you already own to create a mesh network, you can decide which one is the primary and which will act as nodes. You simply find the router with the best capabilities, drop it in a central location, then use the built-in software to configure the network.

AiMesh only runs on routers from ASUS, though.

Source: https://www.engadget.com/2018/01/03/asus-mesh-wifi-aimesh/


Original Submission

posted by martyb on Thursday January 04 2018, @08:35AM
from the scratch-an-itch dept.

I love FOSS, and even though it doesn't work as a model for everything, there are some kinds of applications that just seem to be a perfect fit.

I think one such application is software for CAD as it relates to construction and land surveying (my trade). Much of the design and record data from the field must be accessible for decades, and this fact alone builds a strong case for using open formats. Unfortunately, and much to the chagrin of all of the surveyors I know, there seems to be a slow push by the software side of the industry away from using the open formats of old toward proprietary formats. A lot of this is caused by the ever-increasing complexity (and reinventing of the wheel) of design software; however, when it comes to boots on the ground, not much has changed with means and methods. There are only so many ways to accomplish what we do, and most of it has already been optimized. The result of this push toward proprietary formats and overkill software has been the abandonment of good, functional, and simple proprietary software that just worked. Many of the companies that created this good software no longer exist because they have been embraced and extinguished by larger players. There is a growing reality that the only option to keep work going is to pay many thousands of dollars a year per person for what should be a fairly simple piece of software. This is not the kind of software that would require a lot of support.

So my question is this: What is the best way for me to begin a successful FOSS project like this?

For the record, I am not a programmer, but I dabble from time to time. I could foresee it being a fairly easy sell to convince the powers that be to throw some money (a one-time cost) at a development team to create for us what we need. Between the different companies and contacts that I know in the industry, a sort of corporate crowdfunding effort is not far-fetched. Why the heck isn't this already done for all the standard corporate software, rather than paying needless licensing fees into perpetuity? Sometimes software just becomes stable. A FOSS solution would be a godsend to smaller mom-and-pop operations, and I think it could cure some of my resentment of people constantly breaking good things for the sake of "progress".

BTW, I have looked at some of the existing open source CAD software and found it all pretty wanting. Could requesting special functionality from these developers be a better route than starting from scratch? Thanks in advance!


Original Submission

posted by Fnord666 on Thursday January 04 2018, @07:02AM
from the gold-diggers dept.

On January 25, Global-scale Observations of the Limb and Disk (GOLD) will become NASA's first science instrument to launch aboard a geostationary commercial satellite:

The Global-scale Observations of the Limb and Disk (GOLD) mission is an instrument launching on a commercial satellite to inspect from geostationary orbit the dynamic intermingling of space and Earth's uppermost atmosphere. GOLD will seek to understand what drives change in this region where terrestrial weather in the lower atmosphere interacts with the tumult of solar activity from above and Earth's magnetic field. Resulting data will improve forecasting models of space weather events that can impact life on Earth, as well as satellites and astronauts in space.

NASA will hold a press conference about the mission at 1 PM EST on Thursday.

The mission will study the thermosphere and ionosphere using a far-ultraviolet imaging spectrograph. Richard Eastes from the Florida Space Institute at the University of Central Florida leads the mission.

The SES-14 commercial payload will replace NSS-806, a communications satellite covering Latin America, the Iberian peninsula, Canary Islands, Western Europe and much of Eastern Europe.


Original Submission

posted by Fnord666 on Thursday January 04 2018, @05:27AM
from the Open-the-pod-bay-door,-HAL dept.

At Venture Beat:

From Microsoft's accidentally racist bot to Inspirobot's dark memes, AI often wanders into transgressive territories. Why does this happen, and can we stop it?

Inspirobot seems very interesting.

Another example of AI gone awry is Inspirobot. Created by Norwegian artist and coder Peder Jørgensen, the inspirational quote-generating AI creates some memes that would be incredibly bleak if the source weren't a robot. News publications called it an AI in crisis or claimed the bot had "gone crazy." Inspirobot's transgression differs from Tay's, though, because of its humor. Its deviance serves as entertainment in a world that has a low tolerance of impropriety from people, who should know better.

What the bot became was not the creator's intention by a long shot. Jørgensen thinks the cause lies in the bot's algorithmic core. "It is a search system that compiles the conversations and ideas of people online, analyzes them, and reshapes them into the inspirational counterpoints it deems suitable," he explained. "Given the current state of the internet, we fear that the bot's mood will only get worse with time."

The creators' attempts to moderate "its lean towards cruelty and controversy" so far have only seemed "to make it more advanced and more nihilistic."


Original Submission

posted by Fnord666 on Thursday January 04 2018, @03:54AM
from the all-we-are-is-dust-in-the-wind dept.

The dips in brightness observed at Tabby's star are still probably caused by dust, and not alien megastructures:

For the last two years, astronomers all over the world have been eagerly observing what is hailed as "the most mysterious star in the Universe," a stellar object that wildly fluctuates in brightness with no discernible pattern — and now they may finally have an answer for its weird behavior. Scientists are fairly certain that a bunch of dust surrounding the star is to blame. And that means that the more tantalizing explanation — alien involvement — is definitely not the cause.

It's the most solid solution yet that astronomers have come up with for this star's odd ways. Named KIC 8462852, the star doesn't act like any star we've ever seen before. Its light fluctuations are extreme, dimming by up to 20 percent at times. And its dips don't seem to repeat in a predictable way. That means something really big and irregular is passing in front of this star, leading scientists to suggest a number of possible objects that could be blocking the star's light — from a family of large comets to even "alien megastructures" orbiting the star.

Also at Sky & Telescope and Discover Magazine.

The First Post-Kepler Brightness Dips of KIC 8462852

We present a photometric detection of the first brightness dips of the unique variable star KIC 8462852 since the end of the Kepler space mission in 2013 May. Our regular photometric surveillance started in October 2015, and a sequence of dipping began in 2017 May continuing on through the end of 2017, when the star was no longer visible from Earth. We distinguish four main 1-2.5% dips, named "Elsie," "Celeste," "Skara Brae," and "Angkor," which persist on timescales from several days to weeks. Our main results so far are: (i) there are no apparent changes of the stellar spectrum or polarization during the dips; (ii) the multiband photometry of the dips shows differential reddening favoring non-grey extinction. Therefore, our data are inconsistent with dip models that invoke optically thick material, but rather they are in line with predictions for an occulter consisting primarily of ordinary dust, where much of the material must be optically thin with a size scale ≪ 1 μm, and may also be consistent with models invoking variations intrinsic to the stellar photosphere. Notably, our data do not place constraints on the color of the longer-term "secular" dimming, which may be caused by independent processes, or probe different regimes of a single process.

Previously: Dust the Likely Cause of Tabby's Star Dimming


Original Submission

posted by Fnord666 on Thursday January 04 2018, @02:21AM
from the what-long-term-side-effects dept.

New drug approvals hit 21-year high in 2017

U.S. drug approvals hit a 21-year high in 2017, with 46 novel medicines winning a green light -- more than double the previous year -- while the figure also rose in the European Union.

The EU recommended 92 new drugs including generics, up from 81, and China laid out plans to speed up approvals in what is now the world's second biggest market behind the United States.

Yet the world's biggest drugmakers saw average returns on their research and development spending fall, reflecting more competitive pressures and the growing share of new products now coming from younger biotech companies. Consultancy Deloitte said last month that projected returns at 12 of the world's top drugmakers were at an eight-year low of only 3.2 percent.

Many of the drugs receiving a green light in 2017 were for rare diseases and sub-types of cancer, which often target very small populations, although they can cost hundreds of thousands of dollars. Significantly, the U.S. drug tally of 46 does not include the first of a new wave of cell and gene therapies from Novartis, Gilead Sciences and Spark Therapeutics that were approved in 2017 under a separate category.

Food and Drug Administration Commissioner Scott Gottlieb has indicated that it might be time to revise the Orphan Drug Act of 1983.


Original Submission

posted by Fnord666 on Thursday January 04 2018, @01:04AM
from the small-price-to-pay dept.

UPDATE 2: (martyb)

This still-developing story is full of twists and turns. It seems that Intel chips are definitely implicated (AFAICT anything post Pentium Pro). There have been various reports, and denials, that AMD and ARM are also affected. There are actually two vulnerabilities being addressed. Reports are that a local user can access arbitrary kernel memory and that, separately, a process in a VM can access contents of other virtual machines on a host system. These discoveries were embargoed for release until January 9th, but were pre-empted when The Register first leaked news of the issues.

At this time, manufacturers are scrambling to make statements on their products' susceptibility. Expect a slew of releases of urgent security fixes for a variety of OSs, as well as mandatory reboots of VMs on cloud services such as Azure and AWS. Implications are that there is going to be a performance hit on most systems, which may have cascading follow-on effects for performance-dependent activities like DB servers.

To get started, see the very readable and clearly-written article at Ars Technica: What's behind the Intel design flaw forcing numerous patches?

Google Security Blog: Today's CPU vulnerability: what you need to know.
Google Project Zero: Reading privileged memory with a side-channel, which goes into detail as to what problems are being addressed as well as including CVEs:

So far, there are three known variants of the issue:

  • Variant 1: bounds check bypass (CVE-2017-5753)
  • Variant 2: branch target injection (CVE-2017-5715)
  • Variant 3: rogue data cache load (CVE-2017-5754)

Before the issues described here were publicly disclosed, Daniel Gruss, Moritz Lipp, Yuval Yarom, Paul Kocher, Daniel Genkin, Michael Schwarz, Mike Hamburg, Stefan Mangard, Thomas Prescher and Werner Haas also reported them; their [writeups/blogposts/paper drafts] are at:

During the course of our research, we developed the following proofs of concept (PoCs):

  1. A PoC that demonstrates the basic principles behind variant 1 in userspace on the tested Intel Haswell Xeon CPU, the AMD FX CPU, the AMD PRO CPU and an ARM Cortex A57 [2]. This PoC only tests for the ability to read data inside mis-speculated execution within the same process, without crossing any privilege boundaries.
  2. A PoC for variant 1 that, when running with normal user privileges under a modern Linux kernel with a distro-standard config, can perform arbitrary reads in a 4GiB range [3] in kernel virtual memory on the Intel Haswell Xeon CPU. If the kernel's BPF JIT is enabled (non-default configuration), it also works on the AMD PRO CPU. On the Intel Haswell Xeon CPU, kernel virtual memory can be read at a rate of around 2000 bytes per second after around 4 seconds of startup time. [4]
  3. A PoC for variant 2 that, when running with root privileges inside a KVM guest created using virt-manager on the Intel Haswell Xeon CPU, with a specific (now outdated) version of Debian's distro kernel [5] running on the host, can read host kernel memory at a rate of around 1500 bytes/second, with room for optimization. Before the attack can be performed, some initialization has to be performed that takes roughly between 10 and 30 minutes for a machine with 64GiB of RAM; the needed time should scale roughly linearly with the amount of host RAM. (If 2MB hugepages are available to the guest, the initialization should be much faster, but that hasn't been tested.)
  4. A PoC for variant 3 that, when running with normal user privileges, can read kernel memory on the Intel Haswell Xeon CPU under some precondition. We believe that this precondition is that the targeted kernel memory is present in the L1D cache.

According to a report in Barron's:

AMD said through a spokesperson:

There is a lot of speculation today regarding a potential security issue related to modern microprocessors and speculative execution. As we typically do when a potential security issue is identified, AMD has been working across our ecosystem to evaluate and respond to the speculative execution attack identified by a security research team to ensure our users are protected.

To be clear, the security research team identified three variants targeting speculative execution. The threat and the response to the three variants differ by microprocessor company, and AMD is not susceptible to all three variants. Due to differences in AMD’s architecture, we believe there is a near zero risk to AMD processors at this time. We expect the security research to be published later today and will provide further updates at that time.

UPDATE 1: (takyon)

Intel statement.

Original story follows

It appears that all modern Intel processors contain a hardware-level security flaw. Details are being suppressed while patches are developed, but it appears that a user process can put a reference to a privileged address in speculative execution, and thereby bypass privilege restrictions.

Since simply marking certain areas of the virtual memory space as privileged is insecure, operating systems will have to completely isolate their kernels, i.e., remove them from the virtual memory space of user processes. This will make context switching more expensive. Presumably, every OS call will now require swapping out the virtual memory map and flushing the processor's page-table cache (the TLB). Twice: once to go to the kernel, and once to return to the user process. This will be a significant performance hit: depending on the application, up to 30%.

This is apparently an Intel-specific bug; processors by other manufacturers are not affected. However, they may still suffer the performance hit: The changes to the OS are substantial, and it seems unlikely that OS manufacturers will, in the long-term, maintain two completely different kernel-access strategies for different processors.

Previously: Major Hardware Bug Quietly Being Patched in the Open

A bug that affects Intel processors requires Kernel Page Table Isolation in order to be mitigated:

It is understood the bug is present in modern Intel processors produced in the past decade. It allows normal user programs – from database applications to JavaScript in web browsers – to discern to some extent the layout or contents of protected kernel memory areas.

The fix is to separate the kernel's memory completely from user processes using what's called Kernel Page Table Isolation, or KPTI. At one point, Forcefully Unmap Complete Kernel With Interrupt Trampolines, aka FUCKWIT, was mulled by the Linux kernel team, giving you an idea of how annoying this has been for the developers.

The fix could dramatically lower performance for some workloads:

The Python Sweetness blog, which first reported on the bug, said the vulnerability was first identified by developers working on the Linux kernel, though it also affects Windows operating systems. It added that a number of major security patches for the Linux kernel have been pushed out over the Christmas and New Year holidays, which are likely to be an attempt to fix the new bug. These kernel updates have provided a workaround that can prevent attackers from exploiting the bug, but the problem is that doing so comes at a heavy cost. In technical terms, the fix involves using Kernel Page-Table Isolation or PTI to restrict processes so they can only access their own memory area. However, PTI also affects low-level features in the hardware, resulting in a performance hit of up to 35 percent for older Intel processors.

[...] "Urgent development of a software mitigation is being done in the open and recently landed in the Linux kernel, and a similar mitigation began appearing in NT kernels in November," the Python Sweetness blog reported Monday. "In the worst case the software fix causes huge slowdowns in typical workloads." These slowdowns were highlighted by Brad Spengler, lead developer of grsecurity, which is a set of patches for the Linux kernel which emphasize security enhancements. According to HotHardWare, Spengler said an Intel Core i7-3770S CPU will take a 34 percent performance hit, while the newer Intel Core i7-6700 will run 29 percent slower.

The Linux fix can be disabled, which you will likely want to do if you use an AMD processor.

December 19, 2017: Intel's CEO Just Sold a Lot of Stock

Did Krzanich use that money to buy AMD stock?

[See also: How-To Geek, Phoronix, and Security Week. --martyb]

An Anonymous Coward contributed Prefetch Side-Channel Attacks: Bypassing SMAP and Kernel ASLR.


Original Submission 1

Original Submission 2

posted by Fnord666 on Thursday January 04 2018, @12:48AM
from the eat-more-Bambi dept.

Deer are regularly hunted across the United States, but some people pay exorbitant prices for imported deer meat:

Wintertime is a special time of year at Cafe Berlin, located just a few blocks from the Capitol building in Washington, D.C. This is when they roll out their menu of wild game, such as deer, wild boar, and quail. Regular customers have come to expect it. "They ask, weeks in advance, 'When does the wild game menu start? When does it start?'" says James Watson, one of the restaurant's chefs. And the star of that menu is venison. The restaurant serves venison ribs, venison loin, even venison tartar. It's food that takes your mind back to old European castles, where you can imagine eating like aristocracy.

You won't see venison in ordinary supermarkets. At Wagshall's, a specialty food shop in Washington, I found venison loin selling for $40 a pound. This venison comes from farms, usually from a species of very large deer called red deer. Much of it is imported from New Zealand.

Yet there's a very different side to this luxury meat. Less than two hours drive from Washington, Daniel Crigler has a whole freezer full of venison that he got for free. Crigler's home in central Virginia is surrounded by woodlands full of white-tail deer. For Crigler, they are venison on the hoof. And he loves hunting. "I love the outdoors. I love being out. But I also like to eat the meat," he says, chuckling. It's pretty much the only red meat he eats. And as he shows off the frozen cuts of venison in his freezer, this crusty man reveals his inner epicurean. "That's a whole loin, right there," he says. "What I like to do with that is split it open, fill it full of blue cheese, wrap it up in tin foil and put it on the grill for about an hour and a half."

And here's the odd thing about this meat, so scarce and expensive in big cities; so abundant if you're a hunter in Madison County, Virginia. Hunters like Crigler kill millions of deer every year in America, but the meat from those animals can't be sold: It hasn't been officially approved by meat inspectors. Also, the government doesn't want hunters to make money from poaching. Yet hunters are allowed to give it away, and many do. As a result, venison occupies a paradoxical place in the world of food. It's a luxury food that turns up in notably non-luxurious places.

Related: Arby's is Selling Venison Sandwiches in Six Deer-Hunting States
Deer in Multiple U.S. States Test Positive for Chronic Wasting Disease, Leading to Restrictions


Original Submission