Cyber-confrontation between the U.S. and Russia is increasingly turning to critical civilian infrastructure, particularly power grids, judging from recent press reports. The typically furtive conflict went public last month, when The New York Times reported U.S. Cyber Command's shift to a more offensive and aggressive approach in targeting Russia's electric power grid.
The report drew skepticism from some experts and a denial from the administration, but the revelation led Moscow to warn that such activity presented a "direct challenge" that demanded a response. WIRED magazine the same day published an article detailing growing cyber-reconnaissance on U.S. grids by sophisticated malware emanating from a Russian research institution, the same malware that abruptly halted operations at a Saudi Arabian oil refinery in 2017 during what WIRED called "one of the most reckless cyberattacks in history."
Although both sides have been targeting each other's infrastructure since at least 2012, according to the Times article, the aggression and scope of these operations now seem unprecedented.
[...] Washington and Moscow share several similarities related to cyber-deterrence. Both, for instance, view the other as a highly capable adversary. U.S. officials fret about Moscow's ability to wield its authoritarian power to corral Russian academia, the private sector, and criminal networks to boost its cyber-capacity while insulating state-backed hackers from direct attribution.
Moscow sees an unwavering cyber-omnipotence in the U.S., capable of crafting uniquely sophisticated malware like the 'Stuxnet' virus, all while using digital operations to orchestrate regional upheaval, such as the Arab Spring in 2011. At least some officials on both sides, apparently, view civilian infrastructure as an appropriate and perhaps necessary lever to deter the other.
Whatever their similarities in cyber-targeting, Moscow and Washington faced different paths in developing capabilities and policies for cyberwarfare, due in large part to the two sides' vastly different interpretations of global events and the amount of resources at their disposal.
A gulf in both the will to use cyber-operations and the capacity to launch them separated the two for almost 20 years. While the U.S. military built up the latter, the issue of when and where the U.S. should use cyber-operations failed to keep pace with new capabilities. Inversely, Russia's capacity, particularly within its military, was outpaced by its will to use cyber-operations against perceived adversaries.
[...] By no means should the Kremlin's activity go unanswered. But a leap from disabling internet access for Russia's 'Troll Farm' to threatening to black out swaths of Russia could jeopardize the few fragile norms existing in this bilateral cyber-competition, perhaps leading to expanded targeting of nuclear facilities.
The U.S. is arriving late to a showdown that many officials in Russian defense circles saw coming a long time ago, when U.S. policymakers were understandably preoccupied with the exigencies of counterterrorism and counterinsurgency.
Washington could follow Moscow's lead in realizing that this is a long-term struggle that requires innovative and thoughtful solutions as opposed to reflexive ones. Increasing the diplomatic costs of Russian cyber-aggression, shoring up cyber-defenses, or even fostering military-to-military or working-level diplomatic channels to discuss cyber redlines, however discreetly and unofficially, could present better choices than apparently gambling with the safety of civilians that both sides' forces are sworn to protect.
Submitted via IRC for Bytram
If you're thinking about resetting your Windows PC with a local account, save yourself some frustration and consider upgrading to the Windows 10 May 2019 Update first.
Our experiences with the October 2018 Update nearly convinced us that local accounts were gone for good. They're not, thank goodness, but the Out-of-Box Experience (OOBE) in that version pushes you particularly hard toward using a Microsoft account. We discovered two workarounds, though, to allow you to log in as you wish.
[...] Over time, Microsoft has tacitly encouraged you ever more to create a Microsoft account, but it's never actually blocked you from creating a local one. It comes damn close in the October 2018 Update, however. Even worse, it begs you to connect your PC to the Internet—but never warns you that once you do, the local account option will never be displayed.
In the May 2019 Update, Microsoft seems to have relaxed its tactics. But only a small fraction of users, or about 6 percent, appear to have access to the friendlier version. That estimate comes from AdDuplex, which tracks versioning as part of its ad network. According to AdDuplex, about a third of Windows users remain on the October 2018 Update, also known as 1809.
Microsoft changes up little elements of Windows from time to time, even "A/B" testing some features with some users and not with others. (Generally this happens more often in the Windows 10 Insider program.) PC makers also tweak their own factory-installed builds of Windows 10. In short, Windows 10 experiences differ by user, by PC, and by the version of Windows 10 they've installed.
With many users still stuck on the October 2018 Update or earlier versions, it's worth knowing that you'll probably want to upgrade straight through to the May 2019 Update if you prefer the local account option.
Remember, Microsoft is hoping to attract a billion users to Windows 10, and it's making money by luring them into its services and subscription model. Because a Microsoft account is the best way to do that, it's worth keeping an eye on how Microsoft "encourages" you to sign up and use one.
Rosen Law Firm, a global investor rights law firm, announces it has filed a class action lawsuit on behalf of purchasers of the securities of Netflix, Inc. (NFLX) from April 17, 2019 through July 17, 2019, inclusive (the "Class Period"). The lawsuit seeks to recover damages for Netflix investors under the federal securities laws.
[...] According to the lawsuit, defendants throughout the Class Period made false and/or misleading statements and/or failed to disclose that: (1) Netflix would not be able to gain its expected target number of new subscribers in the second quarter of 2019; (2) Netflix would also lose subscribers from the United States in the second quarter of 2019; and (3) as a result, defendants' public statements were materially false and misleading at all relevant times. When the true details entered the market, the lawsuit claims that investors suffered damages.
This is in addition to the investigation by the Schall Law Firm. I guess Rosen beat them to the punch.
The credit bureau Equifax will pay at least $650 million and potentially significantly more to end an array of state, federal and consumer claims over a data breach two years ago that exposed the sensitive information of more than 148 million people. The breach was one of the most potentially damaging in an ever-growing list of digital thefts.
The settlement, which was announced on Monday and still needs court approval, would be the largest ever paid by a company over a data breach. The deal requires Equifax to put a minimum of $380.5 million into a restitution fund for American consumers who file claims showing that they were financially harmed.
A portion of that money will pay for lawyers' fees, but at least $300 million must go to victims, according to settlement documents filed in federal court in Atlanta. If the initial cash is depleted, the company will add up to $125 million more to settle consumers' claims, bringing the total fund size to more than $500 million.
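The settlement arithmetic quoted above can be checked directly. A quick illustrative calculation (all dollar figures come from the settlement terms as reported):

```python
base_fund = 380.5e6    # minimum initial restitution fund
victim_floor = 300e6   # amount that must go to victims after lawyers' fees
top_up = 125e6         # additional money if the initial cash is depleted

max_fund = base_fund + top_up
print(f"Maximum consumer fund: ${max_fund / 1e6:.1f} million")  # $505.5 million
assert max_fund > 500e6  # consistent with "more than $500 million"
```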
Also at: Ars Technica.
Lawsuits Aim Billions in Fines at Equifax and Ad-Targeting Companies
The True Cost of a Data Breach
Equifax Admits 2.5 Million More Americans Were Affected by Cyber Theft
Equifax Data Breach Could Affect 143 Million Americans [Updated]
Scientists have confirmed that viruses can kill marine algae called diatoms and that diatom die-offs near the ocean surface may provide nutrients and organic matter for recycling by other algae, according to a Rutgers-led study.
The study in the journal Nature Microbiology[$] also revealed that environmental conditions can accelerate diatom mortality from viral infection, which is important for understanding how diatoms influence carbon cycling and respond to changes in the oceans, including warming waters from climate change.
Diatoms, which are single-celled algae that generate about 20 percent of the Earth's oxygen, help store carbon dioxide, a key greenhouse gas, in the oceans.
[...] Diatoms take up dissolved silicon from the environment and turn it into glass for their cell walls. But most of the surface waters where diatoms live have low silicon levels, so these findings suggest viral infection may play an important role in controlling diatom populations globally.
Despite the past year’s global focus on GDPR and other data privacy regulations designed to give consumers more power over their data, more than half (55 percent) of consumers still don’t know how brands are using their data, according to the Acquia survey of more than 1,000 U.S.-based consumers.
On top of that, 65 percent don’t even know which brands are using their data.
Additional key findings from the survey include:
- 59 percent of consumers wait at least a month before sharing any personal data with brands
- 49 percent of respondents are more comfortable giving personal information to brands with a physical store presence
- 65 percent of respondents would stop using a brand that was dishonest about how it was using their data
California’s CCPA data privacy law and Maine’s Internet privacy protection bill, some of the most restrictive in the nation, are standing behind the consumers who want to understand and control their data – and other states are following. Brands trying to reach those consumers will need to act accordingly, and the stakes are high.
Acquia’s research found that consumers are not willing to give brands a second chance to protect the integrity of their data. This means that businesses have only one chance to make sure their customers know that their personal information, and their privacy, is in safe hands.
From the RedHat bug discussion:
A flaw was found in the Linux kernel's implementation of IPMI (remote baseboard access) where an attacker with local access to read /proc/ioports may be able to create a use-after-free condition when the kernel module is unloaded. The use-after-free condition may result in privilege escalation. Investigation is ongoing.
See https://security-tracker.debian.org/tracker/CVE-2019-11811 for a lot of other distro links (the Source section at the top).
Over the last several weeks, some of the most prominent digital companies, like Google, Cloudflare, Amazon, and most recently Apple, have experienced issues with the services they offer. While the types of services these companies provide differ, the common thread between these incidents was that they were a direct result of problems with the Border Gateway Protocol (BGP)—the protocol that, more than any other technology, makes the Internet a reality. Of course, the other commonality across these incidents was that they were quite costly for the affected companies and their users.
BGP events such as these are meticulously investigated and reported, at least internally by each organization and in some cases quite publicly. However, in the aftermath of all the analysis and hand-wringing about the vulnerable state of the Internet, not much ever seems to happen in the big picture to prevent further routing problems from recurring. That is the situation we find ourselves in, decades after BGP's inception.
Now, it’s not that there are no norms or built-in mechanisms for doing and making BGP right on the Internet. Over the years, methods such as maximum prefix limits, Internet Route Registry (IRR) based filtering and Resource Public Key Infrastructure (RPKI) have been defined and implemented. For more information on some of these methods, check out our earlier post on Best Practices to Combat Route Leaks and Hijacks.
Yet all of these best practice methods suffer from the same fundamental limitation—there’s no way to make these practices binding on all the networks that make up the Internet. The only way that best practices grow on the Internet is through social promotion and business pressure.
To that end, RIPE held an RPKI deployathon in March, a much-needed event that gave hands-on experience with RPKI technology to those who needed it most – network engineers and operators. RPKI proponents have been active in raising awareness. In fact, if one positive thing emerged from the recent outages, it was that Border Gateway Protocol protection mechanisms – RPKI especially – got some real exposure.
[...] Even though this was a very small scale and inadvertent event, it showcases how effective RPKI-based route filtering is.
Wide-scale adoption of RPKI will go a long way to cleaning up Internet routing and make it more secure. How can you help? If you’re a provider, implement strict filtering based on RPKI. If you’re an enterprise, put strict routing announcement filtering based on RPKI down as a requirement in your RFIs or RFPs for ISP services. The more market pressure ISPs receive, the more they’ll be motivated to adopt best practices that benefit everyone.
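As a concrete illustration of the RPKI-based filtering discussed above, here is a minimal sketch of origin validation in the spirit of RFC 6811: an announcement is accepted only if a Route Origin Authorization (ROA) covers its prefix with a matching origin AS and an allowed prefix length. The ROAs, prefixes, and ASNs below are hypothetical, and real validators handle much more (IPv6, overlapping ROAs, cache synchronization):

```python
from ipaddress import ip_network

# Hypothetical ROAs: (authorized prefix, max prefix length, origin ASN)
roas = [
    (ip_network("203.0.113.0/24"), 24, 64500),
    (ip_network("198.51.100.0/22"), 24, 64501),
]

def validate(prefix: str, origin_asn: int) -> str:
    """RFC 6811-style origin validation: 'valid', 'invalid', or 'not-found'."""
    announced = ip_network(prefix)
    covered = False
    for roa_prefix, max_len, asn in roas:
        # A ROA is relevant only if it covers the announced prefix...
        if announced.subnet_of(roa_prefix):
            covered = True
            # ...and the route is valid only if the origin AS matches and
            # the announced prefix is no more specific than maxLength allows.
            if asn == origin_asn and announced.prefixlen <= max_len:
                return "valid"
    return "invalid" if covered else "not-found"

print(validate("198.51.100.0/23", 64501))  # valid
print(validate("203.0.113.0/24", 64999))   # invalid (wrong origin AS)
print(validate("192.0.2.0/24", 64500))     # not-found (no covering ROA)
```

A provider doing "strict filtering based on RPKI" would drop routes that evaluate to invalid; the hard policy question is what to do with not-found, which still describes most of the routing table.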
Researchers at the U.S. Department of Energy's Lawrence Berkeley National Laboratory (Berkeley Lab) have developed a graphene device that's thinner than a human hair but has a depth of special traits.
It easily switches from a superconducting material that conducts electricity without losing any energy, to an insulator that resists the flow of electric current, and back again to a superconductor - all with a simple flip of a switch. Their findings were reported today in the journal Nature.
"Usually, when someone wants to study how electrons interact with each other in a superconducting quantum phase versus an insulating phase, they would need to look at different materials. With our system, you can study both the superconductivity phase and the insulating phase in one place," said Guorui Chen, the study's lead author and a postdoctoral researcher in the lab of Feng Wang, who led the study.
Humans will make pets of nearly anything. Unbeknownst to most of us, giant leeches are kept not just by hospitals, but also by loving pet owners who care for and feed them.
"They're amazing, curious creatures that grow like crazy and make wonderful pets," leech keeper Ariane Khomjani told ScienceAlert.
He explained how individual leeches have their own unique personalities, with some being more adventurous and others more shy.
"Some like to try and sneak a feed more often than others, haha! But once they're full, they're content to sit and rest for a bit out of water if handled gently," he said.
Giant leeches of the variety Ariane keeps (the massive Hirudinaria manillensis, or 'buffalo' leech) need only be fed once a year, although feeding them two to three times a year is recommended.
"Leeches are used post-operatively in patients who have had digit reattachment or muscle or flap surgery," nurse Julie Smolders from South Western Sydney Local Health District told ScienceAlert.
"The leeches are applied to the site and suck away the congested blood to allow for blood flow to the peripheries to keep the surgical site viable."
Unfortunately for the leeches, which could be considered 'used needles that can walk', they are classified as 'single-use only' by the FDA and are promptly disposed of after they do their job and drop off. (Humans can be an ungrateful lot.)
If the idea of keeping one of these little Draculas intrigues you, but you've no interest in offering yourself up as a meal, there are various accounts online of pet leeches being fed raw liver or heated blood from the butcher.
Feeding this way potentially allows the little guys to Liv Moore places than if they had to feed exclusively on the living.
Submitted via IRC for SoyCow1984
In a swift 3-0 vote Thursday, a panel of judges in a New York federal appeals court upheld the August 2017 conviction of Martin Shkreli. The infamous ex-pharmaceutical CEO is currently serving a seven-year prison sentence for fraud stemming from what prosecutors had described as a Ponzi-like scheme.
Shkreli, 36, must continue to serve his sentence and also still forfeit more than $7.3 million in assets, the judges affirmed.
The judges' ruling came just three weeks after hearing arguments in the appeal—rather than the normal period of months, Bloomberg notes. The ruling was also an unusually short seven pages.
In it, the panel rejected Shkreli's argument that the judge in his trial, US District Judge Kiyo Matsumoto, confused jurors with the wording of some of their instructions on how to deliberate the case.
"The instruction given here correctly stated the law," the panel said in its decision. "As such, we disagree with Shkreli that exclusion of additional language describing an element not required for the charged crime constituted a prejudicial error."
Quantum information processing promises to be much faster and more secure than what today's supercomputers can achieve, but doesn't exist yet because its building blocks, qubits, are notoriously unstable.
Purdue University researchers are among the first to build a gate - what could be a quantum version of a transistor, used in today's computers for processing information - with qudits. Whereas qubits can exist only in superpositions of 0 and 1 states, qudits exist in multiple states, such as 0 and 1 and 2. More states mean that more data can be encoded and processed.
The gate would not only be inherently more efficient than qubit gates, but also more stable because the researchers packed the qudits into photons, particles of light that aren't easily disturbed by their environment. The researchers' findings appear in npj Quantum Information.
The gate also creates one of the largest entangled states of quantum particles to date - in this case, photons. Entanglement is a quantum phenomenon that allows measurements on one particle to automatically affect measurements on another particle, bringing the ability to make communication between parties unbreakable or to teleport quantum information from one point to another, for example.
The more entanglement in the so-called Hilbert space - the realm where quantum information processing can take place - the better.
Previous photonic approaches were able to reach 18 qubits encoded in six entangled photons in the Hilbert space. Purdue researchers maximized entanglement with a gate using four qudits - the equivalent of 20 qubits - encoded in only two photons.
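The qudit-to-qubit equivalence above is just a dimension count: for four qudits to span the same Hilbert space as 20 qubits, each qudit must have 32 levels, since 32^4 = 2^20 (the 32-level figure is inferred from the numbers quoted, not stated in the excerpt). A quick check:

```python
n_qubits = 20
n_qudits = 4

# Levels per qudit needed so that d**n_qudits == 2**n_qubits
d = 2 ** (n_qubits / n_qudits)
assert d == 32

# Either encoding spans the same 1,048,576-dimensional Hilbert space
assert 32 ** n_qudits == 2 ** n_qubits
print(f"Hilbert space dimension: {32 ** n_qudits:,}")  # 1,048,576
```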
[...] Next, the team wants to use the gate in quantum communications tasks such as high-dimensional quantum teleportation as well as for performing quantum algorithms in applications such as quantum machine learning or simulating molecules.
Rocket scientists at Purdue University in West Lafayette, Indiana have come up with a new approach to plasma thrusters which will potentially increase their reliability and efficiency, making them more suitable for softball-sized nanosatellites, which are becoming more and more common.
Plasma thrusters have traditionally used one of two approaches to fuel. The first is a solid propellant, usually Teflon (polytetrafluoroethylene), that is ablated and vaporized and then passed through a field that accelerates it.
The problem is that this ablation is a hit-and-miss process. The rate is difficult to control, and this can make the thrust non-uniform. Also, the Teflon surface sometimes breaks down and ejects debris in the form of macroparticles that interfere with the engine operation.
What's more, the igniter that triggers the flashover process can become damaged over time. All these problems ultimately limit the efficiency of the solid-fuel plasma thrusters to less than 15%.
The other common way is to store the propellant as a gas. This increases the efficiency of a plasma thruster by up to 70%.
But these systems are bulky and complex, and the gas itself has a significantly larger volume than an equivalent solid mass. That makes it hard to build into a nanosat.
According to lead author Adam Patel, these issues can be addressed by storing the propellant as a liquid, which "could potentially overcome several disadvantages associated with traditional pulsed plasma thruster devices".
The team has built and, using a vacuum chamber, tested a proof-of-principle micro-propulsion system fed by liquid propellant. The liquid they used was pentaphenyl trimethyl trisiloxane (C33H34O2Si3), a viscous liquid with low vapor pressure that is also an excellent dielectric.
The advantage of this kind of igniter is that the threshold voltage is always the same, and so the amount of energy required for flashover is always limited. This limits the potential damage to the flashover assembly over time.
In tests, Patel and co used the igniter for upwards of 1.5 million flashover events without observing any significant damage to the device. Other designs can sometimes fail after only 400 firing cycles.
The test device was able to generate an exhaust velocity of 32 km/s and 5.8 newtons of thrust, making it a potentially (not)solid option for future nanosats.
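For context, the reported exhaust velocity translates into a specific impulse via the standard relation Isp = v_e / g0 (a back-of-the-envelope figure derived here, not quoted from the paper):

```python
g0 = 9.80665         # standard gravity, m/s^2
v_exhaust = 32_000   # reported exhaust velocity, m/s

isp = v_exhaust / g0  # specific impulse, seconds
print(f"Specific impulse: {isp:.0f} s")  # roughly 3263 s
```

That is an order of magnitude beyond chemical rockets (typically 300–450 s), which is exactly why electric propulsion appeals for long-lived smallsats despite its tiny thrust levels.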
arxiv.org/abs/1907.00169 : Liquid-Fed Pulsed Plasma Thruster for Propelling Nanosatellites
Victims of the ZeroFucks ransomware don't have to pay the ransom; they only need to download the decryptor from the link below:
[...] ZeroFucks ransomware encrypts files with AES-256 and replaces the extension in the filename with ".zerofucks" (e.g., "myphoto.jpg" becomes "myphoto.zerofucks").
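The renaming scheme described above amounts to swapping the file's extension, which can be sketched in a couple of lines (illustrative only, based on the article's description; how the malware handles files with multiple dots is an assumption):

```python
from pathlib import Path

def ransomed_name(filename: str) -> str:
    """Return the filename as the ransomware would rename it,
    replacing the final extension with '.zerofucks'."""
    return str(Path(filename).with_suffix(".zerofucks"))

print(ransomed_name("myphoto.jpg"))  # myphoto.zerofucks
```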
The note left on systems infected by this ransomware reads, in part:
"All your important files have been encrypted. If you want your files back, you need to pay €400 in Bitcoins. After the payment is received, we will give you access to unlock your files. Click on the Payment button to get more info." reads ransom note
Emsisoft's Decryptors for these and fifty other ransomware families are available at https://www.emsisoft.com/decrypter/.
If you have an old system or drive lying around that was ransomwared and want to see if there is a free decryptor for it, steps to identify the ransomware and an extensive list of free ransomware decryptors are available at https://heimdalsecurity.com/blog/ransomware-decryption-tools/.
Google Chrome 76 will close a loophole that websites use to detect when people use the browser's Incognito Mode.
Over the past couple of years, you may have noticed some websites preventing you from reading articles while using a browser's private mode. The Boston Globe began doing this in 2017, requiring people to log in to paid subscriber accounts in order to read in private mode. The New York Times, Los Angeles Times, and other newspapers impose identical restrictions.
Chrome 76 - which is in beta now and is scheduled to hit the stable channel on July 30 - prevents these websites from discovering that you're in private mode. Google explained the change yesterday in a blog post titled, "Protecting private browsing in Chrome."
Today, some sites use an unintended loophole to detect when people are browsing in Incognito Mode. Chrome's FileSystem API is disabled in Incognito Mode to avoid leaving traces of activity on someone's device. Sites can check for the availability of the FileSystem API and, if they receive an error message, determine that a private session is occurring and give the user a different experience.
With the release of Chrome 76 scheduled for July 30, the behavior of the FileSystem API will be modified to remedy this method of Incognito Mode detection.
Using the Chrome 76 beta today, I confirmed that the Boston Globe, New York Times, and Los Angeles Times were unable to detect that my browser was in private mode. However, all three sites were able to detect private mode in Safari for Mac, Firefox, and Chrome 75.
Google acknowledged that websites might find new loopholes to detect private mode, but it pledged to close those, too. "Chrome will likewise work to remedy any other current or future means of Incognito Mode detection," Google's blog post said. [...]