
SoylentNews is people



Site News

Join our Folding@Home team:
Main F@H site
Our team page


Funding Goal
For 6-month period:
2021-01-01 to 2021-06-30
(All amounts are estimated)
Base Goal:
$3500.00

Currently:
$1096.15
31.3%

Covers transactions:
2021-01-01 06:28:29 ..
2021-04-13 15:27:03 UTC
(SPIDs: [1509..1557])
Last Update:
2021-04-14 14:00:32 UTC --martyb


Support us: Subscribe Here
and buy SoylentNews Swag


We always have a place for talented people; visit the Get Involved section on the wiki to see how you can make SoylentNews better.

What is the resolution of the screen on the primary system you use to access SoylentNews?

  • 1024x768
  • 1280x720
  • 1600x1200
  • 1920x1080
  • 2560x1440
  • 3840x2160
  • Other, specify in comments
  • I don't have a computer you insensitive clod!

[ Results | Polls ]
Comments:88 | Votes:283

posted by martyb on Monday April 05, @09:54PM

New Play Store rules block most apps from scanning your entire app list:

Google is finally taking steps to limit which applications can scan the list of apps installed on your device.

Google has announced another privacy restriction for Play Store apps. Starting this summer, Android 11's new QUERY_ALL_PACKAGES permission will be flagged as "sensitive" on the Play Store, meaning Google's review process will restrict it to apps the company feels really need it. QUERY_ALL_PACKAGES lets an app read your entire app list, which can reveal all sorts of sensitive information, like your dating preferences, banking information, password management, political affiliation, and more, so it makes sense to lock it down.

On a support page, Google announced, "Apps that have a core purpose to launch, search, or interoperate with other apps on the device may obtain scope-appropriate visibility to other installed apps on the device." Google has another page that lists allowable use cases for Play Store apps querying your app list, including "device search, antivirus apps, file managers, and browsers." The page adds that "apps that must discover any and all installed apps on the device, for awareness or interoperability purposes may have eligibility for the permission." For apps that have to interact with other apps, Google wants developers to use more scoped app-discovery APIs (for instance, all apps that support x feature) instead of just pulling the entire app list.
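
In practice, "scoped" discovery means asking the package manager only about apps that match a specific capability instead of enumerating everything. The following is a minimal illustrative sketch (not from the article) of what that might look like on Android 11 and later; the PDF-viewer use case and function name are hypothetical, and on API level 30+ the app's manifest would also need a matching <queries> declaration for these intents to be visible:

    import android.content.Context
    import android.content.Intent

    // Illustrative sketch: discover only the apps that can handle a specific
    // intent (here, viewing PDFs) rather than requesting QUERY_ALL_PACKAGES
    // and reading the user's entire app list.
    fun findPdfViewers(context: Context): List<String> {
        val intent = Intent(Intent.ACTION_VIEW).setType("application/pdf")
        return context.packageManager
            .queryIntentActivities(intent, 0)
            .map { it.activityInfo.packageName }
    }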

There's also an exception for financial apps like banking apps and P2P wallets, which the page says "may obtain broad visibility into installed apps solely for security-based purposes." We assume this means scanning for root apps. The new policy also states that "[a]pp inventory data queried from Play-distributed apps may never be sold nor shared for analytics or ads monetization purposes."

When will this apply to all applications?

The QUERY_ALL_PACKAGES permission was added in Android 11, so it only applies to apps targeting Android 11's API level, "API level 30." The Play Store's restrictions, naturally, also only apply to apps targeting API level 30 and up, which probably isn't many apps right now. Shortly after Android 11 turns one year old, though (in November 2021), the Play Store will make API level 30 the minimum API level for updating apps, so the permission and the new restrictions will apply to every currently maintained app in the store.


Original Submission

posted by martyb on Monday April 05, @07:27PM
from the abusing-GitHub-for-fun-and-profit dept.

GitHub Actions being actively abused to mine cryptocurrency on GitHub servers

GitHub Actions is currently being abused by attackers to mine cryptocurrency using GitHub's servers in an automated attack.

GitHub Actions is a CI/CD solution that makes it easy to automate all your software workflows and set up periodic tasks.

This particular attack adds malicious GitHub Actions code to repositories forked from legitimate ones, then files a Pull Request asking the original repository maintainers to merge the code back. However, no action by the maintainer of the legitimate project is required for the attack to succeed.

BleepingComputer also observed that the malicious code loads a crypto miner, misleadingly named npm.exe, from GitLab and runs it with the attacker's wallet address. Additionally, after initially reporting on this incident, BleepingComputer has come across copycat attacks targeting more GitHub projects in this manner.

Here is how it works:

The attack involves first forking a legitimate repository that has GitHub Actions enabled. It then injects malicious code into the forked version and files a Pull Request for the original repository maintainers to merge the code back. But, in an unexpected twist, the attack does not need the maintainer of the original project to approve the malicious Pull Request.

Perdok says that merely filing the Pull Request is enough for the attacker to trigger the attack. This is especially true for GitHub projects that have automated workflows set up to validate incoming Pull Requests via Actions. As soon as a Pull Request is created for the original project, GitHub's systems execute the attacker's code, which instructs GitHub servers to retrieve and run a crypto miner.

It looks like the validation of the Pull Request is what triggers execution of the cryptominer. I wonder how long GitHub Actions will run a task before killing it?


Original Submission

posted by martyb on Monday April 05, @06:48PM

We had two Soylentils write in with this breaking news. See other reports at Ars Technica, BBC, and c|net.

Supreme Court rules in Google's favor in copyright dispute with Oracle

Supreme Court rules in Google's favor in copyright dispute with Oracle over Android software:

The Supreme Court on Monday sided with Google against Oracle in a long-running copyright dispute over the software used in Android, the mobile operating system.

The court's decision was 6-2. Justice Amy Coney Barrett, who was not yet confirmed by the Senate when the case was argued in October, did not participate in the case.

The case concerned about 12,000 lines of code, copied from the Java application programming interface developed by Sun Microsystems (which Oracle acquired in 2010), that Google used to build Android. It was seen as a landmark dispute over what types of computer code are protected under American copyright law.

Oracle had claimed at points to be owed as much as $9 billion, while Google claimed that its use of the code was covered under the doctrine of fair use and therefore not subject to copyright liability. Android is the most popular mobile operating system in the world.

See also:
Supreme Court hands Google a victory in a multibillion-dollar case against Oracle

In addition to resolving a multibillion-dollar dispute between the tech titans, the ruling helps affirm a longstanding practice in software development. But the Court declined to weigh in on the broader question of whether APIs are copyrightable.

Justices wary of upending tech industry in Google v. Oracle Supreme Court fight

Several of the other justices, including Chief Justice John Roberts, suggested they were sympathetic to Oracle's copyright claims.

Still, they appeared reluctant to rule in Oracle's favor because of arguments made by leading computer scientists and Microsoft, in friend-of-the-court briefs, that doing so could upend the industry.

GOOGLE LLC v. ORACLE AMERICA, INC.

https://www.supremecourt.gov/opinions/20pdf/18-956_d18f.pdf

Held: Google's copying of the Java SE API, which included only those lines of code that were needed to allow programmers to put their accrued talents to work in a new and transformative program, was a fair use of that material as a matter of law. Pp. 11–36.


Original Submission #1 | Original Submission #2

posted by mrpg on Monday April 05, @05:00PM
from the sure-why-not dept.

An Exploding Star 65 Light-Years Away From Earth May Have Triggered a Mass Extinction:

Life was trying, but it wasn't working out. As the Late Devonian period dragged on, more and more living things died out, culminating in one of the greatest mass extinction events our planet has ever witnessed, approximately 359 million years ago.

The culprit responsible for so much death may not have been local, scientists say. In fact, it might not have even come from our Solar System.

[...] In their new work, Fields and his team explore the possibility that the dramatic decline in ozone levels coinciding with the Late Devonian extinction might not have been a result of volcanism or an episode of global warming.

Instead, they suggest it's possible the biodiversity crisis exposed in the geological record could have been caused by astrophysical sources, speculating that the radiation effects from a supernova (or multiple) approximately 65 light-years from Earth may have been what depleted our planet's ozone to such disastrous effect.

Journal Reference:
Brian D. Fields, Adrian L. Melott, John Ellis, et al. Supernova triggers for end-Devonian extinctions [open], Proceedings of the National Academy of Sciences (DOI: 10.1073/pnas.2013774117)


Original Submission

posted by mrpg on Monday April 05, @02:27PM
from the ohoh dept.

Evidence of Antarctic glacier's tipping point confirmed for first time:

Researchers have confirmed for the first time that Pine Island Glacier in West Antarctica could cross tipping points, leading to a rapid and irreversible retreat which would have significant consequences for global sea level.

Pine Island Glacier is a region of fast-flowing ice draining an area of West Antarctica approximately two thirds the size of the UK. The glacier is a particular cause for concern as it is losing more ice than any other glacier in Antarctica.

Currently, Pine Island Glacier, together with its neighbouring Thwaites Glacier, is responsible for about 10% of the ongoing increase in global sea level.

Scientists have argued for some time that this region of Antarctica could reach a tipping point and undergo an irreversible retreat from which it could not recover. Such a retreat, once started, could lead to the collapse of the entire West Antarctic Ice Sheet, which contains enough ice to raise global sea level by over three metres.

Journal Reference:
Rosier, Sebastian H. R., Reese, Ronja, Donges, Jonathan F., et al. The tipping points and early warning indicators for Pine Island Glacier, West Antarctica [open], The Cryosphere (DOI: 10.5194/tc-15-1501-2021)


Original Submission

posted by mrpg on Monday April 05, @12:00PM
from the don't-get-too-close-to-earth-buddy dept.

Interstellar visitor Borisov could be 1st truly pristine comet yet seen:

Comet 2I/Borisov is the 2nd known object to pass near our sun from outside our solar system. Its 2019 pass near our sun might have been its first-ever interaction with a star. If so, it's among the most pristine, or unspoiled, objects yet known.

We know of only two interstellar visitors – that is, visitors from other star systems – to our solar system. They are 1I/'Oumuamua and 2I/Borisov. 'Oumuamua gets a lot of press as a strangely-shaped traveler that might be anything from a piece of an exoplanet to an alien spacecraft. The lesser known 2I/Borisov is more clearly a comet that might have originated near a red dwarf star. Its chemical signature suggests it may never have interacted with a star before.


Original Submission

posted by mrpg on Monday April 05, @09:30AM
from the I-[heart]-u dept.

Even with regular exercise, astronaut's heart left smaller after a year in space:

DALLAS – March 29, 2021 – With NASA preparing to send humans to Mars in the 2030s, researchers are studying the physical effects of spending long periods in space. Now a new study by scientists at UT Southwestern shows that the heart of an astronaut who spent nearly a year aboard the International Space Station shrank, even with regular exercise, although it continued to function well.

The results were comparable with what the researchers found in a long-distance swimmer who spent nearly half a year trying to cross the Pacific Ocean.

The study, published today in Circulation, reports that astronaut Scott Kelly, now retired, lost an average of 0.74 grams – about three-tenths of an ounce – per week in the mass of his heart's left ventricle during the 340 days he spent in space, from March 27, 2015, to March 1, 2016. This occurred despite a weekly exercise regimen of six days of cycling, treadmill, or resistance work.
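
To put that weekly figure in perspective, here is a quick back-of-the-envelope calculation using only the numbers quoted above (simple arithmetic for illustration, not a figure from the study):

    fun main() {
        val gramsPerWeek = 0.74
        val missionDays = 340
        // 0.74 g/week sustained over 340 days works out to roughly 36 g
        // of left-ventricular mass over the full mission
        val totalLossGrams = gramsPerWeek * missionDays / 7.0
        println("Approximate total loss: %.1f g".format(totalLossGrams))
    }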

Journal Reference:
James P. MacNamara, Katrin A. Dias, Satyam Sarma, et al. Cardiac Effects of Repeated Weightlessness During Extreme Duration Swimming Compared With Spaceflight, Circulation (DOI: 10.1161/CIRCULATIONAHA.120.050418)


Original Submission

posted by mrpg on Monday April 05, @06:54AM
from the memorize-your-IPs dept.

Microsoft outage caused by overloaded Azure DNS servers

Microsoft has revealed that Thursday's worldwide outage was caused by a code defect that allowed the Azure DNS service to become overwhelmed and not respond to DNS queries.

At approximately 5:21 PM EST on Thursday, Microsoft experienced a global outage that prevented users from accessing or signing into numerous services, including Xbox Live, Microsoft Office, SharePoint Online, Microsoft Intune, Dynamics 365, Microsoft Teams, Skype, Exchange Online, OneDrive, Yammer, Power BI, Power Apps, OneNote, Microsoft Managed Desktop, and Microsoft Streams.

The outage was so widespread within Microsoft's infrastructure that even their Azure status page, which is used to provide outage info, was inaccessible.

Microsoft eventually resolved the outage at approximately 6:30 PM EST, with some services taking a bit longer to start functioning properly again. At the time, Microsoft stated that the outage was caused by a DNS issue but did not provide further information.


Original Submission

posted by martyb on Monday April 05, @04:32AM

Malware attack is preventing car inspections in eight US states:

A malware cyberattack on emissions testing company Applus Technologies is preventing vehicle inspections in eight states, including Connecticut, Georgia, Idaho, Illinois, Massachusetts, Utah, and Wisconsin.

On Tuesday, March 30th, vehicle emissions testing platform Applus Technologies suffered a "malware" attack that caused them to disconnect their IT systems.

"Unfortunately, incidents such as this are fairly common and no one is immune," said Darrin Greene, CEO of the US entity, Applus Technologies, Inc. "We apologize for any inconvenience this incident may cause. We know our customers and many vehicle owners rely on our technology and we are committed to restoring normal operations as quickly as possible."

[...] "Unfortunately, we cannot provide a timetable. We do know it will not be a matter of hours or days. We will routinely update the return to service status as additional information becomes available. It is important to note that we want to make sure we have resolved all issues before restarting the system in order to avoid any additional delays or inconvenience once the program is back up and running."

Status updates are available at https://www.applustech.com/.

So, they identified 7 of the 8 affected states... would it have been that much harder to list them all? What was the 8th state?


Original Submission

posted by Fnord666 on Sunday April 04, @11:49PM

Latest EmDrive tests at Dresden University show "Impossible Engine" does not develop any thrust

After tests in NASA laboratories had initially stirred up hope that the so-called EmDrive could represent a revolutionary, fuel-free alternative for space propulsion, the sobering final reports on the results of intensive tests and analyses of three EmDrive variants by physicists at the Dresden University of Technology (TU Dresden) are now available. Grenzwissenschaft-Aktuell.de (GreWi) has exclusively interviewed the head of the studies, Prof. Dr. Martin Tajmar, about the results.

As the team led by Prof. Tajmar reported last weekend at the "Space Propulsion Conference 2020 + 1" (postponed due to the coronavirus pandemic) and published in three accompanying papers in the "Proceedings of Space Propulsion Conference 2020 + 1" (Paper 1, Paper 2, Paper 3), they had to confirm the previously discussed interim results: the EmDrive does not develop the thrust previously observed by other teams (such as NASA's Eagleworks). The team also confirmed that the thrust forces measured earlier can be explained by external effects, which Tajmar and colleagues have now demonstrated using a highly sensitive experimental and measurement setup.

Regarding the work on the classical EmDrive, Prof. Tajmar reports to GreWi editor Andreas Müller:

"We found out that the cause of the 'thrust' was a thermal effect. For our tests, we used NASAs EmDrive configuration from White et al. (which was used at the Eagleworks laboratories, because it is best documented and the results were published in the 'Journal of Propulsion and Power'."


Original Submission

posted by martyb on Sunday April 04, @07:04PM
from the we're-telling-you-the-hole-truth dept.

Qubits comprised of holes could be the trick to build faster, larger quantum computers:

[...] [According to A/Prof Dimi Culcer (UNSW/FLEET), who led the theoretical roadmap study,] "Our theoretical studies show that a solution is reached by using holes, which can be thought of as the absence of an electron, behaving like positively-charged electrons."

In this way, a quantum bit can be made robust against charge fluctuations stemming from the solid background.

Moreover, the 'sweet spot' at which the qubit is least sensitive to such noise is also the point at which it can be operated the fastest.

"Our study predicts such a point exists in every quantum bit made of holes and provides a set of guidelines for experimentalists to reach these points in their labs," says Dimi.

Reaching these points will facilitate experimental efforts to preserve quantum information for as long as possible. This will also provide strategies for 'scaling up' quantum bits – i.e., building an 'array' of bits that would work as a mini-quantum computer.

"This theoretical prediction is of key importance for scaling up quantum processors and first experiments have already been carried out," says Prof Sven Rogge of the Centre for Quantum Computing and Communication Technology (CQC2T)."

"Our recent experiments on hole qubits using acceptors in silicon already demonstrated longer coherence times than we expected," says A/Prof Joe Salfi of the University of British Columbia. "It is encouraging to see that these observations rest on a firm theoretical footing. The prospects for hole qubits are bright indeed."

Journal Reference:
Zhanning Wang, Elizabeth Marcellina, Alex. R. Hamilton, et al. Optimal operation points for ultrafast, highly coherent Ge hole spin-orbit qubits [open], npj Quantum Information (DOI: 10.1038/s41534-021-00386-2)


Original Submission

posted by martyb on Sunday April 04, @02:19PM
from the it's-just-pining-for-the-fjords dept.

Some of you may remember the (in)famous SCO vs IBM lawsuit, filed back in March 2003, which claimed that Linux contained code stolen from Unix, and that Unix was owned by SCO. (Neither of those claims was true, and the falsity of the latter was proven in court first by a bench trial, then, after an appeal, by a jury, then, after another appeal, just because, by another judge.)

The story is very long, and I can't tell it here. SCO claimed for years to have mountains of evidence. Never showed any. After protestations from IBM, the court ordered SCO three times to produce its evidence; the third and final order was Dec 22, 2005. Eventually the court began knocking out the legs from SCO's purported "case", and trial was finally set to begin Monday, September 17, 2007. After boasting loudly for years that it wanted its day in court, SCO declared bankruptcy on the Friday afternoon preceding the Monday trial. How can a company remain in bankruptcy for so many years, until this very day!? Good question. It's stuck at an appeals court that hasn't touched it in years. The docket alone is hundreds(!) of boxes. I'm sure no court unfamiliar with this long and complex case is very eager to have to read through that docket first. This case is firmly in Jarndyce and Jarndyce territory.

At some point in bankruptcy, the court separated the assets from the litigation. The assets went in one direction (Xinuos), and the lawsuit went in the other direction -- thus keeping the assets now out of reach of any possible counter claim damages from IBM.

Yesterday (March 31, 2021) Xinuos filed a new lawsuit against IBM and Red Hat.[1] [2] [3]

When I saw this a few hours ago this morning, I wondered whether it was an April Fools' joke. But it appears to be real. I have not read the complaint yet. I don't know if I have the stomach for it after watching SCO for so many years. (This "ongoing" case had its 18th anniversary last month, and it's now in its 19th year.)

I am confident that both IBM and Red Hat (the latter recently acquired by the former) can defend themselves. Especially since IBM has all of the materials from the ongoing case.

The entire Groklaw site is still available for research into the nauseating history.[4] It is also archived in the Library of Congress.[5]

Footnotes:
[1] Xinuos sues IBM
[2] SCO Linux FUD returns from the dead
[3] Xinuos Sues IBM and Red Hat for Antitrust Violations and Copyright Infringement, Alleges IBM Has Been Misleading its Investors Since 2008
[4] Groklaw - Wikipedia
[5] Groklaw - Digging for Truth


Original Submission

posted by martyb on Sunday April 04, @09:34AM
from the FABulous-spending dept.

TSMC to Spend $100B on Fabs and R&D Over Next Three Years: 2nm, Arizona Fab & More

TSMC this week has announced plans to spend $100 billion on new production facilities as well as R&D over the next three years. The world's largest contract maker of chips says that its fabs are currently working at full load, so to meet demand for its services going forward it will need (much) more capacity. Among TSMC's facilities to go online in the next three to four years are the company's fab in Arizona as well as its first 2nm-capable fab in Taiwan.

[...] TSMC's capital expenditures (CapEx) budget last year was $17.2 billion, whereas its R&D budget was $3.72 billion, or approximately 8.2% of its revenue. This year the company intends to increase its CapEx to somewhere in the range of $25 to $28 billion, which would make for a 45% to 62% year-over-year increase in that spending. The company's R&D spending will also rise as its revenue is expected to grow. In total, TSMC plans to invest around $30 billion or more on CapEx and R&D this year. Taken altogether, if the company intends to spend around $100 billion from 2021 through 2023, its expenditures in the next two years will be roughly flat with 2021, something that should please its investors.
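
A quick back-of-the-envelope check of that "roughly flat" claim, using only the figures quoted above (the even split between 2022 and 2023 is an assumption for illustration):

    fun main() {
        val totalPlanBillions = 100.0   // planned CapEx + R&D for 2021 through 2023
        val spend2021Billions = 30.0    // "around $30 billion or more" in 2021
        // Assume the remainder is split evenly across 2022 and 2023
        val perYearBillions = (totalPlanBillions - spend2021Billions) / 2.0
        // ~35 billion per year, close to 2021's ~30 billion
        println("Implied 2022/2023 spending: about $perYearBillions billion per year")
    }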

SK Hynix to Build $106 Billion Fab Cluster: 800,000 Wafer Starts a Month

Capping off a busy week for fab-related news, South Korean authorities this week gave SK Hynix a green light to build a new, 120 trillion won ($106.35 billion) fab complex. The fab cluster will be primarily used to build DRAM for PCs, mobile devices, and servers, using process technologies that rely on extreme ultraviolet lithography (EUV). The first fab in the complex will go online in 2025.

[...] The new fabs will be used to make various types of DRAM using SK Hynix's upcoming production technologies that will use extreme ultraviolet (EUV) lithography. And with a start date still years away, we're likely looking at a fab that will be used to manufacture DDR5, LPDDR5X, and other future types of DRAM.

See also: TSMC bumps spending up 50% to meet increased demand


Original Submission

posted by martyb on Sunday April 04, @04:49AM
from the all-the-better-to-track-you-with? dept.

Pixel 6 will be powered by new Google-made 'Whitechapel' chip

9to5Google can report today that Google's upcoming phones for this fall, including the presumed Pixel 6, will be among the first devices to run on the "GS101" Whitechapel chip.

[...] First rumored in early 2020, Whitechapel is an effort on Google's part to create its own systems on a chip (SoCs) to be used in Pixel phones and Chromebooks alike, similar to how Apple uses its own chips in the iPhone and Mac. Google was said to be co-developing Whitechapel with Samsung, whose Exynos chips rival Snapdragon processors in the Android space.

Per that report, Google would be ready to launch devices with Whitechapel chips as soon as 2021. According to documentation viewed by 9to5Google, this fall's Pixel phones will indeed be powered by Google's Whitechapel platform.

[...] Putting it all together, this fall's Made by Google phones will not use chips made by Qualcomm, but will instead be built on Google's own Whitechapel hardware platform with assistance from Samsung.

Also at The Verge and XDA Developers.


Original Submission

posted by martyb on Sunday April 04, @12:03AM
from the Anthropomorphise-much? dept.

Coronavirus Variant Found in France Can Evade PCR Nasal-Swab Tests:

The French ministry of health and social affairs announced Monday that among a cluster of 79 COVID-19 cases in Brittany, eight patients were infected with the new variant, but several of them tested negative.

[...] The new variant does not yet have an alphanumeric designation. But it's not the first variant that appears able to evade testing. Finnish researchers announced last month that they had identified a strain named Fin-796H with a mutation that made it difficult to detect with some nasal-swab tests, too.

An inability to accurately diagnose infected people could make it harder to curtail the virus's spread at a time when cases across Europe are already spiking.

[...] The standard molecular lab tests — known as reverse transcription polymerase chain reaction (RT-PCR) tests — hunt for an infection in a swab from a patient's nose, looking for the coronavirus's genetic code.

But according to the French Health Directorate, genetic sequencing revealed that the variant found in Brittany has several mutations on its spike protein that help it evade detection by these diagnostic tests.

Health officials in Brittany eventually confirmed some of the cases caused by the new variant by either testing the patients' blood for antibodies or collecting samples of phlegm the patients coughed up from inside their lungs and running those through an RT-PCR test.

[...] one European diagnostics company, the Novacyt Group, announced Thursday that its PCR tests can successfully detect the new variant.


Original Submission