


posted by Fnord666 on Monday March 02 2020, @11:15PM   Printer-friendly
from the welcome-to-the-new-world-order dept.

Arthur T Knackerbracket has found the following story:

Until the 1980s, big companies in America tended to take a paternalistic attitude toward their workforce. Many CEOs took pride in taking care of everyone who worked on their corporate campuses. Business leaders loved to tell stories about someone working their way up from the mailroom to a C-suite office.

But this began to change in the 1980s. Wall Street investors demanded that companies focus more on maximizing returns for shareholders. An emerging corporate orthodoxy held that a company should focus on its "core competence"—the one or two functions that truly set it apart from other companies—while contracting out other functions to third parties.

Often, companies found they could save money this way. Big companies often pay above the market rate for routine services like cleaning offices, answering phones, staffing a cafeteria, or working on an assembly line. Putting these services out for competitive bid helped the companies get these functions completed at rock-bottom rates, while avoiding the hassle of managing employees. It also saved them from having to pay the same generous benefits they offered to higher-skilled employees.

Of course, the very things that made the new arrangement attractive for big companies made it lousy for the affected workers. Not only were companies trying to spend less money on these services, but now there were companies in the middle taking a cut. Once a job got contracted out, it was much less likely to become a first step up the corporate ladder. It's hard to work your way up from the mailroom if the mailroom is run by a separate contracting firm.

[...] The existence of such a two-tier workplace is especially ironic in Silicon Valley, a region that takes pride in its egalitarian ethos. Former Google CEO Eric Schmidt gave a remarkably candid assessment of the situation in 2012, in a statement quoted by author Chrystia Freeland.

"Many tech companies solved this problem by having the lowest-paid workers not actually be employees. They’re contracted out," Schmidt said. "We can treat them differently, because we don’t really hire them. The person who’s cleaning the bathroom is not exactly the same sort of person. Which I find sort of offensive, but it is the way it’s done."


Original Submission

posted by Fnord666 on Monday March 02 2020, @09:24PM   Printer-friendly
from the holding-your-water dept.

A dam right across the North Sea: A defense against climate change, but primarily a warning:
[Ed Note: English version of the story follows the Dutch version - Fnord666]

A 475-km-long dam between the north of Scotland and the west of Norway and another one of 160 km between the west point of France and the southwest of England could protect more than 25 million Europeans against the consequences of an expected sea level rise of several metres over the next few centuries. The costs, 250-500 billion euros, are "merely" 0.1% of the gross national product, annually over 20 years, of all the countries that would be protected by such a dam. That's what Dr Sjoerd Groeskamp, oceanographer at the Royal Netherlands Institute for Sea Research, calculated together with his Swedish colleague Joakim Kjellsson at GEOMAR in Kiel, Germany, in a paper published this month in the Bulletin of the American Meteorological Society. 'Besides being a possible solution, the design of such an extreme dam is mainly a warning', says Groeskamp. 'It reveals the immensity of the problem hanging over our heads.'
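To make the scale of that figure concrete, here is a rough back-of-the-envelope check; the combined GNP number is back-derived from the quoted percentages, not taken from the paper:

```python
# Rough check of the quoted cost: 250-500 billion euros spread over 20
# years, said to be ~0.1% per year of the protected countries' combined
# GNP. The implied GNP below is back-derived, not from the paper.
for total_cost in (250e9, 500e9):
    annual_cost = total_cost / 20
    implied_gnp = annual_cost / 0.001  # 0.1% per year
    print(f"{total_cost / 1e9:.0f}B euros total -> "
          f"implied combined GNP ~{implied_gnp / 1e12:.1f}T euros/year")
```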

[...] The authors acknowledge that the consequences of this dam for North Sea wildlife would be considerable. 'The tide would disappear in a large part of the North Sea, and with it the transport of silt and nutrients. The sea would eventually even become a freshwater lake. That will drastically change the ecosystem and therefore have an impact on the fishing industry as well', Groeskamp elaborates.

[...] Ultimately, the description of this extreme dam is more of a warning than a solution, Groeskamp states. 'The costs and the consequences of such a dam are huge indeed. However, we have calculated that the cost of doing nothing against sea level rise will ultimately be many times higher. This dam makes it almost tangible what the consequences of the sea level rise will be; a sea level rise of 10 metres by the year 2500 according to the bleakest scenarios. This dam is therefore mainly a call to do something about climate change now. If we do nothing, then this extreme dam might just be the only solution.'

Sjoerd Groeskamp, Joakim Kjellsson. NEED: The Northern European Enclosure Dam for if climate change mitigation fails. Bulletin of the American Meteorological Society, 2020; DOI: 10.1175/BAMS-D-19-0145.1


Original Submission

posted by martyb on Monday March 02 2020, @07:36PM   Printer-friendly
from the cat-and-mouse dept.

Don't run your 2FA authenticator app on these smartphones:

Aaron Turner and Georgia Weidman emphasized that using authenticator apps, such as Authy or Google Authenticator, in two-factor authentication was better than using SMS-based 2FA. But, they said, an authenticator app is useless for security if the underlying mobile OS is out-of-date or the mobile device is otherwise insecure.

[...] The problem is that if an attacker or a piece of mobile malware can get into the kernel of iOS or Android, then it can do anything it wants, including presenting fake authenticator-app screens.
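The codes themselves offer no protection at that point: a standard TOTP authenticator simply derives six digits from a stored shared secret and the current time, entirely on-device, so kernel-level malware can read the secret or spoof the screen. A minimal sketch of that computation per RFC 6238 (the base32 secret below is an invented example, not a real credential):

```python
# Minimal TOTP sketch (RFC 6238/4226): everything an authenticator app
# needs lives on the device, which is why a compromised kernel defeats it.
import base64
import hmac
import struct
import time

def totp(secret_b32: str, interval: int = 30, digits: int = 6) -> str:
    key = base64.b32decode(secret_b32)
    counter = int(time.time()) // interval          # 30-second time step
    digest = hmac.new(key, struct.pack(">Q", counter), "sha1").digest()
    offset = digest[-1] & 0x0F                      # dynamic truncation
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10**digits).zfill(digits)

print(totp("JBSWY3DPEHPK3PXP"))  # invented example secret
```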

[...] And don't think iOS devices are safer than Android ones -- they're not. There are just as many known exploits for either one, and Weidman extracted the encryption keys from an older iPhone in a matter of seconds onstage.

The iPhone's Secure Enclave offers "some additional security, but the authenticator apps aren't using those elements," said Weidman, founder and chief technology officer of Washington-area mobile security provider Shevirah, Inc. "iOS is still good, but Android's [security-enhanced] SELinux is the bane of my existence as someone who's building exploits."

"We charge three times as much for an Android pentest than we charge for an iOS one," Turner said, referring to an exercise in which hackers are paid by a company to try to penetrate the company's security. "Fully patched Android is more difficult to go after."

[...] In short, "we need to move away from usernames and passwords," Turner said.

[...] Turner said, "I am fundamentally opposed to using biometrics because it's non-revocable," citing a famous case from Malaysia in which a man's index finger was cut off by a gang to steal the man's fingerprint-protected Mercedes. "Fingerprint readers are biometric toys."

The only form of two-factor authentication without security problems right now, Turner said, is a hardware security key such as a Yubikey or Google Titan key.

"I've got two Yubikeys on me right now," Turner said. "Hardware separation is your friend."


Original Submission

posted by Fnord666 on Monday March 02 2020, @05:41PM   Printer-friendly
from the droned-out-for-a-second dept.

New FAA drone rule is a giant middle finger to aviation hobbyists:

More than 34,000 people have deluged the Federal Aviation Administration with comments over a proposed regulation that would require almost every drone in the sky to broadcast its location over the Internet at all times. The comments are overwhelmingly negative, with thousands of hobbyists warning that the rules would impose huge new costs on those who simply wanted to continue flying model airplanes, home-built drones, or other personally owned devices.

"These regulations could kill a hobby I love," wrote Virginian Irby Allen Jr. in a comment last week. "RC aviation has brought my family together and if these regulations are enacted we will no longer be able to fly nor be able to afford the hobby."

The new regulations probably wouldn't kill the hobby of flying radio-controlled airplanes outright, but they could do a lot of damage. Owners of existing drones and model airplanes would face new restrictions on when and where they could be used. The regulations could effectively destroy the market for kit aircraft and custom-designed drones by shifting large financial and paperwork burdens onto the shoulders of consumers.

"I think it's going to be harmful to the community and harmful to the growth of the UAS industry," said Greg Reverdiau, co-founder of the Pilot Institute, in a Friday phone interview. He wrote a point-by-point critique of the FAA proposal that has circulated widely among aviation hobbyists.

The new rules are largely designed to address safety and security concerns raised by law enforcement agencies. They worry that drones flying too close to an airport could disrupt operations or even cause a crash. They also worry about terrorists using drones to deliver payloads to heavily populated areas.

To address these concerns, the new FAA rule would require all new drones weighing more than 0.55 pounds to connect over the Internet to one of several location-tracking databases (still to be developed by private vendors) and provide real-time updates on their location. That would enable the FAA or law enforcement agencies to see, at a glance, which registered drones are in any particular area.
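Since the vendor databases were still to be developed, no actual API existed at the time; the rule only envisions some kind of periodic position report. A purely hypothetical sketch of what such a report might contain (every field name here is invented):

```python
# Purely hypothetical sketch of the kind of real-time position report
# the proposed rule envisions; the field names and format are invented,
# since the vendor tracking databases did not yet exist.
import json
import time

report = {
    "registration_id": "FA0000000000",  # placeholder FAA registration
    "timestamp": time.time(),
    "lat": 38.8977,
    "lon": -77.0365,
    "altitude_m": 120.0,
}
print(json.dumps(report))  # would be sent to a vendor-run database
```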

But critics say the rules impose massive costs on thousands of law-abiding Americans who have been quietly flying model airplanes, quad-copters, and other small unmanned aircraft for years—and in many cases decades.


Original Submission

posted by Fnord666 on Monday March 02 2020, @03:50PM   Printer-friendly
from the how-many-vulnerabilities-did-they-add? dept.

Intel Patched Over 230 Vulnerabilities in Its Products in 2019:

Intel patched over 230 vulnerabilities in its products last year, but less than a dozen impacted its processors, according to the company's 2019 Product Security Report.

Intel said it learned of 236 vulnerabilities in 2019, including 144 discovered internally by its employees. Internally discovered issues included 61% of the vulnerabilities rated high severity, and 75% of those rated critical. In total, 4 flaws were rated critical and 81 were classified as high severity.

Three quarters of the vulnerabilities reported by external researchers were submitted through the company's bug bounty program.

"Combining Bug Bounty and internally found vulnerabilities, the data shows that 91% of the issues addressed are the direct result of Intel's investment in product assurance," the company wrote in its report.

The chip maker says only 11 vulnerabilities affected its CPUs, with an average yearly CVSS score of 5.02. This includes the MDS flaws named ZombieLoad, Fallout and RIDL.

"As acknowledged by security researchers and industry experts, side-channel issues are difficult to exploit and often require a level of access to the target system that would afford would be attackers more efficient and reliable methods of obtaining and exfiltrating information," Intel explained.

Of the 236 vulnerabilities found last year in the company's products, 112 affected software, 59 affected firmware and 13 impacted hardware. In the case of 52 vulnerabilities, patching required both software and firmware updates.
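Assuming those four categories are disjoint, they account for the full total:

```python
# The four categories quoted above (software-only, firmware-only,
# hardware, and combined software+firmware fixes), assumed disjoint,
# sum exactly to the 236 vulnerabilities Intel reported for 2019.
software_only, firmware_only, hardware, sw_plus_fw = 112, 59, 13, 52
print(software_only + firmware_only + hardware + sw_plus_fw)  # 236
```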


Original Submission

posted by Fnord666 on Monday March 02 2020, @01:59PM   Printer-friendly
from the playing-tonsil-hockey dept.

Study reveals link between income inequality and French kissing:

Income inequality may be linked to how often people French kiss, according to a worldwide study by Abertay University.

The cross-cultural research involved 2,300 participants from 13 different countries across six continents.

Respondents answered a range of questions including how often they French kissed their partner, and how important they thought kissing was.

Their study revealed that people who lived in less equal nations said they kissed their partners more often.

This correlation did not extend to other forms of intimacy such as hugging and sexual intercourse.

[...] Lead researcher Dr. Christopher Watkins, from Abertay's Division of Psychology, said: "The results of this research suggest that the environment we live in is related to differences in this particular form of romantic intimacy.

"French kissing has been shown by others to be related to the quality of a romantic relationship, and our data suggests that we do this more in environments where we have less to fall back on, where a gesture which shows commitment to a relationship would be of greater value.

"Another interesting factor is that, across the nations surveyed, kissing was considered more important at the established phase of a relationship compared to the initial stages of romantic attraction."

The study also found differences in opinions between men and women on the importance of kissing, and about what makes a good kiss.

More information:
Christopher D. Watkins et al. National income inequality predicts cultural variation in mouth-to-mouth kissing. Scientific Reports (2019). DOI: 10.1038/s41598-019-43267-7


Original Submission

posted by martyb on Monday March 02 2020, @12:09PM   Printer-friendly
from the cost-of-doing-business dept.

FCC issues wrist-slap fines to carriers that sold your phone-location data:

The big four mobile carriers face fines of between $12 million and $91 million each for selling their customers' real-time location data to third-party data brokers without customer consent, Federal Communications Commission Chairman Ajit Pai's office announced today.

These are "proposed" fines, meaning the carriers can dispute them and try to get them reduced or eliminated. The proposed fines are $91 million for T-Mobile, $57 million for AT&T, $48 million for Verizon, and $12 million for Sprint. That's a total of $208 million.

The FCC announcement said the carriers' punishments are for "apparently selling access to their customers' location information without taking reasonable measures to protect against unauthorized access to that information." The FCC said it also "admonished these carriers for apparently disclosing their customers' location information, without their authorization, to a third party."

Pai said that the FCC has taken "strong enforcement action" with today's proposed fines. But the two Democrats on the Republican-majority commission said the fines are too low and criticized the Pai-led FCC for secrecy during the investigation.

"The FCC's investigation is a day late and a dollar short," Democratic Commissioner Jessica Rosenworcel said in a statement. "The FCC kept consumers in the dark for nearly two years after we learned that wireless carriers were selling our location information to shady middlemen."

[...] Relative to the carriers' collective revenue, the fines are "a slap on the wrist amounting to less than one one-thousandth of their annual take," consumer-advocacy group Free Press said. Revenue in calendar year 2019 was $181.2 billion for AT&T; $131.9 billion for Verizon; $45 billion for T-Mobile; and $32.5 billion for Sprint.
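Free Press's fraction checks out against the quoted figures:

```python
# Checking Free Press's "less than one one-thousandth" claim against
# the proposed fines and 2019 revenues quoted above (billions of USD).
fines = {"T-Mobile": 0.091, "AT&T": 0.057, "Verizon": 0.048, "Sprint": 0.012}
revenue = {"AT&T": 181.2, "Verizon": 131.9, "T-Mobile": 45.0, "Sprint": 32.5}

total_fines = sum(fines.values())      # $0.208B, i.e. $208 million
total_revenue = sum(revenue.values())  # $390.6B
print(f"{total_fines / total_revenue:.4%}")  # ~0.0533%, under one thousandth
```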


Original Submission

posted by janrinok on Monday March 02 2020, @10:15AM   Printer-friendly
from the one-for-you-and-one-for-me-and-one-for... dept.

[Update 2020-03-02 08:34:00 UTC. Full disclosure: SoylentNews uses Let's Encrypt certificates.--martyb]

HTTPS for all: Let's Encrypt reaches one billion certificates issued:

Let's Encrypt, the Internet Security Research Group's free certificate signing authority, issued its first certificate a little over four years ago. Today, it issued its billionth.

The ISRG's goal for Let's Encrypt is to bring the Web up to a 100% encryption rate. When Let's Encrypt launched in 2015, the idea was pretty outré—at that time, a bit more than a third of all Web traffic was encrypted, with the rest being plain text HTTP. There were significant barriers to HTTPS adoption—for one thing, it cost money. But more importantly, it cost a significant amount of time and human effort, both of which are in limited supply.

Let's Encrypt solved the money barrier by offering its services free of charge. More importantly, by establishing a stable protocol to access them, it enabled the Electronic Frontier Foundation to build and provide Certbot, an open source, free-to-use tool that automates the process of obtaining certificates, installing them, configuring webservers to use them, and automatically renewing them.
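Renewal automation ultimately reduces to a simple question: how many days remain on the certificate currently being served? A minimal sketch of that check (the hostname is a placeholder; real ACME clients such as Certbot track local renewal state rather than probing the live server):

```python
# Minimal sketch of the expiry check behind automated certificate
# renewal; "example.com" is a placeholder hostname.
import datetime
import socket
import ssl

def cert_days_remaining(host: str, port: int = 443) -> int:
    ctx = ssl.create_default_context()
    with socket.create_connection((host, port)) as sock:
        with ctx.wrap_socket(sock, server_hostname=host) as tls:
            cert = tls.getpeercert()
    expires = datetime.datetime.strptime(cert["notAfter"],
                                         "%b %d %H:%M:%S %Y %Z")
    return (expires - datetime.datetime.utcnow()).days

if cert_days_remaining("example.com") < 30:   # Certbot's default window
    print("renew now, e.g. by invoking an ACME client")
```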

When Let's Encrypt launched in 2015, domain-validated certificates could be had for as little as $9/year—but the time and effort required to maintain them was a different story. A certificate needed to be purchased, information needed to be filled out in several forms, then one might wait for hours before even cheap domain-validated certificates would be issued.

Once the certificate was issued, it (and its key, and any chain certificates necessary) needed to be downloaded, then moved to the server, then placed in the right directory, and finally the Web server could be reconfigured for SSL.

Every one to three years, you'd need to do the whole thing over again—perhaps only replacing the certificate and key, perhaps also replacing or adding new intermediate chain certificates.

The whole thing was (and is) frankly, a mess... and can easily result in downtime if an infrequently practiced procedure doesn't run smoothly.

[...] In June of 2017, Let's Encrypt was two years old and served its ten millionth certificate. The Web had gone from under 40% HTTPS to—in the United States—64% HTTPS, and Let's Encrypt was servicing 46 million websites.

Today, Let's Encrypt's billionth certificate has been issued, it services 192 million websites, and the United States' portion of the Internet is a whopping 91-percent encrypted. The project manages this on nearly the same staff and budget it did in 2017—it has gone from 11 full-time staff and a $2.61 million budget then to 13 full-time staff and a $3.35 million budget today.

None of this would be possible without a commitment to automation and open standards. We gushed about how easy the EFF's Certbot makes it to deploy and renew Let's Encrypt certificates—but that contribution is only possible because of Let's Encrypt's own focus on standardizing an open protocol, ACME, that anyone can write a client for.

In addition to building and publishing a stable, capable protocol, Let's Encrypt put in the work to submit and ratify it with the Internet Engineering Task Force (IETF), resulting in RFC 8555.


Original Submission

posted by janrinok on Monday March 02 2020, @08:32AM   Printer-friendly
from the they-don't-make-em-like-they-used-to dept.

Telstra, one of the main ISPs providing internet access over Australia's newly built NBN network, has declared that 100Mbps plans will no longer be sold, since those speeds cannot actually be delivered. This change follows the determination that the NBN cannot provide the speeds promised. With the original plan in tatters after the Liberal government downgraded the network components to a "Multi Technology Mix", many customers lack the physical connections needed to receive the full speeds. While some of the initial rollout delivered fibre to the premises, the Liberal government switched the rollout to copper and existing cable systems, with many customers connecting via FTTN, which leaves a lot to be desired in terms of speed. Farewell 100Mbps, we hardly knew you.

No large scale infrastructure plan survives contact with an incoming government.


Original Submission

posted by Fnord666 on Monday March 02 2020, @04:09AM   Printer-friendly
from the yes-we-have-no-bananas-but-no-we-do-have-your-face dept.

Ars Technica has a "review" of the new Amazon Go Grocery store in Seattle, WA.

Apparently, the author's first thought was to engage in some petty theft, given that there are no cashiers or visible security guards.

The article is fairly verbose, with lots of photos of the crime scene store. Overall, the new store is just like the original Amazon Go stores, but with extra surveillance features.

From the Ars article:

Because Amazon Go Grocery revolves around the same creepy, watch-you-shop system found in smaller Amazon Go shops, I encourage anyone unfamiliar with the concept to rewind to my first look at Amazon Go from early 2018. Functionally, the newest store works identically. You can't enter the shop without entering your Amazon account credentials—complete with a valid payment method—into the Amazon Go app on either iOS or Android. Which, of course, means you can't enter the store without an Internet-connected smart device.

Once the app has your Amazon information, it will generate a unique QR code. Tap this onto a gated kiosk's sensor, and after a pause, a gate will open. During this brief pause, the shop's cameras capture your likeness and begin tracking your every step and action.
[...]
Where AGG differs is its selection, which is simply bigger and more diverse. Instead of limiting its healthiest options to pre-made meals, AGG goes further to include a refrigerated wall of raw meat and seafood, a massive stock of fruits, and a wall of veggies. The latter receives the same automated water-spritzing process you'd expect from a standard grocer. (See? Amazon knows how lettuce works.)
[...]
The store's massive bathroom hallway is lined with sensors and cameras, but the bathrooms themselves do not appear to have any form of camera or sensor inside them. (I didn't take photos inside the bathroom, because I'm not DrDisrespect. You'll have to trust me on that one.) The hallway also includes a little tray outside each bathroom door where customers are encouraged to put merchandise before using the facilities. I left the only other produce in my hand at that time, a single avocado, on that tray.
[...]
This moment included a dramatic turn to the bathroom's mirror, which is when a lightbulb went off in my head. I had taken off my jacket and put it into the backpack before entering the shop. Could I confuse the cameras with a wardrobe change?

It sure seems like it.
[...]
Surprisingly, then, my "costume change" fooled Amazon Go Grocery. Everything I picked up before ducking into the loo was charged correctly. After that, the app clearly lost track of me, which may align with the receipt's claim of a 2-hour, 23-minute shopping trip, well above the 20 minutes I was actually there. And Amazon needed another hour and a half to conclude that I had picked up those first items, ducked into a bathroom, and then was incapacitated by a jacket-wearing madman with an identical beard and haircut. I hope they catch that guy. He might be armed—with a banana!

So, what say you, Soylentils? Is this the future of grocery stores? Should the author be arrested and charged with shoplifting? Would you go into a store like this just so you don't have to deal with cashiers (human or automated)?

I encourage (against current best practices) reading TFA, as I left out quite a bit of detail and the many photos have descriptive text as well. Just a crazy thought.


Original Submission

posted by janrinok on Monday March 02 2020, @02:18AM   Printer-friendly
from the in-flight-film dept.

Here's a quick overview of "documentaries" to watch before, during and after a pandemic:

The Andromeda Strain film: An early Michael Crichton adaptation which came before the Westworld film and series and the Jurassic Park film series. Like many Michael Crichton stories, factual science is extended with credible speculation. In this case, a prion-like infection has killed almost everyone in a village and the survivors are seemingly unrelated. The film is best known for its cartoonish but very photogenic indoor set which serves as the backdrop of a Level 4 Biolab. Such elaborate sets were common in the era. (Other examples include Rollerball and disaster parody/Airplane predecessor, The Big Bus.) The film features concurrent action which was a common experimental film technique in the 1960s but, nowadays, is most commonly associated with Kiefer Sutherland in the 24 series. There is also a lesser-known mini-series.

Outbreak: A rather dull film which nevertheless provides a graphic portrayal of an uncontained pandemic and towns being quarantined. It would be marginally improved if the antagonists were re-cast. Possible source material for DeepFaking.

The Resident Evil film series: These films considerably advanced the tropes of amoral corporation, rogue artificial intelligence as antagonist, reality within reality, experimentation without informed consent and the horror of a medicalized vampire/zombie rabies virus. The Red Queen versus White Queen subplot dovetails with Alice in Wonderland, prey versus predator evolution and Umbrella Corp's red and white logo which - in a case of life mirroring art - was copied by Wuhan's Level 4 Biolab. Many people prefer the competent and detailed Resident Evil series of computer games which are arguably better than the Half Life series or the SCP game.

28 Days Later, 28 Weeks Later, 28 Months Later: Gritty films which borrow liberally from Resident Evil and a rich seam of British, Cold War, post-apocalyptic drama, such as Survivors. The most iconic scenes of the film - Central London without people - weren't composited or closed for filming. They were merely shot at dawn, on a Sunday morning, around the summer solstice. The first film may have influenced Black Mirror episodes: The National Anthem and White Bear.

World War Z: If Dan Brown wrote zombie stories, they'd be like World War Z. Brad Pitt's character dashes to a number of exotic locations and is swept up by events such as panic buying and stampedes from an unspecified infection which has an incubation period of about 20 seconds. (A duration which appears to have been chosen to maximize tension while staying within the one-minute beat sheet of the Hero's Journey monomyth.) It builds upon the zombie mythology of Resident Evil and 28 Days Later. However, it is undermined by cheap grading, cheap compositing and reliance on flocking software.

Contagion: I haven't seen this one. It may or may not involve Gwyneth Paltrow, Harvey Weinstein, an Oscars acceptance speech, Seth MacFarlane and a vaginal steamer. Well, it probably relieves the itching of genital herpes.

Cloverfield 1, 2, 3 and 4: Found footage genre which is heavily influenced by the Blair Witch Project (obviously), Godzilla films, The Day the Earth Stood Still, any B-movie with a military Jeep and medicalized zombie films. The first film has fantastic compositing which may be of particular interest to anyone working on an augmented reality horror game.

Bad Taste: Peter Jackson's first commercial film. It almost went unfinished due to the ridiculous ending, which blatantly copies from a film which - to prevent a spoiler - I won't mention. A particularly low-budget effect was used for gun flash. Specifically, the original 16mm footage was poked with a pin. Regardless, if you've seen the carnage of Peter Jackson's orc battles then you may be curious to see the carnage of Peter Jackson's zombie fights. Additionally, Bad Taste works particularly well as a drinking game.

George A. Romero and John A. Russo films: These classics brought zombies out of a largely undifferentiated mess of vampire/mummy/voodoo/swamp monster B-movies. Unfortunately, they have been overshadowed by escalating gore and violence. This leaves each classic looking more like an extended episode of the A-Team.

Pride and Prejudice and Zombies: Anything is more exciting with zombies and Jane Austen's dull novel certainly benefits. This wilfully inaccurate historical drama, a genre shared with Abraham Lincoln: Vampire Hunter and Snow White and the Seven Samurai, features square dancing and blunderbuss zombie carnage.

Any better suggestions out there?


Original Submission

posted by janrinok on Monday March 02 2020, @12:36AM   Printer-friendly
from the or-social-media dept.

First Amendment doesn't apply on YouTube; judges reject PragerU lawsuit:

YouTube is a private forum and therefore not subject to free-speech requirements under the First Amendment, a US appeals court ruled today. "Despite YouTube's ubiquity and its role as a public-facing platform, it remains a private forum, not a public forum subject to judicial scrutiny under the First Amendment," the court said.

PragerU, a conservative media company, sued YouTube in October 2017, claiming the Google-owned video site "unlawfully censor[ed] its educational videos and discriminat[ed] against its right to freedom of speech."

PragerU said YouTube reduced its viewership and revenue with "arbitrary and capricious use of 'restricted mode' and 'demonetization' viewer restriction filters." PragerU claimed it was targeted by YouTube because of its "political identity and viewpoint as a non-profit that espouses conservative views on current and historical events."

But a US District Court judge dismissed PragerU's lawsuit against Google and YouTube, and a three-judge panel at the US Court of Appeals for the 9th Circuit upheld that dismissal in a unanimous ruling today.

"PragerU's claim that YouTube censored PragerU's speech faces a formidable threshold hurdle: YouTube is a private entity. The Free Speech Clause of the First Amendment prohibits the government—not a private party—from abridging speech," judges wrote.

PragerU claimed that Google's "regulation and filtering of video content on YouTube is 'State action' subject to scrutiny under the First Amendment." While Google is obviously not a government agency, PragerU pointed to a previous appeals-court ruling to support its claim that "[t]he regulation of speech by a private party in a designated public forum is 'quintessentially an exclusive and traditional public function' sufficient to establish that a private party is a 'State actor' under the First Amendment." PragerU claims YouTube is a "public forum" because YouTube invites the public to use the site to engage in freedom of expression and because YouTube representatives called the site a "public forum" for free speech in testimony before Congress.

Appeals court judges were not convinced. They pointed to a Supreme Court case from last year in which plaintiffs unsuccessfully "tested a theory that resembled PragerU's approach, claiming that a private entity becomes a state actor through its 'operation' of the private property as 'a public forum for speech.'" The case involved public access channels on a cable TV system.

The Supreme Court in that case found that "merely hosting speech by others is not a traditional, exclusive public function and does not alone transform private entities into state actors subject to First Amendment constraints."

"If the rule were otherwise, all private property owners and private lessees who open their property for speech would be subject to First Amendment constraints and would lose the ability to exercise what they deem to be appropriate editorial discretion within that open forum," the Supreme Court decision last year continued.


Original Submission

posted by Fnord666 on Sunday March 01 2020, @10:19PM   Printer-friendly
from the only-way-left-is-to-hack-your-own-brain dept.

Science Daily reports on a new study published in the journal Scientific Reports that details how researchers have created a hybrid neural network allowing both biological and artificial neurons to communicate across the internet.

During the study, researchers based at the University of Padova in Italy cultivated rat neurons in their laboratory, whilst partners from the University of Zurich and ETH Zurich created artificial neurons on silicon microchips. The virtual laboratory was brought together via an elaborate setup controlling nanoelectronic synapses developed at the University of Southampton. These synaptic devices are known as memristors.

The Southampton-based researchers captured spiking events being sent over the internet from the biological neurons in Italy and then distributed them to the memristive synapses. Responses were then sent onward to the artificial neurons in Zurich, also in the form of spiking activity. The process simultaneously works in reverse too, from Zurich to Padova. Thus, artificial and biological neurons were able to communicate bidirectionally and in real time.
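To picture the "spiking events" being shuttled around, here is a toy leaky integrate-and-fire neuron, a standard textbook model; the parameters are invented for illustration and are not taken from the paper:

```python
# Toy leaky integrate-and-fire neuron, to illustrate the kind of
# spiking events relayed between the labs; parameters are invented
# for illustration and are not taken from the paper.
import numpy as np

def lif_spike_times(input_current, dt=1e-3, tau=0.02,
                    v_thresh=1.0, v_reset=0.0):
    v, spikes = 0.0, []
    for step, i_in in enumerate(input_current):
        v += (dt / tau) * (-v + i_in)   # membrane leaks toward the input
        if v >= v_thresh:               # threshold crossing emits a spike
            spikes.append(step * dt)
            v = v_reset
    return spikes

rng = np.random.default_rng(0)
print(lif_spike_times(rng.uniform(0.0, 2.5, 1000))[:5])  # spike times (s)
```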

According to Themis Prodromakis, Professor of Nanotechnology and Director of the Centre for Electronics Frontiers at the University of Southampton,

"We are very excited with this new development. On one side it sets the basis for a novel scenario that was never encountered during natural evolution, where biological and artificial neurons are linked together and communicate across global networks; laying the foundations for the Internet of Neuro-electronics. On the other hand, it brings new prospects to neuroprosthetic technologies, paving the way towards research into replacing dysfunctional parts of the brain with AI chips."

Article Reference
University of Southampton. (2020, February 26). New study allows brain and artificial neurons to link up over the web. ScienceDaily. Retrieved March 1, 2020 from www.sciencedaily.com/releases/2020/02/200226110843.htm

Journal Reference:
Alexantrou Serb, Andrea Corna, Richard George, Ali Khiat, Federico Rocchi, Marco Reato, Marta Maschietto, Christian Mayr, Giacomo Indiveri, Stefano Vassanelli, Themistoklis Prodromakis. Memristive synapses connect brain and silicon spiking neurons. Scientific Reports, 2020; 10 (1) DOI: 10.1038/s41598-020-58831-9

[Ed. Note - Original article from the University of Southampton can be found here.]


Original Submission

posted by Fnord666 on Sunday March 01 2020, @07:58PM   Printer-friendly
from the who-could-have-guessed? dept.

Printer toner linked to genetic changes, health risks in new study:

Getting printer toner on your hands is annoying. Getting it in your lungs may be dangerous.

According to a new study by West Virginia University researcher Nancy Lan Guo, the microscopic toner nanoparticles that waft from laser printers may change our genetic and metabolic profiles in ways that make disease more likely. Her findings appear in the International Journal of Molecular Sciences.

"The changes are very significant from day one," said Guo, a professor in the School of Public Health and member of the Cancer Institute.

[...] "In particular, there is one group I really think should know about this: pregnant women. Because once a lot of these genes are changed, they get passed on through the generations. It's not just you."

On the same days that the researchers assessed the rats' genes, they also measured every metabolite available in their blood.

[...] The metabolic levels that the researchers detected reinforced their other findings. The same health risks that the genetic profiles pointed to were implicated by the metabolic profiles as well.

Building on these results, Guo and her colleagues have since investigated the genomic changes that Singaporean printing company workers have experienced. In many respects, the workers' genomes changed the same ways the rats' genomes did. The results from these workers are included in a manuscript ready for submission to a journal.

"And they're very young," Guo said. "A lot of the workers ranged from 20 to their early 30s, and you're already starting to see all of these changes.

"We have to work, right? Who doesn't have a printer nowadays, either at home or at the office? But now, if I have a lot to print, I don't use the printer in my office. I print it in the hallway."

Nancy Lan Guo et al. Integrated Transcriptomics, Metabolomics, and Lipidomics Profiling in Rat Lung, Blood, and Serum for Assessment of Laser Printer-Emitted Nanoparticle Inhalation Exposure-Induced Disease Risks. International Journal of Molecular Sciences, 2019; 20 (24): 6348. DOI: 10.3390/ijms20246348


Original Submission

posted by janrinok on Sunday March 01 2020, @05:37PM   Printer-friendly
from the uninspiring dept.

Boeing acknowledges "gaps" in its Starliner software testing:

On Friday, during a detailed, 75-minute briefing with reporters, a key Boeing spaceflight official sought to be as clear as possible about the company's troubles with its Starliner spacecraft.

After an uncrewed test flight in December of the spacecraft, Boeing "learned some hard lessons," said John Mulholland, a vice president who manages the company's commercial crew program. The December mission landed safely but suffered two serious software problems. Now, Mulholland said, Boeing will work hard to rebuild trust between the company and the vehicle's customer, NASA. During the last decade, NASA has paid Boeing a total of $4.8 billion to develop a safe capsule to fly US astronauts to and from the International Space Station.

At the outset of the briefing, Mulholland sought to provide information about the vehicle's performance, including its life support systems, heat shield, guidance, and navigation. He noted that there were relatively few issues discovered. However, when he invited questions from reporters, the focus quickly turned to software. In particular, Mulholland was asked several times how the company made decisions on procedures for testing flight software before the mission—which led to the two mistakes.

He struggled to answer those questions, but the Boeing VP said the reason was not financial. "It was definitely not a matter of cost," Mulholland said. "Cost has never been in any way a key factor in how we need to test and verify our systems."

The first software error occurred when the spacecraft captured the wrong "mission elapsed time" from its Atlas V launch vehicle—it was supposed to pick up this time during the terminal phase of the countdown, but instead it grabbed data 11 hours off of the correct time. This led to a delayed push to reach orbit and caused the vehicle's thrusters to expend too much fuel. As a result, Starliner did not dock with the International Space Station.
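A toy illustration of why a stale epoch is so damaging: every time-based event is computed relative to it, so the vehicle believes it is hours into the mission at liftoff. This is invented Python, not Boeing flight software; only the 11-hour offset comes from the account above.

```python
# Toy illustration of the mission-elapsed-time (MET) bug described
# above; invented Python, not Boeing flight software.
from datetime import datetime, timedelta

liftoff = datetime(2019, 12, 20, 11, 36)      # illustrative epoch
stale_epoch = liftoff - timedelta(hours=11)   # clock captured too early

def met(now, epoch):
    """Mission elapsed time relative to the captured epoch."""
    return now - epoch

now = liftoff + timedelta(minutes=31)         # near the insertion burn
print("correct MET:", met(now, liftoff))      # 0:31:00
print("stale MET:  ", met(now, stale_epoch))  # 11:31:00 -> burn "missed"
```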

The second error, caught and fixed just a few hours before the vehicle returned to Earth through the atmosphere, was due to a software mapping error that would have caused thrusters on Starliner's service module to fire in the wrong manner. Specifically, after the service module separated from the capsule, it would not have performed the burn needed to put it on a safe disposal trajectory. Instead, Starliner's thrusters would have fired such that the service module and crew capsule could have collided.

NASA and Boeing have been conducting a joint assessment of these software problems, and they're expected to report their findings in a week, on March 6. But on Friday, Mulholland was prepared to discuss two issues with Boeing's software verification that the company intends to fix.

First of all, he acknowledged the company did not run integrated, end-to-end tests for the whole mission. For example, instead of running a software test that encompassed the roughly 48-hour period from launch through docking to the station, Boeing broke the test into chunks. The first chunk ran from launch through the point at which Starliner separated from the second stage of the Atlas V booster. Unfortunately for Boeing engineers, the mission elapsed timing error occurred just after this point in time. "If we would have run the integrated test through the first orbital insertion burn time frame, we would have seen that we missed the burn," Mulholland said.


Original Submission