From the following story:
Amazon has still not provided any useful information or insights into the DDoS attack that took down swathes of websites last week, so let's turn to others that were watching.
One such company is digital monitoring firm Catchpoint, which sent us its analysis of the attack in which it makes two broad conclusions: that Amazon was slow in reacting to the attack, and that tardiness was likely the result of its looking in the wrong places.
Even though cloud providers go to some lengths to protect themselves, the DDoS attack shows that even a company as big as Amazon is vulnerable. Not only that, but thanks to the way that companies use cloud services these days, the attack has a knock-on impact.
"A key takeaway is the ripple effect impact when an outage happens to a third-party cloud service like S3," Catchpoint noted.
The attack targeted Amazon's S3 - Simple Storage Service - which provides object storage through a web interface. It did not directly target the larger Amazon Web Services (AWS) but for many companies the end result was the same: their websites fell over.
[...] Amazon responded by rerouting packets through a DDoS mitigation service run by Neustar, but it took hours for the company to respond. Catchpoint says its first indications that something was up came five hours before Amazon seemingly noticed; it saw "anomalies" that should have served as early warning signs.
When it had resolved the issue, Amazon said the attack happened "between 1030 and 1830 PST," but Catchpoint's system shows highly unusual activity from 0530. We should point out that Catchpoint sells monitoring services for a living so it has plenty of reasons to highlight its system's efficacy, but that said, assuming the graphic we were given [PDF] is accurate - and we have double-checked with Catchpoint - it does appear that Amazon was slow to recognize the threat.
Catchpoint says the problem is that Amazon - like many other organizations - is using an "old" way of measuring what's going on: monitoring its own systems rather than the impact on users.
"It is critical to primarily focus on the end-user," Catchpoint argues. "In this case, if you were just monitoring S3, you would have missed the problem (perhaps, being alerted first by frustrated users)."
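Catchpoint's end-user-first approach can be sketched in a few lines. The monitor below is purely illustrative (the probe data, window size, and threshold are invented, not Catchpoint's actual method): it treats externally measured response times as the signal and flags sharp departures from the recent baseline as early warnings.

```python
# Sketch of end-user-focused synthetic monitoring: probe the service from
# outside and alert on user-visible anomalies, not just internal metrics.
# Window size, threshold, and sample latencies are illustrative assumptions.
from statistics import mean, stdev

def detect_anomalies(latencies_ms, window=5, z_threshold=3.0):
    """Flag probes whose latency deviates sharply from the recent baseline."""
    alerts = []
    for i in range(window, len(latencies_ms)):
        baseline = latencies_ms[i - window:i]
        mu, sigma = mean(baseline), stdev(baseline)
        if sigma and (latencies_ms[i] - mu) / sigma > z_threshold:
            alerts.append(i)  # a user-visible slowdown at this probe
    return alerts

# Simulated per-minute response times (ms): steady ~100 ms, then a spike
# of the kind Catchpoint says appeared hours before Amazon reacted.
probes = [101, 99, 102, 98, 100, 103, 97, 850, 900]
print(detect_anomalies(probes))  # [7]: the first spike is flagged immediately
```

A real deployment would probe from many vantage points and alert on error rates and availability as well as latency.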
-- submitted from IRC
Updated information on the Mars Insight Lander's Mole Mission.
As previously reported, the burrowing instrument on the Mars Insight Lander dubbed the 'mole' ran into trouble back in February.
Various efforts since, including most recently applying pressure to the instrument with the lander's arm scoop, were undertaken to help the little instrument out, and the most recent effort seemed to be succeeding: the mole managed another 3cm of progress, indicating that it had not encountered an impenetrable rock layer after all.
Sadly for the little spade that should, over the weekend the NASA InSight team tweeted the following discouraging news:
"Mars continues to surprise us. While digging this weekend the mole backed about halfway out of the ground. Preliminary assessment points to unexpected soil properties as the main reason. Team looking at next steps."
The stick-like probe is supposed to dig its way down to a depth of about 5 meters and take temperature readings.
An image of the issue is here.
Submitted via IRC for Fnord666
PHP Bug Allows RCE on NGINX Servers
A buffer underflow bug in PHP could allow remote code-execution (RCE) on targeted NGINX servers.
First discovered during a Capture the Flag competition in September, the bug (CVE-2019-11043) exists in the FastCGI handling used in some PHP implementations on NGINX servers, according to researchers at Wallarm.
PHP powers about 30 percent of modern websites, including popular web platforms like WordPress and Drupal – but NGINX servers are only vulnerable if they have PHP-FPM enabled (a non-default optimization feature that allows servers to execute scripts faster). The issue is patched in PHP versions 7.3.11, 7.2.24 and 7.1.33, which were released last week.
In a Monday posting, Wallarm researchers said that the bug can be exploited by sending specially crafted packets to the server by using the “fastcgi_split_path_info” directive in the NGINX configuration file. That directive is configured to process user data, such as a URL. If an attacker creates a special URL that includes a “%0a” (newline) byte, the server will send back more data than it should, which confuses the FastCGI mechanism.
“In particular, [the bug can be exploited] in a fastcgi_split_path_info directive and a regexp trick with newlines,” according to Wallarm security researcher Andrew Danau, who found the bug. “Because of %0a character, NGINX will set an empty value to this variable, and fastcgi+PHP will not expect this... [as a result], it’s possible to put [in] arbitrary FastCGI variables, like PHP_VALUE.”
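The regex trick Danau describes can be seen with the pattern commonly paired with fastcgi_split_path_info. This is a sketch of the matching failure only, not the exploit itself; the pattern and URLs below are illustrative:

```python
import re

# A typical fastcgi_split_path_info pattern: capture the script name and
# PATH_INFO. Note that '.' does not match a newline by default.
SPLIT_RE = re.compile(r'^(.+?\.php)(/.*)$')

normal = "/index.php/foo"
# A crafted request embeds %0a; after URL-decoding it holds a raw newline.
crafted = "/index.php/\nPHP_VALUE anything"

print(SPLIT_RE.match(normal).groups())  # ('/index.php', '/foo')
print(SPLIT_RE.match(crafted))          # None: the newline breaks the match,
                                        # leaving the PATH_INFO variable empty
```

That unexpectedly empty variable is the condition PHP-FPM mishandles, which is what lets an attacker smuggle in FastCGI variables such as PHP_VALUE.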
Submitted via IRC for Fnord666
New method promises advances in 3D printing, manufacturing and biomedical applications
Although both 3D printers and traditional manufacturers already use droplets to carefully add material to their products, the new jet method offers greater flexibility and precision than standard techniques, the researchers said. For example, delivering droplets with jets allows for extremely small sizes and allows designers to change droplet sizes, shapes and dispersion, as well as patterns of droplets, on the fly.
"A key aspect is the simplicity of the method," said Pierre-Thomas Brun, an assistant professor of chemical and biological engineering at Princeton and the lead researcher. "You draw something on the computer, and you can create it."
In an article published Oct. 28 in the Proceedings of the National Academy of Sciences, the researchers describe how to control the dispersion of drops from a thin jet of liquid. They were able to inject calibrated droplets of glycerin into a liquid polymer to demonstrate placement over three dimensions -- a key requirement for manufacturing. By curing the polymer, the researchers were able to affix the droplets in desired locations. Although the researchers used glycerin for the experiment, they said the method would work with a wide variety of substances commonly used in manufacturing and research.
Journal Reference:
Lingzhi Cai, Joel Marthelot, and P.-T. Brun. Unbounded microfluidics: Rayleigh-Plateau instability of a viscous jet in a viscous bath. PNAS, 2019 DOI: 10.1073/pnas.1914270116
Submitted via IRC for Fnord666
Uber sues Los Angeles to keep scooter location data private
Los Angeles wants a peek at the location data collected by the Uber scooters in its city. The company, better known for its ride-hailing service, doesn't want to give up the information, and is taking legal action to keep the data private.
On Monday, Uber filed a lawsuit against Los Angeles after months of refusing to give the Department of Transportation access to its scooter location data. In September 2018, LADOT instituted a requirement for all scooter companies to provide location data on the vehicles. The city said it was for city planning purposes.
Los Angeles' "Mobility Data Specification" plan represents one way local governments are trying to wrestle with the impact of technology companies. It's caught on in other cities such as Seattle; Austin, Texas; and Louisville, Kentucky, because they don't want to be caught flat-footed the way they were when ride-hailing companies first arrived and caused traffic headaches. But the request for real-time location data on scooters is a step too far, raising serious privacy concerns, Uber argued.
"Independent privacy experts have clearly and repeatedly asserted that a customer's geolocation is personally identifiable information, and -- consistent with a recent legal opinion by the California legislative counsel -- we believe that LADOT's requirements to share sensitive on-trip data compromises our customers' expectations of data privacy and security," an Uber spokesperson said. "Therefore, we had no choice but to pursue a legal challenge, and we sincerely hope to arrive at a solution that allows us to provide reasonable data and work constructively with the City of Los Angeles while protecting the privacy of our riders."
[...] "Given that we seem to have exhausted all other avenues to find a compromise solution, tomorrow we will file a lawsuit and seek a temporary restraining order in the Los Angeles Superior Court, so that a judge will hear these concerns and prevent the Los Angeles Department of Transportation from suspending our permit to operate," Uber's director of public affairs Colin Tooze said in a letter to LADOT on Monday.
LADOT's requests for location data have also faced criticism from privacy advocates like the Center for Democracy and Technology, as well as the Electronic Frontier Foundation.
In 1930, a year into the Great Depression, John Maynard Keynes sat down to write about the economic possibilities of his grandchildren. Despite widespread gloom as the global economic order fell to its knees, the British economist remained upbeat, saying that the ‘prevailing world depression … blind[s] us to what is going on under the surface’. In his essay, he predicted that in 100 years’ time, ie 2030, society would have advanced so far that we would barely need to work. The main problem confronting countries such as Britain and the United States would be boredom, and people might need to ration out work in ‘three-hour shifts or a 15-hour week [to] put off the problem’. At first glance, Keynes seems to have done a woeful job of predicting the future. In 1930, the average worker in the US, the UK, Australia and Japan spent 45 to 48 hours at work. Today, that figure is still around 38 hours.
Keynes has a legendary stature as one of the fathers of modern economics – responsible for much of how we think about monetary and fiscal policy. He is also famous for his quip at economists who deal only in long-term predictions: ‘In the long run, we are all dead.’ And his 15-hour working week prediction might have been more on the mark than it first appears.
If we wanted to produce as much as Keynes’s countrymen did in the 1930s, we wouldn’t need everyone to work even 15 hours per week. If you adjust for increases in labour productivity, it could be done in seven or eight hours, 10 in Japan (see graph below). These increases in productivity come from a century of automation and technological advances, allowing us to produce more stuff with less labour. In this sense, modern developed countries have way overshot Keynes’s prediction – we need to work only half the hours he predicted to match his lifestyle.
The progress over the past 90 years is not only apparent when considering workplace efficiency, but also when taking into account how much leisure time we enjoy. First consider retirement: a deal with yourself to work hard while you’re young and enjoy leisure time when you’re older. In 1930, most people never reached retirement age, simply labouring until they died. Today, people live well past retirement, living a third of their life work-free. If you take the work we do while we’re young and spread it across a total adult lifetime, it works out to less than 25 hours per week. There’s a second factor that boosts the amount of leisure time we enjoy: a reduction in housework. The ubiquity of washing machines, vacuum cleaners and microwave ovens means that the average US household does almost 30 hours less housework per week than in the 1930s. This 30 hours isn’t all converted into pure leisure. Indeed, some of it has been converted into regular work, as more women – who shoulder the major share of unpaid domestic labour – have moved into the paid labour force. The important thing is that, thanks to progress in productivity and efficiency, we all have more control over how we spend our time.
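The essay's arithmetic can be checked on the back of an envelope. The sketch below uses assumed round numbers (the productivity multiplier and the working/adult lifespans are illustrative stand-ins, not figures from the essay):

```python
# Back-of-envelope check of the claims above. The productivity multiplier
# and lifespan figures are assumptions chosen for illustration.
hours_1930 = 46              # average weekly hours in 1930 (45-48 in the text)
productivity_multiplier = 6  # assumed ~6x output per hour worked since 1930

# Hours needed today to match a 1930 worker's output:
print(round(hours_1930 / productivity_multiplier, 1))  # 7.7 - the essay's 7-8 hours

# Spread a working life across the whole adult lifetime:
working_years, adult_years = 40, 62  # e.g. work from 25-65, adulthood 18-80
weekly_hours = 38                    # today's average working week
print(round(weekly_hours * working_years / adult_years, 1))  # 24.5 - "less than 25"
```

With any plausible multiplier between five and seven, the equivalent working week lands in the essay's seven-to-eight-hour range.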
So if today’s advanced economies have reached (or even exceeded) the point of productivity that Keynes predicted, why are 30- to 40-hour weeks still standard in the workplace? And why doesn’t it feel like much has changed? This is a question about both human nature – our ever-increasing expectations of a good life – as well as how work is structured across societies.
Submitted via IRC for soylent_red
Recent years have seen a renewed interest in the clinical application of classic psychedelics in the treatment of depression and anxiety disorders. Researchers of the University of Zurich have now shown that mindfulness meditation can enhance the positive long-term effects of a single dose of psilocybin, which is found in certain mushrooms.
[...] Researchers at the University Hospital of Psychiatry Zurich have now for the first time examined the potential synergistic effects of combining mindfulness meditation and psilocybin. The scientists recruited 40 meditation experts who were taking part in a five-day mindfulness retreat. In the double-blind study, the participants were administered either a single dose of psilocybin or a placebo on the fourth day of the group retreat. Using various psychometric and neurocognitive measurements, the team of researchers were able to show that mindfulness meditation increased the positive effects of psilocybin, while counteracting possible dysphoric responses to the psychedelic experience. "Psilocybin markedly increased the incidence and intensity of self-transcendence virtually without inducing any anxiety compared to participants who received the placebo," says first author Lukasz Smigielski, who conducted the study directed by UZH professor of psychiatry Franz Vollenweider.
[...] "Our findings shed light on the interplay between pharmacological and extra-pharmacological factors in psychedelic states of mind," says Vollenweider. "They indicate that mindfulness training enhances the positive effects of a single dose of psilocybin, and can increase empathy and permanently reduce ego-centricity. This opens up new therapeutic avenues, for example for the treatment of depression, which is often accompanied by increased self-focus and social deficits."
Source: https://www.sciencedaily.com/releases/2019/10/191024075003.htm
Journal Reference: Lukasz Smigielski, Michael Kometer, Milan Scheidegger, Rainer Krähenmann, Theo Huber, Franz X. Vollenweider. Characterization and prediction of acute and sustained response to psychedelic psilocybin in a mindfulness group retreat. Scientific Reports, 2019; 9 (1) DOI: 10.1038/s41598-019-50612-3
MPs have condemned the level of IT failures at banks, warning that financial levies on firms and more regulation may be needed.
A Treasury Committee report said the frequency of online banking crashes and customer disruption was "unacceptable". The report, published on Monday, said with bank branches and cash machines closing, there was greater urgency to ensure online banking worked. Customers are being left "cashless and cut off", the report said. Firms could do much more to ensure their IT systems were resilient and to resolve complaints and compensation more quickly.
The MPs suggest the three major regulators - the Financial Conduct Authority, the Prudential Regulation Authority and the Bank of England - do not have the staff and experience to deal with the growing number of computer failures. An increase in the financial levies on banks may be needed to ensure that the regulators are adequately funded and resourced, the report says.
MPs are also worried about the increased use of third-party providers of cloud services for computing power and data storage. "The consequences of a major operational incident at a large cloud service provider, such as Microsoft, Google or Amazon, could be significant," the report said. "There is, therefore, a considerable case for the regulation of these cloud service providers to ensure high standards of operational resilience." Cloud services "stood out as such a source of systemic risk" for the financial system, the MPs said.
Submitted via IRC for soylent_red
Researchers from Åbo Akademi University, Finland, and Umeå University, Sweden, have for the first time obtained clear evidence of the important role strategies have in memory training. Training makes participants adopt various strategies to manage the task, which then affects the outcome of the training.
Strategy acquisition can also explain why the effects of memory training are so limited. Typically, improvements are limited only to tasks that are very similar to the training task -- training has provided ways to handle a given type of task, but not much else.
A newly published study sheds light on the underlying mechanisms of working memory training that have remained unclear. It rejects the original idea that repetitive computerized training can increase working memory capacity. Working memory training should rather be seen as a form of skill learning in which the adoption of task-specific strategies plays an important role. Hundreds of commercial training programs that promise memory improvements are available for the public. However, the effects of the programs do not extend beyond tasks similar to the ones one has been trained on.
Source: https://www.abo.fi/en/news/memory-training-builds-upon-strategy-use/
Journal Reference: Daniel Fellman, Jussi Jylkkä, Otto Waris, Anna Soveri, Liisa Ritakallio, Sarah Haga, Juha Salmi, Thomas J. Nyman, Matti Laine. The role of strategy use in working memory training outcomes. Journal of Memory and Language, 2020; 110: 104064 DOI: 10.1016/j.jml.2019.104064
Intel is taking legal action against a spider's web of patent holders from SoftBank-owned Fortress Investment Group and its network of subsidiaries.
The Japanese megacorp bought the group for $3.3bn in late 2017, and Chipzilla claims Fortress has become more aggressive in an effort to justify its sales price to its new owners.
Intel is suing the company under the Sherman and Clayton antitrust acts to "prevent and restrain Defendants' anticompetitive conduct".
Intel argues in court documents (PDF) that Fortress is asserting patent rights that would not have been considered enforceable by their original owners.
The documents also claim that Fortress has no interest in licensing these patents in the normal way, but prefers to boost the value of its patent portfolio by linking worthless patents with valuable ones.
This war chest of aggregated patents, Intel alleges, allows Fortress to bring case after case against a company until it folds or pays well over the market value for the intellectual property held to stop the litigation.
This strategy, Intel claims, makes it more likely that weak or unenforceable patents are found to be valid in the courts because they are aggregated with patents that may have some merit. It also gives Fortress the opportunity to gain sets of patents that could provide alternatives to each other, which damages competition in the same way that a merger of competing companies can.
Submitted via IRC for soylent_red
When cloud payroll provider MyPayrollHR abruptly closed its doors amid fraud allegations last month, it sent its roughly 1,000 clients, many of them small businesses, into disarray as employees saw their paychecks disappear from their accounts. The fallout continued this week after MyPayrollHR's third-party processor, Cachet Financial Services, announced it was no longer handling payroll transactions.
According to multiple reports, Cachet sent out an email to clients this week stating: "With extremely heavy hearts, we regret to inform you that after Friday, October 25th, Cachet will no longer be able to process your ACH activity."
Payroll companies associated with Cachet will now have to find another way to route funds to employees' bank accounts, as the company "will not handle any further wires, effective immediately." The company did not immediately respond to Gizmodo's request for comment.
So my fine soylentils, does anyone work for a company that used MyPayrollHR, and if so, how has this impacted you?
Source: https://gizmodo.com/employees-continued-to-get-screwed-by-mypayrollhr-fiasc-1839383187
The US Department of Justice managed to unravel an infamous dark web child-porn website, called "Welcome to Video", leading to the arrest of 337 people in 18 countries. They managed to do this not by breaking any encryption that was used but by tracing the Bitcoin transactions the site used for payment, and following the money. From CNN's report:
For almost three years, "Welcome To Video" was a covert den for people who traded in clips of children being sexually assaulted.
There, on the darknet's largest-known site of child exploitation videos, hundreds of users from around the world accessed material that showed the sexual abuse of children as young as six months old.
Then it all began to unravel.
On Wednesday, the United States' Department of Justice (DOJ) revealed how it had followed a trail of bitcoin transactions to find the suspected administrator of the site: A 23-year-old South Korean man named Jong Woo Son.
But the case is much bigger than just one man. Over the almost three years that the site was online, users downloaded files more than one million times, according to a newly unsealed DOJ indictment. At least 23 children in the US, Spain and the United Kingdom who were being abused by the users of the site have been rescued, the DOJ said in a press release.
More coverage here, here, and here. The indictment for Jong Woo Son is here. From Schneier on Security.
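"Following the money" on a public blockchain typically starts with address clustering. The toy below sketches the common-input-ownership heuristic investigators rely on (every transaction and address here is invented): addresses that appear together as inputs to one transaction are merged into one presumed wallet using union-find.

```python
# Toy address-clustering via the common-input-ownership heuristic: inputs
# spent together in one transaction are assumed to share an owner.
# All addresses and transactions below are invented for illustration.
from collections import defaultdict

class DSU:
    """Minimal disjoint-set union (union-find) with path compression."""
    def __init__(self):
        self.parent = {}
    def find(self, x):
        self.parent.setdefault(x, x)
        while self.parent[x] != x:
            self.parent[x] = self.parent[self.parent[x]]  # compress path
            x = self.parent[x]
        return x
    def union(self, a, b):
        self.parent[self.find(a)] = self.find(b)

# Each entry lists the input addresses of one (fictional) transaction.
transactions = [
    ["addr_A", "addr_B"],  # A and B spent together -> same presumed owner
    ["addr_B", "addr_C"],  # links C into the A/B cluster
    ["addr_D"],            # unrelated single-input transaction
]

dsu = DSU()
for inputs in transactions:
    for addr in inputs[1:]:
        dsu.union(inputs[0], addr)

clusters = defaultdict(set)
for addr in ["addr_A", "addr_B", "addr_C", "addr_D"]:
    clusters[dsu.find(addr)].add(addr)

print(sorted(sorted(c) for c in clusters.values()))
# [['addr_A', 'addr_B', 'addr_C'], ['addr_D']]
```

Real investigations combine clustering like this with exchange records: once any address in a cluster touches a regulated exchange, the whole cluster can potentially be tied to an identity.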
The Sydney Morning Herald reports that two Australian States now allow citizens to choose to have a licence issued in digital format and have it displayed on their smartphones.
New South Wales citizens are now finally able to display their driver's licence on their phones and use it as a form of ID at licenced premises.
But if you have a cracked screen, it may not get accepted because a clear screen is required for it to be used as valid ID; your phone must also be fully charged so that you can show your licence.
[...] The Service NSW app, which enables people to display their ID on iPhone and Android smartphones, was updated about 12pm on Monday for all NSW citizens to add their driver's licence.
Until now, only about 20,000 citizens in trial areas that included Sydney's eastern suburbs, Albury, and Dubbo were able to make use of a digital driver's licence.
[...] "Always carry your plastic card if you know you're going to need your driver licence, or if you plan to travel interstate. Ensure your phone screen is not cracked and your phone is charged," Service NSW warns. "It ... may take some time before all organisations will be ready to accept" the digital app.
While the licence is expected to be accepted at most venues, the state government is reminding people to still carry their plastic card "to avoid inconvenience", as some venues in NSW or other states and countries may not accept it as a valid form of ID. It's also reminding people not to use their phones while driving or riding when asked for ID, and that they do not have to hand over their unlocked phone in order for it to be verified by people such as security guards or police officers.
"You don't have to hand over your phone. You may be asked to refresh your licence, by pulling down and releasing," the Service NSW app says.
Anyone want to put more information on their phone?
GlobalFoundries and TSMC Sign Broad Cross-Licensing Agreement, Dismiss Lawsuits
GlobalFoundries and TSMC have announced this afternoon that they have signed a broad cross-licensing agreement, ending all of their ongoing legal disputes. Under the terms of the deal, the two companies will license each other's semiconductor-related patents granted so far, as well as any patents filed over the next 10 years.
Previously, GlobalFoundries had accused TSMC of patent infringement. At the time of the first lawsuit in August, TSMC said that the charges were baseless and that it would defend itself in court. In October, TSMC countersued its rival and, in turn, accused GlobalFoundries of infringing multiple patents. Now, less than a month after the countersuit, the two companies have agreed to sign a broad cross-licensing agreement and dismiss all ongoing litigation.
Previously: GlobalFoundries Sues TSMC for Patent Infringement, Seeking Import Ban
TSMC Countersues GlobalFoundries: Accuses US Fab of Infringing Patents Across Numerous Process Nodes
SpaceX President and COO Gwynne Shotwell has revealed that Starship can carry 400 Starlink satellites into orbit, up from the 60 recently launched using a Falcon 9 rocket. The cost per launch may be negligible:
Beyond Shotwell's clear confidence that Starlink's satellite technology is far beyond OneWeb and years ahead of Amazon's Project Kuiper clone, she also touched on yet another strength: SpaceX's very own vertically-integrated launch systems. OneWeb plans to launch the vast majority of its Phase 1 constellation on Arianespace's commercial Soyuz rockets, with the launch contract alone expected to cost more than $1B for ~700 satellites.
SpaceX, on the other hand, owns, builds, and operates its own rocket factory and high-performance orbital launch vehicles and is the only company on Earth to have successfully fielded reusable rockets. In short, although Starlink's voracious need for launch capacity will undoubtedly require some major direct investments, a large portion of SpaceX's Starlink launch costs can be perceived as little more than the cost of propellant, work-hours, and recovery fleet operations. Boosters (and hopefully fairings) can be reused ad nauseam, and so long as SpaceX sticks to its promise to put customer missions first, the practical opportunity cost of each Starlink launch should be close to zero.
[...] Shotwell revealed that a single Starship-Super Heavy launch should be able to place at least 400 Starlink satellites in orbit – a combined payload mass of ~120 metric tons (265,000 lb). Even if the cost of a Starship launch remained identical to Starlink v0.9's flight-proven Falcon 9, packing almost seven times as many Starlink satellites would singlehandedly cut the relative cost of launch per satellite by more than the 5X figure Musk noted.
In light of this new figure of 400 satellites per individual Starship launch, it's far easier to understand why SpaceX took the otherwise ludicrous step of reserving space for tens of thousands more Starlink satellites. Even if SpaceX arrives at a worst-case scenario and is only able to launch Starship-Super Heavy once every 4-8 weeks for the first several years, that could translate to 2400-4800 Starlink satellites placed in orbit every year. Given that 120 tons to LEO is well within Starship's theoretical capabilities without orbital refueling, it's entirely possible that Starship could surpass Falcon 9's Starlink mass-to-orbit almost immediately after it completes its first orbital launch and recovery: a single Starship launch would be equivalent to almost 7 Falcon 9 missions.
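The cadence arithmetic above checks out with a quick sketch (the launch rates are the article's assumed worst case; all figures are illustrative, not confirmed plans):

```python
# Quick check of the Starlink deployment arithmetic in the article.
# Cadence and payload figures are the article's assumptions.
sats_per_starship = 400
sats_per_falcon9 = 60

low_launches = 52 // 8   # one launch every 8 weeks -> 6 per year
high_launches = 52 // 4  # one launch every 4 weeks -> 13 per year
print(low_launches * sats_per_starship,
      high_launches * sats_per_starship)  # 2400 5200 satellites per year

print(round(sats_per_starship / sats_per_falcon9, 1))  # 6.7: "almost 7" Falcon 9s
```

The small spread versus the article's 2400-4800 figure comes down to whether launches per year are rounded up or down.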
The Starlink constellation can begin commercial operations with just 360-400 satellites, or 1,200 for global coverage. SpaceX has demonstrated a 610 Mbps connection to an in-flight U.S. military C-12 aircraft. SpaceX is planning to launch 60 additional Starlink satellites in November, marking the first reuse of a thrice-flown Falcon 9 booster.
Also at CNBC.
Previously: Third Time's the Charm! SpaceX Launch Good; Starlink Satellite Deployment Coming Up [Updated]
SpaceX Provides Update on Starship with Assembled Prototype as the Backdrop
SpaceX Requests Permission to Launch an Additional 30,000 Starlink Satellites, to a Total of 42,000+
Elon Musk Sends Tweet Via SpaceX's Starlink Satellite Broadband
SpaceX: Land Starship on Moon Before 2022, Then Do Cargo Runs for 2024 Human Landing
GlobalFoundries Teams Up with Singapore University for ReRAM Project
GlobalFoundries has announced that the company has teamed up with Singapore's Nanyang Technological University and the National Research Foundation to develop resistive random access memory (ReRAM). The next-generation memory technology could ultimately pave the way for use as a very fast non-volatile high-capacity embedded cache. The project will take four years and will cost S$120 million ($88 million).
[...] Right now, GlobalFoundries (and other contract makers of semiconductors) use eFlash (embedded flash) for chips that need relatively high-capacity onboard storage. This technology has numerous limitations, such as endurance and performance, when manufactured using today's advanced logic technologies (i.e., sub-20nm nodes), which is what embedded memories require. This is the main reason why GlobalFoundries and other chipmakers are looking at magnetoresistive RAM (MRAM) to replace eFlash in future designs, as it is considered the most durable non-volatile memory technology that exists today that can be made using contemporary logic fabrication processes.
MRAM relies on reading the magnetic anisotropy (orientation) of two ferromagnetic films separated by a thin barrier, and thus does not require an erase cycle before writing data, which makes it substantially faster than eFlash. Furthermore, its writing process requires considerably less energy. On the flip side, MRAM's density is relatively low, and its magnetic anisotropy decreases [open, DOI: 10.1038/s41598-018-32641-6] [DX] at low temperatures, which rules it out for some applications; it remains very promising for the majority of use cases that do not involve low temperatures.