Coin-sized pieces of graphene can be accelerated by firing low-powered lasers at them in micro-gravity conditions, say scientists. The technology could be a stepping stone to graphene solar sails, which could propel future spacecraft using starlight or a laser array.
The material was developed at SCALE Nanotech, a startup in Estonia and Germany, with the support of Delft University of Technology in the Netherlands. The project, backed by the European Space Agency, is experimenting with graphene to develop prototype light sails.
"Light sailing is the only existing in-space propulsion technology that could allow us to visit other star systems in a human lifespan," the scientists stated in a paper, published in Acta Astronautica. "In order to best harness radiation pressure, light sails need to be highly reflective, lightweight and mechanically robust."
To make these sails, the team crafted an atomically thin 2D film punctured with tiny holes and covered it with a layer of graphene. Next, they traveled to the ZARM drop tower, a laboratory at the University of Bremen, Germany, that uses a 146-metre steel tube to simulate micro-gravity conditions, to test their graphene coins. When the material was dropped inside, floating effectively weightless, it was accelerated at 1 m/s^2 by zapping it with a 1 W laser. The photons in the laser light exerted pressure on the material, causing it to move faster and faster.
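The reported figures are consistent with simple radiation-pressure arithmetic. A back-of-the-envelope check in Python, assuming a fully absorbing sail (a perfect reflector would feel twice the force):

    # Radiation-pressure sanity check: force from a 1 W laser on an
    # absorbing sail, and the sail mass implied by a = 1 m/s^2.
    c = 3.0e8                  # speed of light, m/s
    laser_power = 1.0          # W
    accel = 1.0                # observed acceleration, m/s^2

    force = laser_power / c    # F = P/c for full absorption, ~3.3e-9 N
    mass = force / accel       # m = F/a, ~3.3e-9 kg: a few nanograms

    # Graphene's areal density is roughly 7.6e-7 kg/m^2, so a few
    # nanograms of sail corresponds to a few square millimetres of area.
    print(f"force = {force:.1e} N, mass = {mass:.1e} kg, "
          f"area = {mass / 7.6e-7:.1e} m^2")

A sail that must weigh only nanograms is exactly why an atomically thin material like graphene is attractive here.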
[...] Santiago Cartamil-Bueno, coauthor of the paper and the leader of the GrapheneSail team at Scale Nanotech, believes "graphene is part of the solution" to developing practical light sails.
Journal Reference:
Rocco Gaudenzi, Davide Stefani, Santiago Jose Cartamil-Bueno. "Light-induced propulsion of graphene-on-grid sails in microgravity", Acta Astronautica (DOI: 10.1016/j.actaastro.2020.03.030)
Arthur T Knackerbracket has found the following story:
In a study recently published in the Monthly Notices of the Royal Astronomical Society, Dr. Jade Powell and Dr. Bernhard Mueller from the ARC Centre of Excellence for Gravitational Wave Discovery (OzGrav) simulated three core-collapse supernovae using supercomputers from across Australia, including the OzSTAR supercomputer at Swinburne University of Technology. The three simulated progenitor stars are 39 times, 20 times and 18 times more massive than our sun.
[...] Core-collapse supernovae are the explosive deaths of massive stars at the end of their lifetime. They are some of the most luminous objects in the universe and are the birthplace of black holes and neutron stars. The gravitational waves detected from these supernovae help scientists better understand the astrophysics of black holes and neutron stars.
[...] To detect a core-collapse supernova in gravitational waves, scientists need to predict what the gravitational wave signal will look like. They use supercomputers to simulate these cosmic explosions to understand their complicated physics. This allows them to predict what the detectors will see when a star explodes and its observable properties.
[...] OzGrav postdoctoral researcher Jade Powell says, "Our models are 39 times, 20 times and 18 times more massive than our sun. The 39-solar mass model is important because it's rotating very rapidly, and most previous long duration core-collapse supernova simulations do not include the effects of rotation."
The two most massive models produced energetic explosions powered by neutrinos, but the smallest model did not explode. Stars that do not explode emit lower-amplitude gravitational waves, but the frequency of their gravitational waves lies in the most sensitive range of gravitational-wave detectors.
[...] The rapidly rotating model showed large gravitational-wave amplitudes that would make the exploding star detectable almost 6.5 million light years away by the next generation of gravitational-wave detectors, like the Einstein Telescope.
Journal Reference:
Jade Powell et al. "Three-dimensional core-collapse supernova simulations of massive and rotating progenitors", Monthly Notices of the Royal Astronomical Society (2020). DOI: 10.1093/mnras/staa1048
Thunderbolt flaw lets hackers steal your data in 'five minutes':
Attackers can steal data from Thunderbolt-equipped Windows or Linux PCs, even if the computer is locked and the data encrypted, according to security researcher Björn Ruytenberg (via Wired). Using a relatively simple technique called "Thunderspy," someone with physical access to your machine could nab your data in just five minutes with a screwdriver and "easily portable hardware," he wrote.
Thunderbolt offers extremely fast transfer speeds by giving devices direct access to your PC's memory, which also creates a number of vulnerabilities. Researchers previously thought those weaknesses (dubbed Thunderclap) could be mitigated by disallowing access to untrusted devices or disabling Thunderbolt altogether while allowing DisplayPort and USB-C access.
However, Ruytenberg's attack method could get around even those settings by changing the firmware that controls the Thunderbolt port, allowing any device to access it. What's more, the hack leaves no trace, so the user would never know their PC was altered.
[...] The attack only requires about $400 worth of gear, including an SPI programmer and a $200 Thunderbolt peripheral. The whole thing could be built into a single small device. "Three-letter agencies would have no problem miniaturizing this," Ruytenberg said.
Intel recently created a Thunderbolt security system called Kernel Direct Memory Access Protection that would stop Ruytenberg's Thunderspy attack. However, that protection is only available on computers made in 2019 and later, leaving earlier models exposed. Many PCs from Dell, HP and Lenovo manufactured in 2019 and later aren't protected either. This vulnerability might explain why Microsoft didn't include Thunderbolt in its Surface laptops.
Apple computers running macOS are unaffected by the vulnerability unless you're running Boot Camp, according to Ruytenberg.
Intel's official response appears in this blog post.
See Spycheck to test if your system is vulnerable.
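On Linux, the kernel also exposes Thunderbolt security information through sysfs. A minimal Python sketch reading the documented attributes (they are simply absent on machines without Thunderbolt, and this is a rough complement to Spycheck, not a replacement):

    # Report each Thunderbolt domain's security level and whether the
    # kernel reports IOMMU-based (Kernel DMA Protection style) protection.
    from pathlib import Path

    for domain in sorted(Path("/sys/bus/thunderbolt/devices").glob("domain*")):
        level = (domain / "security").read_text().strip()
        dma = domain / "iommu_dma_protection"
        protected = dma.exists() and dma.read_text().strip() == "1"
        print(f"{domain.name}: security level = {level}, "
              f"kernel DMA protection = {'yes' if protected else 'no'}")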
Arthur T Knackerbracket has found the following story:
Now, researchers at the National Institute of Standards and Technology (NIST) and their colleagues at the University of Maryland have developed a step-by-step recipe to produce the atomic-scale devices. Using these instructions, the NIST-led team has become only the second in the world to construct a single-atom transistor and the first to fabricate a series of single electron transistors with atom-scale control over the devices' geometry.
[...] Precise control over quantum tunneling is key because it enables the transistors to become "entangled" or interlinked in a way only possible through quantum mechanics and opens new possibilities for creating quantum bits (qubits) that could be used in quantum computing.
To fabricate single-atom and few-atom transistors, the team relied on a known technique in which a silicon chip is covered with a layer of hydrogen atoms, which readily bind to silicon. The fine tip of a scanning tunneling microscope then removed hydrogen atoms at selected sites. The remaining hydrogen acted as a barrier so that when the team directed phosphine gas (PH3) at the silicon surface, individual PH3 molecules attached only to the locations where the hydrogen had been removed. The researchers then heated the silicon surface. The heat ejected hydrogen atoms from the PH3 and caused the phosphorus atom that was left behind to embed itself in the surface. With additional processing, the bound phosphorus atoms created the foundation of a series of highly stable single- or few-atom devices that have the potential to serve as qubits.
Two of the steps in the method devised by the NIST teams—sealing the phosphorus atoms with protective layers of silicon and then making electrical contact with the embedded atoms—appear to have been essential to reliably fabricate many copies of atomically precise devices, NIST researcher Richard Silver said.
[...] "Because quantum tunneling is so fundamental to any quantum device, including the construction of qubits, the ability to control the flow of one electron at a time is a significant achievement," [researcher Jonathan] Wyrick said. In addition, as engineers pack more and more circuitry on a tiny computer chip and the gap between components continues to shrink, understanding and controlling the effects of quantum tunneling will become even more critical, Richter said.
Journal Reference:
Jonathan Wyrick, Xiqiao Wang, Ranjit V. Kashid, et al. "Atom-by-Atom Fabrication of Single and Few Dopant Quantum Devices", Advanced Functional Materials (DOI: 10.1002/adfm.201903475)
The major drivers in long-term climate models are water and air contamination (through the effects of greenhouse gases) and deforestation. Much hay and discussion has been made about the former's level of contribution, but the latter is not in dispute. Between 2000 and 2012, 2.3 million km^2 of forest was cut down, which amounts to roughly 2 × 10^5 km^2 per year. At this rate, all the forests would disappear in approximately 100–200 years. Two mathematical biologists considered only the deforestation part and modeled the survivability of our species given the rate at which the forests are disappearing. Their results were not very promising.
In conclusion our model shows that a catastrophic collapse in human population, due to resource consumption, is the most likely scenario of the dynamical evolution based on current parameters. Adopting a combined deterministic and stochastic model we conclude from a statistical point of view that the probability that our civilisation survives itself is less than 10% in the most optimistic scenario. Calculations show that, maintaining the actual rate of population growth and resource consumption, in particular forest consumption, we have a few decades left before an irreversible collapse of our civilisation (see Fig. 5). Making the situation even worse, we stress once again that it is unrealistic to think that the decline of the population in a situation of strong environmental degradation would be a non-chaotic and well-ordered decline. This consideration leads to an even shorter remaining time.
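For readers who want to experiment with the dynamics, here is a minimal sketch in the spirit of the paper's coupled population-forest model. The functional forms and parameters below are illustrative stand-ins rather than the authors' calibrated values; only the deforestation rate (about 2 × 10^5 km^2 per year, as above) and the paper's figure of roughly 4 × 10^7 km^2 of remaining forest anchor the numbers.

    # Toy coupled model: population N grows logistically with a carrying
    # capacity tied to remaining forest R; forest regrows slowly and is
    # consumed in proportion to population. Illustrative parameters only.
    def simulate(years=400, dt=0.01):
        N, R = 7.5e9, 4.0e7          # people; forest area, km^2
        r_n, r_f = 0.01, 0.001       # population growth, forest regrowth (1/yr)
        beta = 1.0e3                 # people supportable per km^2 of forest
        per_capita = 2.0e5 / 7.5e9   # km^2 consumed per person per year
        t = 0.0
        while t < years and R > 1e4:
            dN = r_n * N * (1 - N / (beta * R))
            dR = r_f * R * (1 - R / 4.0e7) - per_capita * N
            N, R, t = N + dN * dt, R + dR * dt, t + dt
        return t, N, R

    t, N, R = simulate()
    print(f"forest effectively exhausted near t = {t:.0f} years "
          f"(N = {N:.2e}, R = {R:.2e} km^2)")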
They also note that, according to the Kardashev scale, a Type II civilization needs to be able to harness the total energy output of its star in order to spread across its own stellar system. Our civilization is many orders of magnitude away from that point, and it looks like we are exhausting all of our available resources before achieving that technological level. If you believe the mediocrity principle applies, then perhaps the answer to the Fermi paradox question of "where is everybody?" is simply "they are all dead."
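To put "many orders of magnitude" into numbers, Sagan's interpolation of the Kardashev scale is K = (log10(P) - 6) / 10, with P the harnessed power in watts:

    from math import log10

    P_humanity = 2e13   # rough current world power consumption, W
    P_sun = 3.8e26      # total solar luminosity, the Type II benchmark, W

    print(f"K = {(log10(P_humanity) - 6) / 10:.2f}")             # ~0.73
    print(f"shortfall: {log10(P_sun / P_humanity):.0f} orders")  # ~13

By this rough measure humanity sits near K = 0.73, about thirteen orders of magnitude short of a Type II civilization.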
Journal Reference:
Bologna, M., Aquino, G. "Deforestation and world population sustainability: a quantitative analysis", Scientific Reports 10, 7631 (2020). https://doi.org/10.1038/s41598-020-63657-6
Washington in talks with chipmakers about building U.S. factories:
(Reuters) - The Trump administration is in talks with semiconductor companies about building chip factories in the United States, representatives from two chipmakers said on Sunday.
Intel Corp (INTC.O) is in discussions with the United States Department of Defense over improving domestic sources for microelectronics and related technology, Intel spokesman William Moss said in an emailed statement.
"Intel is well positioned to work with the U.S. government to operate a U.S.-owned commercial foundry and supply a broad range of secure microelectronics", the statement added.
Taiwan Semiconductor Manufacturing Co (TSMC) (2330.TW), on the other hand, has been in talks with the U.S. Department of Commerce about building a U.S. factory but said it has not made a final decision yet.
"We are actively evaluating all the suitable locations, including in the U.S., but there is no concrete plan yet", TSMC spokeswoman Nina Kao said in a statement.
[...] The Trump administration's discussions with chipmakers were reported earlier by the Wall Street Journal, with the report adding that TSMC also has been talking with Apple Inc (AAPL.O), one of its largest customers, about building a chip factory in the United States.
[...] The Journal had also reported that U.S. officials are looking at helping South Korea's Samsung Electronics Co (005930.KS), which has a chip factory in Austin, Texas, to expand its contract-manufacturing operations in the United States.
The U.S. Commerce Department, Samsung and Apple did not respond to requests for comment on Sunday.
From Science Alert:
In the early morning of 30 June 1908, something exploded over Siberia. The event shattered the normal stillness of the sparsely populated taiga and was so powerful that it flattened an area of forest 2,150 square kilometres (830 square miles) in size, felling an estimated 80 million trees.
[...] It is often referred to as the "largest impact event in recorded history", even though no impact crater was found. Later searches have turned up fragments of rock that could be meteoric in origin, but the event still has a looming question mark. Was it really a bolide? And if it wasn't, what could it be?
Well, it's possible we'll never actually know... but according to a recent peer-reviewed paper, a large iron asteroid entering Earth's atmosphere and skimming the planet at relatively low altitude before flying back into space could have produced the effects of the Tunguska event, generating a shock wave that devastated the surface.
We have had several submissions in the past couple of days about the Ferguson code.
For those that don't know: the Ferguson code, also known as the Ferguson Model or the Imperial College Model, is the epidemiology prediction software and the underlying model upon which the UK government is basing all its decisions relating to combating COVID-19.
It appears that there is some question about the accuracy of the model and the repeatability of its predictions.
Thanks to NPC-131072, FatPhil, and nutherguy for their submissions. Details begin below the fold.
Details of the model [Ferguson's] team built to predict the epidemic are emerging and they are not pretty. In the respective words of four experienced modellers, the code is "deeply riddled" with bugs, "a fairly arbitrary Heath Robinson machine", has "huge blocks of code - bad practice" and is "quite possibly the worst production code I have ever seen".
When ministers make statements about coronavirus policy they invariably say that they are "following the science". But cutting-edge science is messy and unclear, a contest of ideas arbitrated by facts, a process of conjecture and refutation. This is not new. A century and a half ago Thomas Huxley described the "great tragedy of science - the slaying of a beautiful hypothesis by an ugly fact."
In this case, that phrase "the science" effectively means the Imperial College model, which forecast potentially hundreds of thousands of deaths and on whose output the Government instituted the lockdown in March. Sage's advice has a huge impact on the lives of millions. Yet the committee meets in private, publishes no minutes, and until it was put under pressure did not even release the names of its members. We were making decisions based on the output of a black box, and a locked one at that.
It has become commonplace among financial forecasters, the Treasury, climate scientists, and epidemiologists to cite the output of mathematical models as if it was "evidence". The proper use of models is to test theories of complex systems against facts. If instead we are going to use models for forecasting and policy, we must be able to check that they are accurate, particularly when they drive life and death decisions. This has not been the case with the Imperial College model.
At the time of the lockdown, the model had not been released to the scientific community. When Ferguson finally released his code last week, it was a reorganised program different from the version run on March 16.
[...] We now know that the model's software is a 13-year-old, 15,000-line program that simulates homes, offices, schools, people and movements. According to a team at Edinburgh University which ran the model, the same inputs give different outputs, and the program gives different results if it is run on different machines, and even if it is run on the same machine using different numbers of central-processing units.
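One plausible mechanism for that CPU-count sensitivity (a toy illustration, not a diagnosis of the Ferguson code): floating-point addition is not associative, so a parallel reduction that partitions identical inputs differently can produce different totals.

    import random

    random.seed(0)
    values = [random.uniform(-1e9, 1e9) for _ in range(100_000)]

    serial = sum(values)                            # single-threaded order
    chunks = [values[i::4] for i in range(4)]       # a 4-way work split
    parallel = sum(sum(chunk) for chunk in chunks)  # sum of partial sums

    print(serial == parallel)       # typically False
    print(abs(serial - parallel))   # small but nonzero rounding difference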
Worse, the code does not allow for large variations among groups of people with respect to their susceptibility to the virus and their social connections. An infected nurse in a hospital is likely to transmit the virus to many more people than an asymptomatic child. Introducing such heterogeneity shows that the threshold to achieve herd immunity with modest social distancing is much lower than the 50-60 per cent implied by the Ferguson model. One experienced modeller tells us that "my own modelling suggests that somewhere between 10 per cent and 30 per cent would suffice, depending on what assumptions one makes."
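The arithmetic behind that claim can be sketched with one heterogeneity correction published during the 2020 debate, for gamma-distributed susceptibility with coefficient of variation CV; the R0 and CV values here are purely illustrative.

    # Homogeneous herd-immunity threshold vs a gamma-heterogeneity correction.
    R0 = 2.5
    print(f"homogeneous, 1 - 1/R0: {1 - 1 / R0:.0%}")   # 60%
    for cv in (0.5, 1.0, 2.0):
        hit = 1 - (1 / R0) ** (1 / (1 + cv**2))
        print(f"CV = {cv}: {hit:.0%}")                  # 52%, 37%, 17%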
Code Review of Ferguson's Model – Lockdown Sceptics:
by Sue Denim (not the author's real name)
Imperial finally released a derivative of Ferguson's code. I figured I'd do a review of it and send you some of the things I noticed. I don't know your background so apologies if some of this is pitched at the wrong level.
[...] It isn't the code Ferguson ran to produce his famous Report 9. What's been released on GitHub is a heavily modified derivative of it, after having been upgraded for over a month by a team from Microsoft and others. This revised codebase is split into multiple files for legibility and written in C++, whereas the original program was "a single 15,000 line file that had been worked on for a decade" (this is considered extremely poor practice). A request for the original code was made 8 days ago but ignored, and it will probably take some kind of legal compulsion to make them release it. Clearly, Imperial are too embarrassed by the state of it ever to release it of their own free will, which is unacceptable given that it was paid for by the taxpayer and belongs to them.
What it's doing is best described as "SimCity without the graphics". It attempts to simulate households, schools, offices, people and their movements, etc. I won't go further into the underlying assumptions, since that's well explored elsewhere.
Due to bugs, the code can produce very different results given identical inputs. They routinely act as if this is unimportant.
This problem makes the code unusable for scientific purposes, given that a key part of the scientific method is the ability to replicate results. Without replication, the findings might not be real at all – as the field of psychology has been finding out to its cost. Even if their original code was released, it's apparent that the same numbers as in Report 9 might not come out of it.
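Nondeterminism is not inherent to stochastic models: seed the random number generator explicitly and a run becomes bit-for-bit repeatable, which is the property replication requires. A minimal sketch (the structure is invented for illustration, not taken from the Imperial code):

    import random

    def run_epidemic(seed, days=40, beta=0.3):
        rng = random.Random(seed)   # private, explicitly seeded generator
        infected = 1
        for _ in range(days):
            # each infected person infects one more with probability beta
            infected += sum(rng.random() < beta for _ in range(infected))
        return infected

    print(run_epidemic(42) == run_epidemic(42))   # True: same seed, same result
    print(run_epidemic(42) == run_epidemic(43))   # False: different seed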
We Now Know Far More About COVID-19 - The Lockdown Should End:
On March 23rd, when Boris Johnson declared a lockdown in the UK, it was a beyond surreal moment for me. With no debate, our freedoms, social life and jobs were gone.
The reasons given for the lockdown were to try to save lives, slow the spread of the virus and limit the impact on the NHS. It sounds good until you start to pose searching questions. Confining people to their homes and a complete loss of social life come with their own set of serious problems. Focusing on Covid-19 means operations that other people need are postponed for months.
We had heard about other so-called pandemics that turned out to be nothing of the sort, swine flu being one example. What was different about Covid-19? Johnson had seemed to be going the way of putting in place some mitigation recommendations, like social distancing, hand washing and isolation of the elderly. Then he changed his mind.
The reason was the number of possible deaths that could occur if a full lockdown was not implemented. The numbers came from Prof Neil Ferguson of Imperial College London.
Ferguson had told the government that, according to his computer model, over 500,000 people would die in the UK if it did nothing, and 250,000 people would die if it continued with lesser mitigation in place while allowing businesses to stay open as usual. With a full lockdown, deaths would be 20,000 or fewer, and the impact on the NHS would be kept to a minimum.
What immediately struck me was that Ferguson's computer model is just that, a model: an estimate based on certain data. Its projections could be totally wrong; we've all heard the expression "garbage in, garbage out". Why on earth would Johnson decide to implement such drastic measures based on a theoretical computer model?
It was also disturbing to find out that Ferguson has a lot of form for making highly exaggerated claims with his computer models.
Wonder why you're in lockdown? Wonder no more, the code review is in.
I predict this story will be better commented than the original code.
Original Submission #1 Original Submission #2 Original Submission #3 Original Submission #4
Programming Languages: Python Developers Reveal What They Use It For And Their Top Tools:
Data science is often cited as one of the main reasons for Python's growing popularity. But while people are definitely using Python for data analysis and machine learning, not many of those using Python actually identify their role as data scientist in the Python Software Foundation's (PSF) new 2019 developer survey, which was carried out by IDE-maker JetBrains.
[...] The survey is based on responses from 24,000 Python developers from 150 countries.
[...] The order hasn't changed this year, with data analysis remaining Python's top purpose at 59%, followed by web development at 51%, and machine learning at 40%.
Other major applications of Python include DevOps and system administration (39%), programming web tools like crawlers (37%), software testing (31%), education (26%), software prototyping (25%), network programming (21%), desktop development (18%), computer graphics (14%), embedded system development (8%), game development (7%) and mobile development (6%).
However, at 28%, web development remains the top purpose when respondents were asked what they used Python for the most. It is followed by data analysis (18%), machine learning (13%), and DevOps and system administration (9%).
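For a quick visual, those "used most" shares can be re-keyed and charted (matplotlib assumed installed):

    import matplotlib.pyplot as plt

    primary_uses = {
        "Web development": 28,
        "Data analysis": 18,
        "Machine learning": 13,
        "DevOps / sysadmin": 9,
    }
    plt.barh(list(primary_uses), list(primary_uses.values()))
    plt.xlabel("% of respondents (primary use)")
    plt.title("Top primary uses of Python, PSF/JetBrains 2019 survey")
    plt.tight_layout()
    plt.show()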
[...] Which cloud platform do Python developers prefer most? Not surprisingly, Amazon Web Services dominates with a share of 55%, followed by Google Cloud Platform with a 33% share.
A further 22% of Python developers use DigitalOcean, and 20% use Heroku. Microsoft Azure comes in at fifth place with a 19% share while 12% use PythonAnywhere.
[...] The top three sources for Python installation and upgrades are the operating system, followed by python.org, and Anaconda. Some 68% of Python developers are building on Linux, followed by Windows at 48%, while macOS has a 29% share.
The top web frameworks for Python are Flask and Django, while the leading data-science frameworks and libraries are NumPy, Pandas, Matplotlib, SciPy, SciKit-learn, TensorFlow, Keras, Seaborn, Facebook's PyTorch, and NLTK.
The PyCharm integrated development environment (IDE) from JetBrains is once again the top IDE with a 33% share, followed by Microsoft's open-source cross-platform editor VS Code with a 24% share.
[...] The survey found that 44% of users have just two years' experience and 30% have three to five years' experience.
France is using AI to check whether people are wearing masks on public transport:
France is integrating new AI tools into security cameras in the Paris metro system to check whether passengers are wearing face masks.
The software, which has already been deployed elsewhere in the country, began a three-month trial in the central Chatelet-Les Halles station of Paris this week, reports Bloomberg. French startup DatakaLab, which created the program, says the goal is not to identify or punish individuals who don’t wear masks, but to generate anonymous statistical data that will help authorities anticipate future outbreaks of COVID-19.
“We are just measuring this one objective,” DatakaLab CEO Xavier Fischer told The Verge. “The goal is just to publish statistics of how many people are wearing masks every day.”
The pilot is one of a number of measures cities around the world are introducing as they begin to ease lockdown measures and allow people to return to work. Although France, like the US, initially discouraged citizens from wearing masks, the country has now made them mandatory on public transport. It’s even considering introducing fines of €135 ($145) for anyone found not wearing a mask on the subway, trains, buses, or taxis.
Arthur T Knackerbracket has found the following story:
Russia's space agency on Sunday confirmed that a rocket stage used in a past launch and still floating in space has broken up, leaving debris in orbit.
The agency said the Fregat-SB upper stage rocket was used to deliver the Russian scientific satellite Spektr-R to orbit in 2011.
"The breakdown happened on May 8 2020" between 0500 and 0600GMT, above the Indian ocean, the agency said in a statement.
"Currently we are working to collect data to confirm the quantity and orbit parameters of the fragments," it said.
Wikipedia entry on the Spektr-R.
High-frequency audio could be used to stealthily track netizens
Technical folks looking to improve web privacy haven't been able to decide whether sound beyond the range of human hearing poses enough of a privacy risk to merit restriction.
People can generally hear audio frequencies ranging from 20 Hz to 20,000 Hz, though individual hearing ranges vary. Audio frequencies below and above the threshold of human hearing are known as infrasound and ultrasound, respectively.
[...] A warning from America's trade watchdog, the FTC, in 2016, together with research published the following year identifying 234 Android apps listening covertly for ultrasound beacons, helped discourage inaudible tracking.
Several of the companies called out for these privacy-invading practices, such as SilverPush, have moved on to other sorts of services. But the ability to craft code that communicates silently with mobile devices through inaudible sound remains a possibility, both for native apps and web apps. Computer security researchers continue to find novel ways to use inaudible audio for data exfiltration. And ultrasound is still used for legitimate operations – Google's Cast app, for example, relies on an ultrasonic token when pairing with a nearby Chromecast.
[...] Weiler raised the subject three weeks ago – one element in a larger debate about reducing the fingerprinting surface of the Web Audio API. And last week, the discussion thread was closed by Raymond Toy, a Google software engineer and co-chair of the W3C's Audio Working Group.
Toy argued that if a developer is allowed to use a specific audio sampling rate, no additional permission should be required – few users enjoy dealing with permission prompts, after all. And other web developers participating in the debate expressed concern that limiting available frequency ranges could introduce phase shifting or latency and that there's no sensible lower or upper threshold suitable for everyone.
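The sampling-rate connection is the Nyquist limit: a path sampled at rate fs can only represent frequencies below fs/2, so the 44.1 kHz and 48 kHz rates ubiquitous in consumer audio already reach the 18-20 kHz near-ultrasonic band that beacons have used. A minimal NumPy sketch of generating such a carrier:

    import numpy as np

    fs = 48_000                # sampling rate, Hz; Nyquist limit is fs / 2
    f_beacon = 19_000          # near-ultrasonic tone, inaudible to most adults

    t = np.arange(fs) / fs     # one second of sample times
    tone = 0.1 * np.sin(2 * np.pi * f_beacon * t)

    assert f_beacon < fs / 2   # representable without aliasing
    print(f"Nyquist limit at fs = {fs} Hz: {fs // 2} Hz")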
Alien life could be growing in more places than we realised, study suggests:
Alien life could flourish in many more kinds of environment than we had previously realised, a new study has suggested.
In the new research, scientists found that microorganisms could survive and grow in an atmosphere made entirely of hydrogen. That suggests the same could be happening elsewhere in the universe, the researchers indicate, and that alien life could be growing in similar places.
Away from Earth, there are many exoplanets that are much bigger than our planet and have large amounts of hydrogen in their atmospheres. Those atmospheres tend to extend further than ones similar to Earth's, meaning they are easier to see through the telescopes we use to scour the universe for alien planets.
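That "extended atmosphere" point follows from the atmospheric scale height, H = k_B * T / (mu * m_H * g): the lighter the gas, the puffier the envelope. A quick comparison with illustrative Earth-like numbers:

    k_B = 1.381e-23    # Boltzmann constant, J/K
    m_H = 1.673e-27    # hydrogen atom mass, kg
    g, T = 9.81, 290   # surface gravity (m/s^2) and temperature (K), Earth-like

    for label, mu in (("N2/O2, Earth-like", 29), ("H2-dominated", 2)):
        H = k_B * T / (mu * m_H * g)   # scale height, m
        print(f"{label}: H ~ {H / 1000:.0f} km")

    # ~8 km vs ~120 km: an H2 envelope is more than an order of magnitude
    # puffier, leaving a far larger fingerprint in transit spectra.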
[...] Researchers hope that if such microorganisms are growing on other planets, they may one day be detectable from Earth. They tend to produce a huge variety of gases, which could eventually become thick enough on their home planets that we would be able to spot them from across the universe, they suggest.
The discovery also shows how experiments in labs on Earth could help illuminate the search for alien life on other planets, they write in the study.
The paper, 'Laboratory studies on the viability of life in H2-dominated exoplanet atmospheres', is published in Nature Astronomy.
System administrator Chris Siebenmann has found that modern versions of systemd can cause an unmount storm during shutdowns:
One of my discoveries about Ubuntu 20.04 is that my test machine can trigger the kernel's out of memory killing during shutdown. My test virtual machine has 4 GB of RAM and 1 GB of swap, but it also has 347 NFS[*] mounts, and after some investigation, what appears to be happening is that in the 20.04 version of systemd (systemd 245 plus whatever changes Ubuntu has made), systemd now seems to try to run umount for all of those filesystems all at once (which also starts a umount.nfs process for each one). On 20.04, this is apparently enough to OOM[**] my test machine.
[...] Unfortunately, so far I haven't found a way to control this in systemd. There appears to be no way to set limits on how many unmounts systemd will try to do at once (or, in general, how many units it will try to stop at once, even if that requires running programs). Nor can we readily modify the mount units, because all of our NFS mounts are done through shell scripts by directly calling mount; they don't exist in /etc/fstab or as actual .mount units.
[*] NFS: Network File System
[**] OOM: Out of memory.
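In the absence of a systemd knob, one hedged workaround sketch is to drain the NFS mounts in bounded batches before systemd gets to them; the paths and batch size are illustrative, and this is not part of systemd or of Siebenmann's actual setup.

    import subprocess

    BATCH = 16  # cap on concurrent umount.nfs processes

    def nfs_mountpoints():
        """List mounted NFS filesystems from /proc/mounts."""
        with open("/proc/mounts") as f:
            return [fields[1] for fields in (line.split() for line in f)
                    if fields[2].startswith("nfs")]

    mounts = nfs_mountpoints()
    for i in range(0, len(mounts), BATCH):
        procs = [subprocess.Popen(["umount", mp]) for mp in mounts[i:i + BATCH]]
        for p in procs:   # wait for this batch to finish before the next,
            p.wait()      # bounding both process count and memory use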
We've been here before and there is certainly more where that came from.
Previously:
(2020) Linux Home Directory Management is About to Undergo Major Change
(2019) System Down: A systemd-journald Exploit
(2017) Savaged by Systemd
(2017) Linux systemd Gives Root Privileges to Invalid Usernames
(2016) Systemd Crashing Bug
(2016) SystemD Mounts EFI pseudo-fs RW, Facilitates Permanently Bricking Laptops, Closes Bug Invalid
(2015) tmux Coders Asked to Add Special Code for systemd
(2015) A Technical Critique of Systemd
(2014) Devuan Developers Can Be Reached Via vua@debianfork.org
(2014) Systemd-resolved Subject to Cache Poisoning
First off, on behalf of myself and the staff here at SoylentNews, here's wishing all the Moms out there a Happy Mother's Day! (For all mine did for me, I think it should last at least a month!) [Update: apparently this is for the USA; other countries have other dates. The sentiment, however, remains the same!]
Also, I hereby express the best possible wishes for our entire community as we try to navigate a path through the COVID-19 pandemic. Take the precautions you deem necessary to protect yourself, your loved ones, and all you meet. Please be careful out there!
Should you, or someone you know, be suffering at this time — be it from COVID-19 or any other reason — I can attest to the support I received from the community when I had a health-related situation last fall. You guys (and gals!) are the best!
Folding@Home: Our Folding@Home (F@H) team keeps chugging along! Historically, the F@H effort had been geared towards understanding Parkinson's disease, Huntington's disease, cancer and the like. People donate their unused processing power (CPUs and video cards) to perform simulations of how proteins fold. This, in turn, helps locate ways to interfere with the progression of a disease. For the past few months, the focus has shifted to the SARS-CoV-2 virus. In concert with that, there has been a huge increase in hardware donated to the cause. So, though our team's rank has recently been slipping in the overall standings, I'm happy to report it's because of the huge outpouring of support from around the world being brought to the cause.
Top place on our team is held by cmn3280 with just over 300 million points. Next we have LTKKane who just passed 222 million points. And not to be outdone, Runaway1956 has been running hard and is on the cusp of reaching 200 million points (and adding about 1.5 million points per day!) Pop into the #folding channel on IRC (Internet Relay Chat) or reply to this story if you'd like to join our team!
Read past the fold for info about the "Silly Season", subscriptions, site server issues, and plans.
Silly Season: It came a bit earlier than usual, but we are well into the "Silly Season". It's that time of the year when places of higher education close for the summer (in the northern hemisphere) and people's minds turn to summer vacation. Research reports are few and far between, so many publications turn to lighter fare for lack of anything better to publish. Compounding this, the COVID-19 pandemic means what research still continues tends to be slowed down by safety precautions.
What that means is we have less of a selection to choose from in trying to bring stories to the community for discussion. See something tech-related on the web that you think the community might be interested in? Please submit it to SoylentNews! It does not have to be the next "Great American Novel", either! Of course, the more "publication ready" you make it, the easier it is for an editor to decide to run with it; on the other hand, if the topic is interesting, chances are an editor will see it and pick it up anyway. If you have any questions, it's helpful to consult the Story Submission Guidelines. Also, a quick scan of stories that have recently been posted to the site should provide guidance as to story organization and layout. Lastly, we appreciate side comments within the story submission; for example: "Doesn't contain the links from the story." Sensational, spittle-spewing spouting soon sees silence. Try to stick with the basics of answering "who", "why", "what", "where", "when", and "how" and you'll be on your way!
Subscriptions: Thanks to a generous first-time subscription of $120.00, we just passed $2,100.00 towards our goal of $3,500.00 for the first half of the year (2020-01-01 through 2020-06-30). Thank you to all who have subscribed and helped pay for things like servers, taxes, and an accountant to prepare the taxes.
As you may recall, we made an announcement on April 19 concerning site subscriptions. Yes, coming down with the SARS-CoV-2 virus is bad. But so is having so many people out of work and trying to make ends meet. We wanted to support people spending their money locally to support their community and the economy. Therefore, anyone who had a subscription that would otherwise have ended earlier was granted a free extension through 2020-05-30.
That said, we do still have bills to pay. If you are of a mind to do so and can afford it, we are still accepting subscriptions! Do bear in mind that JavaScript needs to be enabled. We do not process the transactions directly, but instead invoke the API (Application Programming Interface) form provided by PayPal or Stripe. They, in turn, handle processing the payment and then deposit the payment (less processing fees) to our account.
Reminder: the indicated amount (e.g. $20.00 for one year) is a minimum for that duration. So, you can absolutely select a one year subscription and change the amount to, say, $100.00 from the $20.00 that was suggested. (For the record, the largest subscription to date was an extremely generous $1,000.00!) On the other hand, we have two Soylentils who subscribe for $4.00 — every month — like clockwork. It warms my heart every time I see their subscriptions arrive! It's one of the things I love about this community: everybody contributes in their own way and somehow it all comes together. And it has held together since February of 2014! Thanks everybody!
Servers, Part 1: Behind the scenes, TheMightyBuzzard spent the weekend setting up a new server, aluminum. We are gradually moving to a Gentoo Linux base for our servers. Rather than pre-compiled binaries that get downloaded and run locally, Gentoo provides source code for download that one compiles and builds locally. At the moment we have three Gentoo-based servers (lithium, magnesium, and aluminum), one server on CentOS (beryllium), and the rest are on Ubuntu. By moving to Gentoo Linux, we get a streamlined server with a smaller attack surface, as only the things we need are built into the kernel. That lone CentOS server? It has been with us from the start and has been no end of hassle. Several services "live" on it and these need to be migrated before we can retire it. The first stage of that process is underway, as Deucalion has been working on bringing up IRC on aluminum. In turn, other services will be brought over. Then we can (finally!) retire beryllium for good! Next on the list are sodium and boron (we're aiming to have them completed by June). Along with that, there have been new (security and otherwise) releases of other services that the site depends on. We intend to get those upgraded as we move to an entirely Gentoo platform. Please join me in wishing them well on the migrations and upgrades!
Servers, Part 2: We had a hiccup with Linode (our server provider) on Friday. Through it all, our servers stayed up and running! Unfortunately, the problem was with one or more network switches at Linode. (Cf: Bert & I on YouTube 😀) The front end (which processes requests for web pages) as well as IRC (and possibly other things of which I am unaware) were inaccessible for the better part of an hour. Given how frequently SoylentNews used to crash (several times each *day*), it is a testament to the hard work put in at the outset that this is such a rarity for us today. Our servers currently have uptimes in the range of 6-9 months... and it would be longer except for some behind-the-scenes work to take advantage of free storage upgrades made available to us by Linode. Remember all work on the site is performed by volunteers who give of their limited free time to keep things humming here.
Summary: Our Folding@Home team is helping to find a cure for COVID-19. Please send in story submissions. We are still accepting subscriptions. Our servers were NOT "Pining for the fjords". Server upgrades are in progress.