

posted by janrinok on Tuesday March 14 2023, @10:20PM

To check that atomic weapons work, scientists run simulations of explosions using high-energy lasers—and Russia is building the strongest one of all:

In the town of Sarov, roughly 350 kilometers east of Moscow, scientists are busy working on a project to help keep Russia's nuclear weapons operational long into the future. Inside a huge facility, 10 storeys high and covering the area of two football fields, they are building what's officially known as UFL-2M—or, as the Russian media has dubbed it, the "Tsar Laser." If completed, it will be the highest-energy laser in the world.

High-energy lasers can concentrate energy on groups of atoms, increasing temperature and pressure to start nuclear reactions. Scientists can use them to simulate what happens when a nuclear warhead detonates. By creating explosions in small samples of material—either research samples or tiny amounts from existing nuclear weapons—scientists can then calculate how a full-blown bomb is likely to perform. With an old warhead, they can check that it still works as intended. Laser experiments allow this testing without setting off a nuke. "It's a substantial investment by the Russians in their nuclear weapons," says Jeffrey Lewis, a nuclear non-proliferation researcher at the Middlebury Institute of International Studies in California.

Until now, Russia has been unique among the best-established nuclear powers in not having a high-energy laser. The United States has its National Ignition Facility (NIF), currently the world's most energetic laser system. Its 192 separate beams combine to deliver 1.8 megajoules of energy. Looked at in one way, a megajoule is not an enormous amount—it's equivalent to 240 food calories, similar to a light meal. But concentrating this energy onto a tiny area can create very high temperatures and pressures. France meanwhile has its Laser Mégajoule, with 80 beams currently delivering 350 kilojoules, though it aims to have 176 beams delivering 1.3 megajoules by 2026. The UK's Orion laser produces 5 kilojoules of energy; China's SG-III laser, 180 kilojoules.

If completed, the Tsar Laser will surpass them all. Like the NIF, it's due to have 192 beams, but with a higher combined output of 2.8 megajoules. Currently, though, only its first stage has launched. At a Russian Academy of Sciences meeting in December 2022, an official revealed that the laser boasts 64 beams in its current state. Their total output is 128 kilojoules, roughly 4.6 percent of the planned final capability. The next step would be testing them, the official said.
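
As a quick sanity check on those numbers (a sketch using only the figures quoted above, with rounded physical constants):

    # One megajoule expressed in food calories, and stage one of the
    # Tsar Laser as a share of its planned 2.8 MJ output.
    MJ = 1_000_000.0      # joules per megajoule
    KCAL = 4_184.0        # joules per food calorie (kcal)

    print(MJ / KCAL)                    # ~239, i.e. roughly 240 food calories
    print(100 * 128_000 / (2.8 * MJ))   # ~4.6% of the planned final capability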

[...] In experiments, these lasers blast their target materials into a high-energy state of matter known as plasma. In gases, solids, and liquids, electrons are usually locked tight to their atoms' nuclei, but in plasma they roam freely. The plasmas throw out electromagnetic radiation, such as flashes of light and x-rays, and particles like electrons and neutrons. The lasers therefore also need detection equipment that can record when and where these events happen. These measurements then allow scientists to extrapolate how a full warhead might behave.

[...] Researchers have used lasers in nuclear weapons testing since at least the 1970s. At first they combined them with underground tests of actual weapons, using data from both to build theoretical models of how plasma behaves. But after the US stopped live-testing nuclear weapons in 1992 while seeking agreement on the Comprehensive Nuclear-Test-Ban Treaty, it switched to "science-based stockpile stewardship"—namely, using supercomputer simulations of warheads detonating to assess their safety and reliability.

But the US and other countries following this approach still needed to physically test some nuclear materials, with lasers, to ensure their models and simulations matched reality and that their nukes were holding up. And they still need to do this today.

[...] But Tikhonchuk [emeritus professor at the Center for Intense Lasers and Applications at the University of Bordeaux, France] believes that Russia will struggle now because it has lost much of the expertise needed, with scientists moving overseas. He notes that the Tsar Laser's beam arrays are very large, at 40 centimeters across, which poses a significant challenge for making their lenses. The larger the lens, the greater the chance there will be a defect in it. Defects can concentrate energy, heating up and damaging or destroying the lenses.

The fact that Russia is developing the Tsar Laser indicates it wants to maintain its nuclear stockpile, says Lewis. "It's a sign that they plan for these things to be around for a long time, which is not great." But if the laser is completed, he sees a sliver of hope in Russia's move. "I'm quite worried that the US, Russia, and China are going to resume explosive testing." The Tsar Laser investment might instead show that Russia thinks it already has enough data from explosive nuclear tests, he says.


Original Submission

posted by janrinok on Tuesday March 14 2023, @07:34PM

Resulting in the birth of several mice that were produced without mothers:

Same-sex reproduction has historically required donor cells, as is the case with egg implantation and some instances of in-vitro fertilization (IVF). Thanks to genetic engineering, however, this might not always be the case. Scientists in Japan have successfully created eggs using male cells, resulting in the birth of several mice that were produced without mothers.

Renowned Kyushu University stem cell researcher Katsuhiko Hayashi presented his team's achievement this week at the Third International Summit on Human Genome Editing in London. Hayashi had led his colleagues through "reprogramming" a male mouse's skin cells into induced pluripotent stem (iPS) cells, or former non-reproductive cells that can be engineered into various cell forms. Because male cells contain the XY chromosome combination, Hayashi had to remove the Y chromosome and replace it with an X chromosome from another cell. (Hayashi's team attempted to devise a way to duplicate the first cell's X chromosome but was unsuccessful, resulting in the need to pull from a donor.)

Hayashi implanted the makeshift eggs inside a mouse ovary organoid, a ball of tissues that function similarly to a natural ovary. After fertilizing the eggs with sperm, his team implanted the resulting 600 embryos into surrogate mice. Seven of these embryos became mouse pups, which grew into adults with normal lifespans and successful mating routines.

Should Hayashi and his colleagues successfully produce human eggs in the lab, it could pave the way for novel infertility treatments and for same-sex procreation that incorporates both partners' genes.


Original Submission

posted by janrinok on Tuesday March 14 2023, @04:53PM

And scientists have only seen four percent of the data so far:

A project to map the earliest structures of the universe has found 15,000 more galaxies in its first snapshot than captured in an entire deep field survey conducted 20 years ago.

The James Webb Space Telescope, the new preeminent observatory in the sky, saw about 25,000 galaxies in that single image, dramatically surpassing the nearly 10,000 shown in the Hubble Space Telescope's Ultra Deep Field Survey. Scientists say that little piece of the space pie represents just four percent of the data they'll discover from the new Webb survey by the time it's completed next year.

"When it is finished, this deep field will be astoundingly large and overwhelmingly beautiful," said Caitlin Casey, a University of Texas at Austin astronomer co-leading the investigation, in a statement.

[...] A deep field image is much like drilling deep into Earth to take a core sample: It's a narrow but distant view of the cosmos, revealing layers of history by cutting across billions of light-years. In Hubble's deep field, the oldest visible galaxies dated back to the first 800 million years after the Big Bang. That's an incredibly early period relative to the universe's estimated age of 13.8 billion-with-a-B years.

[...] Four different types of galaxies were observed through the COSMOS-Web deep field survey. The COSMOS-Web survey will map 0.6 square degrees of the sky—about the area of three full moons.
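
That full-moon comparison checks out with simple geometry (a sketch; the ~0.52-degree average angular diameter of the Moon is an assumption from general astronomy, not a figure from the article):

    import math

    # Area of one full moon on the sky, treated as a disk ~0.52 degrees across.
    moon_area = math.pi * (0.52 / 2) ** 2   # ~0.21 square degrees
    print(0.6 / moon_area)                  # ~2.8, i.e. about three full moons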

The first images from COSMOS-Web, the largest program in Webb's first year, show a rich variety of structures, teeming with spiral galaxies, gravitational lensing, and galaxy mergers. Furthermore, hundreds of galaxies that were previously identified by Hubble are getting reclassified with different characteristics after being shown in more detail with Webb.


Original Submission

posted by janrinok on Tuesday March 14 2023, @02:12PM
from the more-data-for-spreadsheet-nerds dept.

SSD Reliability is Only Slightly Better Than HDD, Backblaze Says

A surprising outcome for the first SSD-based AFR report:

Backblaze is a California-based company dealing with cloud storage and data backup services. Every year, the organization provides some interesting reliability data about the large fleet of storage units employed in its five data centers around the world.

For the first time, Backblaze's latest report on storage drive reliability is focusing on Solid State Drives (SSD) rather than HDD units alone. The company started using SSDs in the fourth quarter of 2018, employing the NAND Flash-based units as boot drives rather than data-storing drives. Backblaze uses consumer-grade drives, providing Annualized Failure Rate (AFR) information about 13 different models from five different manufacturers.

The 2022 Drive Stats review is based on data recorded from 2,906 SSD boot units, Backblaze states, and it essentially confirms what the company was saying in its 2022 mid-year report. SSDs are more reliable than HDDs, Backblaze says, as they show a lower AFR (0.98%) compared to HDDs (1.64%).

The fact that the difference in reliability isn't exactly staggering (a gap of just 0.66 percentage points in AFR) is rather surprising, however, as SSDs essentially just move electrons through memory chips while hard drives have to deal with a complex (and failure-prone) mechanism employing spinning platters and extremely sensitive read/write magnetic heads.

The reasons behind failing drives aren't known, as only an SSD manufacturer would have the equipment needed to make a reliable diagnosis. For 2022, Backblaze says that seven of the 13 drive models had no failures at all. Six of those seven models had a limited number of "drive days" (fewer than 10,000), the company concedes, meaning that there is not enough data to make a reliable projection about their failure rates.
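
Backblaze's AFR is a simple ratio of failures to accumulated drive days, annualized. A minimal sketch of the calculation, which also shows why fewer than 10,000 drive days means little (the drive-day counts below are illustrative, not Backblaze's exact totals):

    # AFR (%) = failures / drive_days * 365 * 100
    def afr(failures: int, drive_days: int) -> float:
        return failures / drive_days * 365 * 100

    print(afr(25, 930_000))   # ~0.98% -- about the fleet-wide SSD figure
    print(afr(0, 9_000))      # 0% with under 10,000 drive days...
    print(afr(1, 9_000))      # ...but a single failure would push AFR past 4%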

An interesting tidbit about Backblaze's report is that the company hasn't used a single SSD unit made by Samsung, which is a major player in the SSD consumer market. One possible explanation is that Samsung drives aren't cheap, and Backblaze is essentially using the cheapest drives they can buy in bulk quantities.

The SSD Edition: 2022 Drive Stats Review:

Welcome to the 2022 SSD Edition of the Backblaze Drive Stats series. The SSD Edition focuses on the solid state drives (SSDs) we use as boot drives for the data storage servers in our cloud storage platform. This is opposed to our traditional Drive Stats reports which focus on our hard disk drives (HDDs) used to store customer data.

We started using SSDs as boot drives beginning in Q4 of 2018. Since that time, all new storage servers and any with failed HDD boot drives have had SSDs installed. Boot drives in our environment do much more than boot the storage servers. Each day they also read, write, and delete log files and temporary files produced by the storage server itself. The workload is similar across all the SSDs included in this report.

In this report, we look at the failure rates of the SSDs that we use in our storage servers for 2022, for the last 3 years, and for the lifetime of the SSDs. In addition, we take our first look at the temperature of our SSDs for 2022, and we compare SSD and HDD temperatures to see if SSDs really do run cooler.

As of December 31, 2022, there were 2,906 SSDs being used as boot drives in our storage servers. There were 13 different models in use, most of which are considered consumer grade SSDs, and we'll touch on why we use consumer grade SSDs a little later. In this report, we'll show the Annualized Failure Rate (AFR) for these drive models over various periods of time, making observations and providing caveats to help interpret the data presented.

The dataset on which this report is based is available for download on our Drive Stats Test Data webpage. The SSD data is combined with the HDD data in the same files. Unfortunately, the data itself does not distinguish between SSD and HDD drive types, so you have to use the model field to make that distinction. If you are just looking for SSD data, start with Q4 2018 and go forward.
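
A minimal sketch of that filtering step with pandas (the core columns—date, serial_number, model, capacity_bytes, failure—follow the published Drive Stats schema; the file name and model list here are placeholders, so build your own list from the report):

    import pandas as pd

    df = pd.read_csv("2022-12-31.csv")    # one day's Drive Stats snapshot
    ssd_models = {"SEAGATE ZA250CM10003", "DELLBOSS VD"}   # examples only
    ssds = df[df["model"].isin(ssd_models)]   # separate SSDs by model field
    print(len(ssds), "SSD rows,", int(ssds["failure"].sum()), "failures recorded")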

Click on the link to get the actual figures.


Original Submission #1 | Original Submission #2

posted by janrinok on Tuesday March 14 2023, @11:26AM
from the lost-in-a-crowd dept.

Our brain has its own GPS and it helps us navigate by detecting the movements of the people around us:

Whether you are making your way through a crowded pedestrian zone or striving towards the goal in a team game, in both situations it is important to think not only about your own movements but also those of others. These navigation and orientation processes are carried out by brain cells that register our current position, where we are coming from, where we are moving towards and in which direction we are looking. Through their joint activity, they create a "map" of our surroundings. A special type of these cells is the so-called grid cell, found in the entorhinal cortex, a small brain region in the medial temporal lobe. Grid cells function like the brain's own GPS, because they not only represent our position in space, but can also put it in relation to other points in the same space.

[...] They found that the brain activity recorded while watching others was comparable to the activity of grid cells. In addition, the team was able to show that this activity was part of a larger network of brain regions that are associated with navigation processes. Interestingly, however, it turned out that the better a subject was at following the path of others, the less active this network was. "We interpret this as greater efficiency of the grid cells, which might make it less necessary to engage the larger brain network," Wagner explains.

The results of the study thus suggest that grid cells belong to a larger network of brain regions that, among other aspects, coordinates navigation processes. However, this network is particularly affected by ageing processes and especially by dementia. Wagner explains: "The function of grid cells decreases with age and dementia. As a result, people can no longer find their way around and their orientation is impaired." The group's further research is now dedicated to the question of whether grid cells are also involved in recognising other people—an aspect that is often impaired in advanced dementia.

Journal Reference:
Wagner, I.C., Graichen, L.P., Todorova, B. et al. Entorhinal grid-like codes and time-locked network dynamics track others navigating through space. Nat Commun 14, 231 (2023). https://doi.org/10.1038/s41467-023-35819-3


Original Submission

posted by hubie on Tuesday March 14 2023, @08:42AM

Wildfire Smoke Eroded Ozone Layer By 10 Percent In 2020: Study:

The havoc wreaked by wildfires isn't just on the ground. Researchers at MIT have found that wildfire smoke particles actively erode Earth's protective ozone layer, thus widening the gap we've been spending the last decade trying to close.

When something burns and produces smoke, those smoke particles—otherwise called wildfire aerosol—can drift into the stratosphere, where they hang out for a year or more. According to a study published Wednesday in the journal Nature, chemists and atmospheric scientists have found that suspended wildfire aerosol sparks chemical reactions that ultimately degrade the ozone layer, or the thin atmospheric layer responsible for shielding Earth from the Sun.

The newly-discovered chemical reaction increases hydrochloric acid's solubility. While hydrochloric acid is already present in the atmosphere, MIT found that larger hydrochloric acid quantities activate chlorine in the air and increase ozone loss rates when warmer temperatures strike. This spells danger for the storied hole in the ozone layer, which environmental activists, scientists, and policymakers have been fighting to shrink for several years.

[...] Thankfully, recent attempts to mitigate damage to the ozone layer have been quite successful. International treaties like the Montreal Protocol have helped phase out the use of ozone-depleting pollutants. The world's gradual adoption of electric vehicles might have also helped. The US National Oceanic and Atmospheric Administration even found that the Antarctic ozone hole was slightly smaller in 2022 than in 2021 and far smaller than in 2006 when its size peaked. That said, it's difficult to know right now whether these efforts are enough to compensate for the ozone damage caused by wildfire smoke.

Journal Reference:
Solomon, S., Stone, K., Yu, P. et al. Chlorine activation and enhanced ozone depletion induced by wildfire aerosol. Nature 615, 259–264 (2023). https://doi.org/10.1038/s41586-022-05683-0


Original Submission

posted by hubie on Tuesday March 14 2023, @05:54AM
from the acceleration-of-de-stocking-trends dept.

IQE says collapse in smartphone sales may wipe one-third off revenue in first half of 2023:

Plunging demand for semiconductors is taking an obvious toll on the chip sector, and Brit compound semiconductor wafer maker IQE is warning of a serious dent in sales.

In a trading update to investors, the London Stock Exchange listed business said it had seen an acceleration of de-stocking trends across the tech industry, "with weaker demand leading to inventory build-up throughout the supply chain."

"This reduction in customer orders and forecasts is expected to result in a decline of approximately £30 million in reported revenues for H1 2023," the Cardiff-based wafer manufacturer said.

It noted the patterns monitored by the Semiconductor Industry Association, which reported an 18.5 percent tumble in shipments during calendar Q1.

[...] IQE makes wafers used for radio frequency and photonics applications in several smartphones, and was a major supplier to multiple chip companies that supplied Huawei, before the US government intervened to destroy Huawei's handset business. It is widening its portfolio to also include power electronics and micro-LED tech used in VR headsets.

[...] The smartphone industry has shrunk for the past two years and great things aren't expected this year either. Likewise, the PC industry has also stumbled and is expected to do little more than bumble along in 2023. As such, the sales bonanza that chipmakers enjoyed in recent years is over for now.


Original Submission

posted by hubie on Tuesday March 14 2023, @03:06AM

The Register has a story about a Python compiler called Codon that turns Python code into native machine code without a runtime performance hit:

Python is one of the most popular programming languages, yet it's generally not the first choice when speed is required.

"Typical speedups over Python are on the order of 10-100x or more, on a single thread," the Codon repo declares. "Codon's performance is typically on par with (and sometimes better than) that of C/C++."

"Unlike other performance-oriented Python implementations (such as PyPy or Numba), Codon is built from the ground up as a standalone system that compiles ahead-of-time to a static executable and is not tied to an existing Python runtime (e.g., CPython or RPython) for execution," the paper says. "As a result, Codon can achieve better performance and overcome runtime-specific issues such as the global interpreter lock."

C++ Weekly - Ep 366 - C++ vs Compiled Python (Codon) benchmarks the same algorithm in Codon-compiled Python, which takes 8.4 seconds, and in C++, which takes 0.09 seconds. The video also points out the following:

We need Python code that works with Codon. It takes some porting: we have to give types. It is a lot like C++ in this regard.
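
In practice that porting can be as small as adding annotations. A minimal sketch (ordinary typed Python that CPython also runs; the build command follows the Codon documentation, so verify it against your installed version):

    # fib.py -- explicit types are the kind of change a Codon port requires.
    def fib(n: int) -> int:
        a, b = 0, 1
        for _ in range(n):
            a, b = b, a + b
        return a

    print(fib(40))

    # Ahead-of-time compile to a native executable (per the Codon docs):
    #   codon build -release -exe fib.py && ./fib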


Original Submission

posted by hubie on Tuesday March 14 2023, @12:16AM

Millions of pieces of debris orbit the Earth, prompting scientists to call for a legally binding treaty to address our planet's mounting orbital trash problem:

What goes up must come down, and that includes all of the satellites, rocket stages, and junk that humans have launched into space. A group of scientists is sounding the alarm about how that growing cloud of debris orbiting Earth may cause us trouble in the future, and are championing a global approach to governing Earth's orbit.

In a letter published in Science today, the team of researchers says that there are 9,000 satellites currently in orbit, a number projected to rise to 60,000 by 2030. All of these satellites are sources of orbital debris, whether the spacecraft themselves become junk when they are decommissioned or become involved in an in-orbit crash resulting in a cascade of debris that will circle the planet.

Regardless, this group of researchers points to this boom in the space economy as a problem for the future of space safety and is calling for a legally binding treaty to enforce the sustainability of Earth's orbit—much the way 190 nations just vowed to protect the global oceans.

[...] Until a global initiative to rein in the issue of space debris is achieved, some space agencies are taking steps to tackle the problem. Last year, NASA announced it would be funding three projects from various universities to better understand orbital debris and sustainability in space. Likewise, ESA has approved ClearSpace's giant claw, which will grab onto junk in orbit and send it into Earth's atmosphere to burn up, taking care of pre-existing space debris. Meanwhile, the Drag Augmentation Deorbiting System, a 38-square-foot (3.5-square-meter) sail that increases a satellite's surface drag, could be a way to retire yet-to-be-launched satellites at the end of their lives.


Original Submission

posted by hubie on Monday March 13 2023, @09:35PM
from the please-don't-sue-us dept.

Last month, Volkswagen garnered plenty of bad publicity when it emerged that the company's connected car service refused to help track a stolen car—with a 2-year-old child still on board—until someone paid to reactivate the service. Now, the automaker says it's very sorry this happened, and it's making its connected vehicle emergency service free to most model-year 2020-2023 Volkswagens.
[...]
Most MY2020 or newer VWs are able to use connected services, apart from MY2020 Passats.

Some additional story details for the click-averse:

As Lake County deputies desperately tried to find a stolen Volkswagen with a toddler still inside, they reached out to Car-Net, a service that lets VW owners track their vehicles.

But the Car-Net trial period had ended, and a representative wanted $150 to restart the service and locate the SUV.

The detective pleaded, explaining the "extremely exigent circumstance," but the representative didn't budge, saying it was company policy, sheriff's office Deputy Chief Christopher Covelli said Friday.

"The detective had to work out getting a credit card number and then call the representative back to pay the $150 and at that time the representative provided the GPS location of the vehicle," Covelli said.


Original Submission

posted by janrinok on Monday March 13 2023, @06:53PM
from the silence-is-not-so-golden dept.

A new study suggests that too much – or too little – office noise has a negative effect on employee well-being. The sweet spot? About 50 decibels, comparable to moderate rain or birdsong.

Choosing to work in the murmur of a busy coffee shop rather than in an office with library-level silence might be healthier, according to a new study by researchers at the University of Arizona and University of Kansas.

The study finds – perhaps unsurprisingly – that loud noises at the office have a negative impact on employee well-being. But the study also suggests that complete silence is not conducive to a healthy workplace.

[...] "Everybody knows that loud noise is stressful, and, in fact, extremely loud noise is harmful to your ear," said study co-author Esther Sternberg, director of the UArizona Institute on Place, Wellbeing & Performance. "But what was new about this is that with even low levels of sound – less than 50 decibels – the stress response is higher."

[...] Humans' tendency to get distracted, Sternberg said, is a result of the brain's stress response to potential threats. Our brains are "difference detectors" that take note of sudden changes in sounds so we can decide to fight or flee, she said.

That may explain why low, steady sounds help mask distractions in the workplace, she added.

"People are always working in coffee shops – those are not quiet spaces. But the reason you can concentrate there is because the sounds all merge to become background noise," Sternberg said. "It masks sound that might be distracting. If you hear a pin drop when it's very, very quiet, it will distract you from what you're doing."

Journal Reference:
Karthik Srinivasan, Faiz Currim, Casey M. Lindberg, et al. Discovery of associative patterns between workplace sound level and physiological wellbeing using wearable devices and empirical Bayes modeling [open], npj Digital Medicine 6, 5 (2023). https://doi.org/10.1038/s41746-022-00727-1


Original Submission

posted by janrinok on Monday March 13 2023, @04:04PM
from the hard-to-kill-these-serials dept.

Why Do Some Modern Computers Still Have Serial Ports?:

While the parallel port is now safely buried in the grave of obsolescence, it may seem odd that the humble, slow serial port is still around. But as it turns out, bit-by-bit, this humble communications port has become essential.

[...] Serial ports are slow, with the standard speed at the high end of the range coming in at a pedestrian 115.2 kbps. At that speed, it would take you almost a day to transfer 1GB of data! That's under ideal circumstances, and things can be much, much slower than that.
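
That "almost a day" figure is easy to verify, remembering that classic serial framing (8N1: one start bit, eight data bits, one stop bit) costs 10 bits per byte:

    bits = 1_000_000_000 * 10     # 1 GB of payload with 8N1 framing
    print(bits / 115_200 / 3600)  # ~24.1 hours, before any protocol overhead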

If we have USB, and serial ports are so slow and comparatively bulky, why the heck do some computers still have them? There are a few reasons, but the most important ones include:

  • Lots, and lots, of industrial and scientific equipment are still in service and use serial ports to interface.
  • It's simple, reliable, well understood, and much cheaper to implement than other more modern port types.
  • Hobbyists have uses, such as programming microcontrollers.

Do you still use the serial port, or do you depend on equipment that does? I have noticed that it is still widely used in medical equipment but are there other fields in which the serial port is the standard interface?


Original Submission

posted by janrinok on Monday March 13 2023, @01:19PM

Last week, Denmark stored its first volumes of carbon dioxide in an old oil and gas field in the Danish North Sea. The carbon dioxide sequestered comes from a chemical production plant (Ineos Oxide) in the Port of Antwerp, Belgium.

Since 2010, Ineos Oxide has captured CO2 as a by-product from its ethylene oxide (plastics) production, cooled it down to a liquid, and resold the product to the food (fizzy drinks, beer) and agricultural (greenhouse cultivation) industries. Now, instead, part of this production is being transported to Nini, a previously abandoned oil platform about 200 km off the Danish coast, and injected 1,800 meters deep.

The test project, named Greensand, needs to prove that the process is possible and safe. The modified transport vessel used, Aurora Storm, can only take 800 tonnes of CO2 per trip; it will have to shuttle back and forth between Antwerp and Denmark about 20 times, enough for 15,000 tonnes, this year alone. The project will be scaled up to 1.5 million tonnes a year by 2025.
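
The shuttle arithmetic, using only the figures quoted above:

    print(15_000 / 800)   # ~18.75, hence "about 20" round trips this year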

By 2030, 8 million tonnes a year are planned, or about half the carbon dioxide emitted by Antwerp's chemical cluster, the largest in Europe. This, however, requires investments in new offshore infrastructure and larger transport ships known as CO2 carriers.

The Greensand project is trailing another project, though: Northern Lights, which aims to be able to store 1.5 million tonnes a year by next year, 2024. Northern Lights is a partnership between Shell, Equinor, and Total, supported by the Norwegian government's Langskip (Longship) CCS project.

The EU has set a target of capturing and storing a minimum of 300 million tonnes of CO2 a year by 2050.


Original Submission

posted by janrinok on Monday March 13 2023, @10:39AM

On March 13, we will officially begin rolling out our initiative to require all developers who contribute code on GitHub.com to enable one or more forms of two-factor authentication (2FA) by the end of 2023.

GitHub is central to the software supply chain, and securing the software supply chain starts with the developer. Our 2FA initiative is part of a platform-wide effort to secure software development by improving account security. Developers' accounts are frequent targets for social engineering and account takeover (ATO). Protecting developers and consumers of the open source ecosystem from these types of attacks is the first and most critical step toward securing the supply chain.

[...] If your account is selected for enrollment, you will be notified via email and see a banner on GitHub.com, asking you to enroll. You'll have 45 days to configure 2FA on your account—before that date nothing will change about using GitHub except for the reminders. We'll let you know when your enablement deadline is getting close, and once it has passed you will be required to enable 2FA the first time you access GitHub.com. You'll have the ability to snooze this notification for up to a week, but after that your ability to access your account will be limited.

So, what if you're not in an early enrollment group but you want to get started? Click here and follow a few easy steps to enroll in 2FA.

[...] You can choose between TOTP, SMS, security keys, or GitHub Mobile as your preferred 2FA method.
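
Of those methods, TOTP is the most portable, and it is small enough to sketch with just the Python standard library (RFC 6238 over RFC 4226; the secret below is a made-up example, not a real one):

    import base64, hmac, struct, time

    def totp(secret_b32: str, period: int = 30, digits: int = 6) -> str:
        # HMAC-SHA1 over the 30-second time counter...
        key = base64.b32decode(secret_b32, casefold=True)
        counter = struct.pack(">Q", int(time.time()) // period)
        digest = hmac.new(key, counter, "sha1").digest()
        # ...then dynamic truncation to a 6-digit code, which is what an
        # authenticator app displays and GitHub verifies at login.
        offset = digest[-1] & 0x0F
        code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
        return str(code % 10 ** digits).zfill(digits)

    print(totp("JBSWY3DPEHPK3PXP"))   # example secret only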

Recent GitHub security incidents:
GitHub says hackers cloned code-signing certificates in breached repository (1/30/2023)
Slack's private GitHub code repositories stolen over holidays (1/5/2023)
Okta's source code stolen after GitHub repositories hacked (12/21/2022)


Original Submission

posted by janrinok on Monday March 13 2023, @07:52AM
from the try-it-you'll-like-it dept.

A new measure can help scientists decide which estimation method to use when modeling a particular data problem:

If a scientist wanted to forecast ocean currents to understand how pollution travels after an oil spill, she could use a common approach that looks at currents traveling between 10 and 200 kilometers. Or, she could choose a newer model that also includes shorter currents. This might be more accurate, but it could also require learning new software or running new computational experiments. How can she know whether it will be worth the time, cost, and effort to use the new method?

A new approach developed by MIT researchers could help data scientists answer this question, whether they are looking at statistics on ocean currents, violent crime, children's reading ability, or any number of other types of datasets.

The team created a new measure, known as the "c-value," that helps users choose between techniques based on the chance that a new method is more accurate for a specific dataset. This measure answers the question "is it likely that the new method is more accurate for this data than the common approach?"

Traditionally, statisticians compare methods by averaging a method's accuracy across all possible datasets. But just because a new method is better for all datasets on average doesn't mean it will actually provide a better estimate using one particular dataset. Averages are not application-specific.

So, researchers from MIT and elsewhere created the c-value, which is a dataset-specific tool. A high c-value means it is unlikely a new method will be less accurate than the original method on a specific data problem.
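
"Better on average but worse on this dataset" is easy to see in a toy simulation (illustrative only; this is not the paper's c-value construction, just the phenomenon it addresses):

    import numpy as np

    rng = np.random.default_rng(0)
    theta = rng.normal(size=(1000, 10))        # 1,000 problems, 10 parameters each
    x = theta + rng.normal(size=theta.shape)   # one noisy observation per parameter

    loss_default = ((x - theta) ** 2).sum(axis=1)     # default: raw observations
    loss_new = ((0.8 * x - theta) ** 2).sum(axis=1)   # "new": shrink toward zero

    print(loss_new.mean() < loss_default.mean())      # True: new wins on average
    print((loss_new > loss_default).mean())           # yet it loses on some datasets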

[...] The c-value is designed to help with data problems in which researchers seek to estimate an unknown parameter using a dataset, such as estimating average student reading ability from a dataset of assessment results and student survey responses. A researcher has two estimation methods and must decide which to use for this particular problem.

[...] "In our case, we are assuming that you conservatively want to stay with the default estimator, and you only want to go to the new estimator if you feel very confident about it. With a high c-value, it's likely that the new estimate is more accurate. If you get a low c-value, you can't say anything conclusive. You might have actually done better, but you just don't know," Broderick explains.

The ultimate goal is to create a measure that is general enough for many more data analysis problems, and while there is still a lot of work to do to realize that objective, Broderick says this is an important and exciting first step in the right direction.

Journal Reference:
Brian L. Trippe, Sameer K. Deshpande, & Tamara Broderick, Confidently Comparing Estimates with the c-value [open], J. Am. Stat. Assoc., 2023. https://doi.org/10.1080/01621459.2022.2153688


Original Submission