
posted by janrinok on Tuesday June 25, @09:12PM   Printer-friendly
from the walled-garden dept.

https://arstechnica.com/tech-policy/2024/06/eu-says-apple-violated-app-developers-rights-could-be-fined-10-of-revenue/

The European Commission today said it found that Apple is violating the Digital Markets Act (DMA) with App Store rules and fees that "prevent app developers from freely steering consumers to alternative channels for offers and content." The commission "informed Apple of its preliminary view" that the company is violating the law, the regulator announced.

This starts a process in which Apple has the right to examine documents in the commission's investigation file and reply in writing to the findings. There is a March 2025 deadline for the commission to make a final ruling.

[...] Apple was further accused of charging excessive fees. The commission said that Apple is allowed to charge "a fee for facilitating via the App Store the initial acquisition of a new customer by developers," but "the fees charged by Apple go beyond what is strictly necessary for such remuneration. For example, Apple charges developers a fee for every purchase of digital goods or services a user makes within seven days after a link-out from the app."

Apple says it charges a commission of 27 percent on sales "to the user for digital goods or services on your website after a link out... provided that the sale was initiated within seven days and the digital goods or services can be used in an app."

[...] The commission today also announced it is starting a separate investigation into Apple's "contractual requirements for third-party app developers and app stores," including its "Core Technology Fee." Apple charges the Core Technology Fee for app installs, whether they are delivered from Apple's own App Store, from an alternative app marketplace, or from a developer's own website. The first million installs each year are free, but a per-install fee of €0.50 applies after that.
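For a rough sense of scale, that fee structure reduces to simple arithmetic. The sketch below is based only on the figures quoted here (the first million installs free, then €0.50 per install); actual billing terms may differ.

    def core_technology_fee(annual_installs: int,
                            fee_per_install: float = 0.50,
                            free_installs: int = 1_000_000) -> float:
        """Estimate the annual Core Technology Fee in euros, using only the
        figures quoted in the article: the first million installs each year
        are free, then EUR 0.50 per install."""
        return max(0, annual_installs - free_installs) * fee_per_install

    # Hypothetical example: an app with 3 million first annual installs
    print(core_technology_fee(3_000_000))   # 1000000.0 (EUR)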

The commission said it would investigate whether the Core Technology Fee complies with the DMA.


Original Submission

posted by janrinok on Tuesday June 25, @04:27PM   Printer-friendly

Climate models are numerical simulations of the climate system, used for predicting climate change from emissions scenarios and for many other applications. Let's take a closer look at how they work.

Why do we need climate models?

The climate system and its components like the atmosphere and hydrosphere are driven by many processes that interact with each other to produce the climate we observe. We understand some of these processes very well, such as many atmospheric circulations that drive the weather. Others, like the role of aerosols and deep ocean circulations, are more uncertain. Even when individual processes are well understood, the interaction between these processes makes the system more complex and produces emergent properties like feedbacks and tipping points. We can't rely on simple extrapolation to generate accurate predictions, which is why we need models to simulate the dynamics of the climate system. Global climate models simulate the entire planet at a coarse resolution. Regional climate models simulate smaller areas at a higher resolution, relying on global climate models for their initial and lateral boundary conditions (the edges of the domain).

How do climate models work?

A climate model is a combination of several components, each of which typically simulates one aspect of the climate system such as the atmosphere, hydrosphere, cryosphere, lithosphere, or biosphere. These components are coupled together, meaning that what happens in one component of the climate system affects all of the other components. The most advanced climate models are large software tools that use parallel computing to run across hundreds or thousands of processors. Climate models are a close cousin of the models we use for weather forecasting and even use a lot of the same source code.

The atmospheric component of the model, for example, has a fluid dynamics simulation at its core. The model numerically integrates a set of primitive equations such as the Navier-Stokes equations, the first law of thermodynamics, the continuity equation, the Clausius-Clapeyron equation, and the equation of state. Global climate models generally assume the atmosphere is in hydrostatic balance at all times, but that is not necessarily the case for regional models. Hydrostatic balance means that the force of gravity exactly balances the upward pressure gradient force, so that air never accelerates upward or downward, an assumption that breaks down in some situations, such as inside thunderstorms.
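For reference, the hydrostatic balance assumption mentioned above can be written as a single relation between pressure and height (standard meteorological notation, not taken from any particular model):

    ∂p/∂z = −ρg

where p is pressure, z is height, ρ is air density, and g is gravitational acceleration. When this balance holds, the upward pressure gradient force exactly cancels gravity and there is no net vertical acceleration; non-hydrostatic models retain the full vertical momentum equation so that strong updrafts, such as those inside thunderstorms, can be simulated explicitly.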

Not all atmospheric processes can be described by these equations; we also need to predict things like aerosols (particulates suspended in the atmosphere), radiation (incoming solar radiation and heat radiated upward), microphysics (e.g., cloud droplets, rain drops, ice crystals, etc.), and deep convection like thunderstorms (in models with coarse resolutions) to accurately simulate the atmosphere. These processes are instead parameterized to simulate their effects as accurately as possible in the absence of governing equations.
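As a toy illustration of what a parameterization looks like in practice, the hypothetical function below estimates cloud cover from a grid cell's relative humidity, standing in for cloud processes the model cannot resolve directly. The functional form, threshold, and exponent are invented for illustration and are not taken from any real model.

    def cloud_fraction(relative_humidity: float, rh_crit: float = 0.8) -> float:
        """Toy cloud-cover parameterization: no cloud below a critical
        relative humidity, then cloud fraction rises smoothly toward 1
        at saturation. The form and rh_crit value are illustrative."""
        if relative_humidity <= rh_crit:
            return 0.0
        return min(1.0, ((relative_humidity - rh_crit) / (1.0 - rh_crit)) ** 2)

    # Example: a grid cell at 90% relative humidity
    print(cloud_fraction(0.9))   # 0.25

Real schemes are far more elaborate, but the idea is the same: replace unresolved physics with a relationship between quantities the model does carry.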

The atmospheric simulations are generally more complex and run at a higher resolution in weather models than in climate models. However, weather models do not simulate the oceans, land surface, or the biosphere with the same level of complexity, because that detail is not necessary to produce accurate forecasts. For example, the deep oceans don't change enough on weather time scales to impact the forecast, but they do change in important ways on climate time scales. A weather model also probably isn't going to directly simulate how temperature and precipitation affect the type of vegetation growing in a particular location, or whether there's just bare soil. Instead, a weather model might have data sets for land use and land cover during the summer and winter, use the appropriate data depending on the time of year being simulated, and then use that information to estimate things like albedo and evapotranspiration.

The main difference between climate models and weather models is that weather models are solving an initial condition problem whereas climate modeling is a boundary condition problem. Weather is highly sensitive to the initial state of the atmosphere, meaning that small changes in the atmosphere at the current time might result in large differences a week from now. Climate models depend on factors that occur and are predictable on much longer time scales like greenhouse gas concentrations, land use and land cover, and the temperature and salinity of the deep ocean. Climate models are also not concerned with accurately predicting the weather at a specific point in time, only its statistical moments like the mean and standard deviation over a period of time. We intuitively understand that these statistical moments are predictable on far longer time scales, which is why you could confidently insist that I'm wrong if I claimed that there would be heavy snow in Miami, Florida on June 20, 2050.
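That sensitivity to initial conditions is easy to demonstrate with the Lorenz (1963) equations, a drastically simplified model of atmospheric convection. The sketch below (plain Python, crude forward-Euler integration chosen only for brevity) runs two copies of the system whose starting points differ by one part in a million and prints how far apart they drift:

    def lorenz_step(state, dt=0.01, sigma=10.0, rho=28.0, beta=8.0 / 3.0):
        """One forward-Euler step of the Lorenz 1963 system; crude, but
        adequate for illustrating how nearby trajectories diverge."""
        x, y, z = state
        dx = sigma * (y - x)
        dy = x * (rho - z) - y
        dz = x * y - beta * z
        return (x + dt * dx, y + dt * dy, z + dt * dz)

    a = (1.0, 1.0, 1.0)
    b = (1.000001, 1.0, 1.0)        # a tiny "observation error" in x
    for step in range(1, 2001):
        a, b = lorenz_step(a), lorenz_step(b)
        if step % 500 == 0:
            print(step, abs(a[0] - b[0]))   # the separation grows rapidly

The long-term statistics of the two runs are nearly identical even though their states end up completely different on any given "day", which is exactly the distinction between predicting weather and predicting climate.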

How and why are climate models coupled?

Information from the various components in the model needs to be communicated to the other components to get an accurate simulation. For example, precipitation affects the land surface by changing the soil moisture, which may also affect the biosphere. The albedo of the land surface affects air temperatures. Soil moisture also affects temperature, with arid areas typically getting warmer during the day and colder at night. If the precipitation is snow, the snow cover prevents heat from being conducted from the ground into the atmosphere, causing colder temperatures. Warm ocean temperatures are conducive to the formation of tropical cyclones, but the winds in a strong cyclone can churn up cooler water from below, which weakens the cyclone.

Both weather and climate models are coupled models, meaning that information is communicated between different components of the system to allow the model to simulate interactions like these and many others. Each component of the climate system (e.g., atmosphere, hydrosphere, lithosphere, etc...) is generally a separate software module that is run simultaneously with the other components and interfaces with them. If the components of weather and climate models weren't coupled together, we couldn't simulate many of the feedbacks and tipping points that arise from these interactions.
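The pattern is easier to see in miniature. The hypothetical two-component loop below (component names, constants, and units invented purely for illustration) shows the basic structure of a coupled model: each module advances its own state and then hands fluxes to the other every timestep.

    class ToyAtmosphere:
        def __init__(self):
            self.temperature = 15.0            # deg C, global mean air temperature

        def step(self, sea_surface_temp: float) -> float:
            """Relax the air temperature toward the ocean surface and return
            the heat flux (arbitrary units) handed to the ocean component."""
            heat_flux = 0.1 * (self.temperature - sea_surface_temp)
            self.temperature -= 0.05 * heat_flux
            return heat_flux

    class ToyOcean:
        def __init__(self):
            self.sst = 14.0                    # deg C, sea surface temperature

        def step(self, heat_flux: float) -> float:
            """Absorb the atmospheric heat flux (the ocean responds slowly)
            and return the updated sea surface temperature."""
            self.sst += 0.01 * heat_flux
            return self.sst

    atmosphere, ocean = ToyAtmosphere(), ToyOcean()
    sst = ocean.sst
    for day in range(365):
        flux = atmosphere.step(sst)            # atmosphere sees the ocean surface
        sst = ocean.step(flux)                 # ocean sees the atmospheric heat flux
    print(round(atmosphere.temperature, 3), round(ocean.sst, 3))

A real coupler does the same job at vastly larger scale, regridding and exchanging many fields between components that may run on different sets of processors.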

What are climate models used for?

Perhaps the most frequently discussed application of climate models is simulating how various emissions scenarios will affect future climates. But climate models are also used for many other applications like sensitivity studies, attribution of extreme events, and paleoclimate studies.

An example of a sensitivity study might be to examine how deforestation of the Amazon affects the climate. A sensitivity study would require two simulations: a control simulation with the Amazon rainforest intact, and another with the rainforest replaced by grassland or bare soil. Most of the parameters that define these simulations, like greenhouse gas concentrations, would be kept identical so that only the presence or absence of the Amazon rainforest would be responsible for the differences in climate. The simulations would be run for a period of time, perhaps years or decades, and then the differences between them would be analyzed to determine the sensitivity of the climate to whatever differs between the simulations.

Extreme event attribution attempts to determine to what extent climate change is responsible for a particular extreme event. This is very similar to sensitivity studies in that there's a control simulation and a second simulation where some aspect of the climate system like greenhouse gas concentrations is different. For example, if we want to estimate the effect of climate change on an extreme heat wave in Europe, we might run a control simulation with preindustrial greenhouse gas levels and another simulation with present day levels. In this case, the greenhouse gas concentrations would probably be prescribed at a particular level and not permitted to vary during the simulation. These simulations might be run for hundreds or even thousands of years to see how often the extreme event occurs in the preindustrial and the modern simulation. If the heat wave occurs every hundred years with modern greenhouse gas levels but never occurs with preindustrial conditions, the event might be attributed entirely to climate change. If the event occurs in both simulations, we would compare the frequency it occurs in each simulation to estimate how much it can be attributed to climate change.
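The comparison in that last step is often summarized as a probability ratio. A minimal sketch, with event counts invented for illustration:

    def probability_ratio(events_modern: int, years_modern: int,
                          events_preindustrial: int, years_preindustrial: int) -> float:
        """Ratio of the event's probability under modern greenhouse gas levels
        to its probability under preindustrial levels; values above 1 mean the
        event has become more likely. A small floor avoids dividing by zero
        when the event never occurs in the preindustrial run."""
        p_modern = events_modern / years_modern
        p_pre = max(events_preindustrial, 0.5) / years_preindustrial
        return p_modern / p_pre

    # Hypothetical counts: the heat wave appears 10 times in 1,000 modern years
    # and once in 1,000 preindustrial years.
    print(probability_ratio(10, 1000, 1, 1000))   # 10.0, i.e. ten times more likely

Attribution studies often also report the fraction of attributable risk, 1 − 1/PR, which for this made-up example would be 0.9.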

For paleoclimate simulations, we have much more limited information about the climate. We might know the greenhouse gas concentrations from bubbles of air trapped in ice cores, for example. There may be proxy data like fossil evidence of the plants and animals that lived in a particular location, which can be used to infer whether a climate was hot or cold, or whether it was wet or dry. On the other hand, we certainly won't have detailed observations of things like extreme events, oceanic circulations, and many other aspects of the climate system. In this case, the climate model can be configured to match the known aspects of the past climate as closely as possible, and the simulation can then be used to fill in the gaps where we don't have observations. Paleoclimate simulations can also be used to identify biases and errors in the model when it's unable to accurately reproduce past climates. When these errors are discovered, the model can be improved to better simulate past climates, which also increases our confidence in its ability to extrapolate future climates.

Can we trust climate models?

All weather models and climate models are wrong. A weather model will never forecast the weather with 100% accuracy, though they do a remarkably good job at forecasting a wide range of weather events. Models are still the best tool we have to predict the weather, especially beyond a day or two, where simple extrapolation just isn't going to be reliable. Many components are shared between weather and climate models, and if these components didn't work correctly, they would also prevent us from producing accurate weather forecasts. Weather models often do have some systematic bias, especially for longer range forecasts, but we can correct for these biases with statistical postprocessing. Every time a weather model is run, it's also helping to verify the accuracy of any components that are shared with climate models.

Climate models from a couple of decades ago generated forecasts for the present climate, and once differences between the assumed and actual greenhouse gas concentrations are accounted for, they have proven very accurate at predicting our current climate. Climate models are also used to simulate past climates, and their ability to do so accurately means that we can be more confident in their ability to predict the climate under a much wider range of conditions.

Even when there is a known bias in climate models, it does not invalidate all climate model studies. For example, climate models typically underestimate greenhouse gas sinks, resulting in a high bias in greenhouse gas concentrations for a particular emissions scenario. But we may be able to correct for that bias with statistical postprocessing. Also, many applications of climate models like extreme event attribution, many sensitivity studies, and many paleoclimate simulations do not dynamically simulate the carbon cycle. This means that those applications of climate models would be completely unaffected by the issue with underestimating greenhouse gas sinks.
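As an example of the statistical postprocessing mentioned above, the simplest possible approach is to subtract the mean bias estimated over a period where the model overlaps with observations. The sketch below uses made-up numbers; real studies typically use more sophisticated methods such as quantile mapping.

    def mean_bias_correction(model_hist, obs_hist, model_future):
        """Shift a model time series by the mean model-minus-observation
        difference over a common historical period. Deliberately simple,
        for illustration only."""
        bias = sum(model_hist) / len(model_hist) - sum(obs_hist) / len(obs_hist)
        return [value - bias for value in model_future]

    # Hypothetical values in which the model runs about 1.0 unit too high.
    historical_model = [16.2, 16.5, 16.8]
    historical_obs = [15.1, 15.6, 15.8]
    future_model = [17.4, 17.9]
    print(mean_bias_correction(historical_model, historical_obs, future_model))
    # approximately [16.4, 16.9] once the 1.0 bias is removed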

Many of the climate models like the Goddard Institute for Space Studies models, the Community Earth System Model, and the Weather Research and Forecasting Model (often used in regional climate modeling) are free and open source, meaning that anyone can download the model, examine the source code, and run their own simulations. Data from a large number of climate model simulations is often publicly shared, especially in various intercomparison projects. Climate models are not closely guarded secrets, so anyone can examine and test climate models for themselves, and modify the source code to fix bugs or make improvements.


Original Submission

posted by janrinok on Tuesday June 25, @11:44AM   Printer-friendly
from the cap-that! dept.

Arthur T Knackerbracket has processed the following story:

Scientists at Lawrence Berkeley National Laboratory and UC Berkeley have created "microcapacitors" that address capacitors' shortcoming of low energy density, as highlighted in a study published in Nature. Made from engineered thin films of hafnium oxide and zirconium oxide, these capacitors employ materials and fabrication techniques commonly used in chip manufacturing. What sets them apart is their ability to store significantly more energy than ordinary capacitors, thanks to the use of negative capacitance materials.

Capacitors are one of the basic components of electrical circuits. They store energy in an electric field established between two metallic plates separated by a dielectric material (non-metallic substance). They can deliver power quickly and have longer lifespans than batteries, which store energy in electrochemical reactions.
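For context, the energy a conventional capacitor stores follows the familiar textbook relations (standard physics, not specific to the devices in this study):

    E = ½·C·V²,   with C = ε·A/d

where C is the capacitance, V the applied voltage, ε the permittivity of the dielectric, A the plate area, and d the separation between the plates. Packing more energy into a given footprint therefore means increasing the effective permittivity of the dielectric or fitting more plate area into the same volume, which is broadly what the negative capacitance films and three-dimensional structures described in this work aim to do.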

However, these benefits come at the cost of significantly lower energy densities. Perhaps that's why we've only seen low-powered devices like mice powered by this technology, as opposed to something like a laptop. Plus, the problem is only exacerbated when shrinking them down to microcapacitor sizes for on-chip energy storage.

The researchers overcame this by engineering thin films of HfO2-ZrO2 to achieve a negative capacitance effect. By tuning the composition just right, they were able to get the material to be easily polarized by even a small electric field.

To scale up the energy storage capability of the films, the team placed atomically thin layers of aluminum oxide every few layers of HfO2-ZrO2, allowing them to grow the films up to 100 nm thick while retaining the desired properties.

These films were integrated into three-dimensional microcapacitor structures, achieving record-breaking properties: nine times higher energy density and 170 times higher power density compared to the best electrostatic capacitors today. That's huge.

"The energy and power density we got are much higher than we expected," said Sayeef Salahuddin, a senior scientist at Berkeley Lab, UC Berkeley professor, and project lead. "We've been developing negative capacitance materials for many years, but these results were quite surprising."

It's a major breakthrough, but the researchers aren't resting on their laurels just yet. Now they're working on scaling up the technology and integrating it into full-size microchips while improving the negative capacitance of the films further.


Original Submission

posted by janrinok on Tuesday June 25, @07:03AM   Printer-friendly
from the nostalgia dept.

https://arstechnica.com/gaming/2024/06/in-first-person-john-romero-reflects-on-over-three-decades-as-the-doom-guy/

John Romero remembers the moment he realized what the future of gaming would look like.

In late 1991, Romero and his colleagues at id Software had just released Catacomb 3-D, a crude-looking, EGA-colored first-person shooter that was nonetheless revolutionary compared to other first-person games of the time. "When we started making our 3D games, the only 3D games out there were nothing like ours," Romero told Ars in a recent interview. "They were lockstep, going through a maze, do a 90-degree turn, that kind of thing."

Despite Catacomb 3-D's technological advances in first-person perspective, though, Romero remembers the team at id followed its release by going to work on the next entry in the long-running Commander Keen series of 2D platform games. But as that process moved forward, Romero told Ars that something didn't feel right.

"Within two weeks, [I was up] at one in the morning and I'm just like, 'Guys we need to not make this game [Keen],'" he said. "'This is not the future. The future is getting better at what we just did with Catacomb.' ... And everyone was immediately was like, 'Yeah, you know, you're right. That is the new thing, and we haven't seen it, and we can do it, so why aren't we doing it?'"

The team started working on Wolfenstein 3D that very night, Romero said. And the rest is history.

[...] The early id designers didn't even use basic development tools like version control systems, Romero said. Instead, development was highly compartmentalized between different developers; "the files that I'm going to work on, he doesn't touch, and I don't touch his files," Romero remembered of programming games alongside John Carmack. "I only put the files on my transfer floppy disk that he needs, and it's OK for him to copy everything off of there and overwrite what he has because it's only my files, and vice versa. If for some reason the hard drive crashed, we could rebuild the source from anyone's copies of what they've got."

[...] The formation of id Software in 1991 meant that Wolfenstein 3D "was the first time where we felt we had no limit on time," Romero said. That meant the team could experiment with adding in features drawn from the original 2D Castle Wolfenstein, things like "searching dead soldiers, trying to search in open boxes and drag the soldiers out of visibility from other soldiers, and other stuff that was not just 'mow it down.'"

But Romero says those features were quickly removed since they got in the way of "this high-speed run-and-gun gameplay" that was emerging as the core of the game's appeal.

[...] One of the last steps necessary to get Wolfenstein 3D out quickly, Romero said, was ignoring the publisher's suggestion that they double the game's size from 30 to 60 levels. But Romero said the publisher did give them the smart advice to ditch the aging EGA graphics standard in favor of much more colorful VGA monitors. "You're a marketing guy, you know what's going on, we trust you, we will do that immediately," Romero recalled saying.

[...] "We were making games that we wanted to play," he said. "We weren't worried about audience. We were the audience. We played every game on all the systems back then. We were consumers, and we knew what we wanted to make. We made so many games that we were past our learning years of how to make game designs. We were at the point where the 10,000 hours was over. Way, way over."

[...] "But when we saw the lengths people had to go to just to get access and make levels, it was like, we need to completely open the next game," Romero continued. "That's why Doom's WAD file formats were put out there, and our level formats were out there. That's what let people generate tons of content for Doom."

Romero said that looking back, he thinks Doom hit that sweet spot where players could create robust maps and content without getting bogged down in too many time-consuming details.

[...] Beyond quick WAD file creation, though, Romero expressed awe at what dedicated modders have been able to build on top of the now open source Doom engine over the decades. He specifically pointed to Selaco as a game that shows off just how far the updated GZDoom engine can be taken.

[...] Romero said he admires how the battle royale genre has made the shooter more accessible for players who weren't raised on the tight corridors of Doom and its ilk.

"If someone who's not good at a shooter jumps into [a game like Counter-Strike], they're dead, mercilessly," Romero said. "It's not fun. But what is fun is 'I'm not that good, I'm gonna jump into a game that is a battle royale, and I get to play with other people who are really good, but I actually survive for five minutes. I find some stuff; maybe I shoot somebody and take them out. You're gonna get shot, but you had an experience; you learn something, and you jump in and do it again."


Original Submission

posted by janrinok on Tuesday June 25, @06:36AM   Printer-friendly
https://www.theguardian.com/media/article/2024/jun/25/julian-assange-plea-deal-with-us-free-to-return-australia

Julian Assange has been released from a British prison and is expected to plead guilty to violating US espionage law, in a deal that would allow him to return home to his native Australia.

Assange, 52, agreed to plead guilty to a single criminal count of conspiring to obtain and disclose classified US national defence documents, according to filings in the US district court for the Northern Mariana Islands.

WikiLeaks posted on social media a video of its founder boarding a flight at London's Stansted airport on Monday evening, and Australian prime minister Anthony Albanese confirmed he had left the UK.

https://www.theguardian.com/media/article/2024/jun/25/julian-assange-may-be-on-his-way-to-freedom-but-this-is-not-a-clear-victory-for-freedom-of-the-press

The release from a UK prison of Julian Assange is a victory for him and his many supporters around the world, but not necessarily a clear win for the principle underlying his defence, the freedom of the press.

The charges Assange is anticipated to plead guilty to as part of a US deal, and for which he will be sentenced to time served, are drawn from the 1917 Espionage Act, for "conspiring to unlawfully obtain and disseminate classified information related to the national defense of the United States".

So although the WikiLeaks founder is expected to walk free from the US district court in Saipan after Wednesday's hearing, the Espionage Act will still hang over the heads of journalists reporting on national security issues, not just in the US. Assange himself is an Australian, not a US citizen.

Live: Father of Julian Assange hints at son's return to Australia after prison release - ABC News:

Nothing is certain until it happens and there's a lot we still don't know about how Julian Assange's case will proceed.

A lot of our understanding at this stage is coming from the court documents, which state that he'll appear before a judge in Saipan at 9am local time tomorrow.

An email from the Department of Justice (DOJ) to the judge in the Northern Mariana Islands states that Assange is expected to plead guilty to one count of conspiracy to obtain and disclose national defence information, and that he'll be sentenced for that offence.

American media outlets are reporting that the plea deal would need to be approved by the judge, and WikiLeaks has described the agreement as having "not yet been formally finalised."

But Assange's departure from the UK is a massive development in the case, and the court document says the DOJ expects he'll return to Australia "at the conclusion of the proceedings".

posted by hubie on Tuesday June 25, @02:17AM   Printer-friendly

As Apple enters the AI race, it's also looking for help from partners:

During the announcement of Apple Intelligence earlier this month, Apple said it would be partnering with OpenAI to bring ChatGPT into the revamped version of Siri. Now, the Wall Street Journal reports that Apple and Facebook's parent company Meta are in talks around a similar deal.

[...] As Sarah Perez noted, Apple's approach to AI currently sounds a bit boring and practical — rather than treating this as an opportunity for wholesale reinvention or disruption, it's starting out by adding AI-powered features (like writing suggestions and custom emojis) to its existing products. But emphasizing practicality over flashiness might be the key to AI adoption. Then, Apple can leverage partnerships to go beyond the capabilities of its own AI models.

So a deal with Meta could make Apple less reliant on a single partner, while also providing validation for Meta's generative AI tech. The Journal reports that Apple isn't offering to pay for these partnerships; instead, Apple provides distribution to AI partners who can then sell premium subscriptions.

[...] In another recent development, Apple has also said that while Apple Intelligence is set to roll out in the newest versions of its operating systems (including iOS 18, iPadOS 18, and macOS Sequoia) later this year, it plans to hold the technology back from the European Union, due to the EU's Digital Markets Act (which is supposed to encourage competition in digital markets). It also said it will hold back iPhone Mirroring and SharePlay Screen Sharing.

"We are concerned that the interoperability requirements of the DMA could force us to compromise the integrity of our products in ways that risk user privacy and data security," the company said in a statement.


Original Submission

posted by hubie on Monday June 24, @09:35PM   Printer-friendly

http://www.righto.com/2024/06/montreal-mifare-ultralight-nfc.html

To use the Montreal subway (the Métro), you tap a paper ticket against the turnstile and it opens. The ticket works through a system called NFC, but what's happening internally? How does the ticket work without a battery? How does it communicate with the turnstile? And how can it be so cheap that you can throw the ticket away after one use? To answer these questions, I opened up a ticket and examined the tiny chip inside.


Original Submission

posted by hubie on Monday June 24, @04:48PM   Printer-friendly

Using stalkerware is creepy, unethical, potentially illegal, and puts your data and that of your loved ones in danger:

Last week, an unknown hacker broke into the servers of the U.S.-based stalkerware maker pcTattletale. The hacker then stole and leaked the company's internal data. They also defaced pcTattletale's official website with the goal of embarrassing the company.

"This took a total of 15 minutes from reading the techcrunch article," the hackers wrote in the defacement, referring to a recent TechCrunch article where we reported that pcTattletale was used to monitor several front desk check-in computers at Wyndham hotels across the United States.

As a result of this hack, leak and shame operation, pcTattletale founder Bryan Fleming said he was shutting down his company.

Consumer spyware apps like pcTattletale are commonly referred to as stalkerware because jealous spouses and partners use them to surreptitiously monitor and surveil their loved ones. These companies often explicitly market their products as solutions to catch cheating partners by encouraging illegal and unethical behavior. And there have been multiple court cases, journalistic investigations, and surveys of domestic abuse shelters that show that online stalking and monitoring can lead to cases of real-world harm and violence.

And that's why hackers have repeatedly targeted some of these companies.

According to TechCrunch's tally, with this latest hack, pcTattletale has become the 20th stalkerware company since 2017 that is known to have been hacked or leaked customer and victims' data online. That's not a typo: Twenty stalkerware companies have either been hacked or had a significant data exposure in recent years. And three stalkerware companies were hacked multiple times.

[...] But a company closing doesn't mean it's gone forever. As with Spyhide and SpyFone, some of the same owners and developers behind a shuttered stalkerware maker simply rebranded.

"I do think that these hacks do things. They do accomplish things, they do put a dent in it," Galperin said. "But if you think that if you hack a stalkerware company, that they will simply shake their fists, curse your name, disappear in a puff of blue smoke and never be seen again, that has most definitely not been the case."

"What happens most often, when you actually manage to kill a stalkerware company, is that the stalkerware company comes up like mushrooms after the rain," Galperin added.

[...] Using spyware to monitor your loved ones is not only unethical, it's also illegal in most jurisdictions, as it's considered unlawful surveillance.

That is already a significant reason not to use stalkerware. Then there is the issue that stalkerware makers have proven time and time again that they cannot keep data secure — neither data belonging to the customers nor their victims or targets.

Apart from spying on romantic partners and spouses, some people use stalkerware apps to monitor their children. While this type of use, at least in the United States, is legal, it doesn't mean using stalkerware to snoop on your kids' phone isn't creepy and unethical.

Even if it's lawful, Galperin thinks parents should not spy on their children without telling them, and without their consent.

If parents do inform their children and get their go-ahead, parents should stay away from insecure and untrustworthy stalkerware apps, and use parental tracking tools built into Apple phones and tablets and Android devices that are safer and operate overtly.

If you or someone you know needs help, the National Domestic Violence Hotline (1-800-799-7233) provides 24/7 free, confidential support to victims of domestic abuse and violence. If you are in an emergency situation, call 911. The Coalition Against Stalkerware has resources if you think your phone has been compromised by spyware.


Original Submission

posted by hubie on Monday June 24, @12:04PM   Printer-friendly

https://salvagedcircuitry.com/2000a-nand-recovery.html

I ran across an oscilloscope in need of love and attention on the Internet's favorite online auction site. After some back and forth with the seller, I found out that the scope didn't boot, one of the telltale problems of the Agilent 2000a / 3000a / 4000a X-series oscilloscopes. The no-boot condition can be caused by one of three things: a failed power supply, the mischievous NAND corruption error, or both. The seller took my lowball offer of $220 and just like that I had another project in my life.

On initial opening, the scope looked like it had road rash. Every single knob and edge of the plastic shell had distinct pavement scuff marks. There were some cracks in the front plastic bezel and rear molded fan grill, further confirming that this scope was dropped multiple times. The horizontal adjust encoder was also bent and two knobs were missing.

Not the end of the world; let's double-check the description. I plugged it in and the scope powered up with 3 of the 4 indicator lights. Ref, Math, and Digital were illuminated; nothing on Serial. The scope stays perpetually in this state with nothing displayed on the LCD. Button presses yield no response. The continuously-on fan and 3 indicator lights are the only signs of life. Using the special sequence of power button + unlock cal switch displays no lights on the instrument. The seller clearly did not test the instrument.


Original Submission

posted by janrinok on Monday June 24, @07:18AM   Printer-friendly
from the escaping-digital-microserfdom dept.

Dr Andy Farnell at The Cyber Show writes about the effects of the "splinternet", and of division in standards in general, on overall computing security. He sees the Internet, as it was less than ten years ago, as an ideal, but one which has been intentionally divided and made captive. While governments talk out of one side of their mouth about cybersecurity, they are rushing breathlessly to actually make systems and services less secure or outright insecure.

What I fear we are now seeing is a fault line between informed, professional computer users with access to knowledge and secure computer software - a breed educated in the 1970s who are slowly dying out - and a separate low-grade "consumer" group for whom digital mastery, security, privacy and autonomy have been completely surrendered.

The latter have no expectation of security or correctness. They've grown up in a world where the high ideals of computing that my generation held, ideas that launched the Voyager probe to go into deep space using 1970's technology, are gone.

They will be used as farm animals, as products by companies like Apple, Google and Microsoft. For them, warm feelings, conformance and assurances of safety and correctness, albeit false but comforting, are the only real offering, and there will be apparently "no alternatives".

These victims are becoming ever-less aware of how their cybersecurity is being taken from them, as data theft, manipulation, lock-in, price fixing, lost opportunity and so on. If security were a currency, we're amidst the greatest invisible transfer of wealth to the powerful in human history.

In lieu of actual security, several whole industries have sprung up around ensuring and maintaining computer insecurity. On the technical side of things it's maybe time for more of us to (re-)read the late Ross Anderson's Security Engineering, third edition. However, as Dr Farnell reminds us, most of these problems have non-technical origins and thus non-technical solutions.

Previously:
(2024) Windows Co-Pilot "Recall" Feature Privacy Nightmare
(2024) Reasons for Manual Image Editing over Generative AI
(2019) Chapters of Security Engineering, Third Edition, Begin to Arrive Online for Review


Original Submission

posted by janrinok on Monday June 24, @02:32AM   Printer-friendly

Arthur T Knackerbracket has processed the following story:

A team led by Lawrence Berkeley National Laboratory (Berkeley Lab) has invented a technique to study electrochemical processes at the atomic level with unprecedented resolution and used it to gain new insights into a popular catalyst material.

Electrochemical reactions—chemical transformations that are caused by or accompanied by the flow of electric currents—are the basis of batteries, fuel cells, electrolysis, and solar-powered fuel generation, among other technologies. They also drive biological processes such as photosynthesis and occur under the Earth's surface in the formation and breakdown of metal ores.

The scientists have developed a cell—a small enclosed chamber that can hold all the components of an electrochemical reaction—that can be paired with transmission electron microscopy (TEM) to generate precise views of a reaction at an atomic scale. Better yet, their device, which they call a polymer liquid cell (PLC), can be frozen to stop the reaction at specific timepoints, so scientists can observe composition changes at each stage of a reaction with other characterization tools.

In a paper appearing in Nature, the team describes their cell and a proof of principle investigation using it to study a copper catalyst that reduces carbon dioxide to generate fuels.

"This is a very exciting technical breakthrough that shows something we could not do before is now possible. The liquid cell allows us to see what's going on at the solid-liquid interface during reactions in real time, which are very complex phenomena. We can see how the catalyst surface atoms move and transform into different transient structures when interacting with the liquid electrolyte during electrocatalytic reactions," said Haimei Zheng, lead author and senior scientist in Berkeley Lab's Materials Science Division.

"It's very important for catalyst design to see how a catalyst works and also how it degrades. If we don't know how it fails, we won't be able to improve the design. And we're very confident we're going to see that happen with this technology," said co-first author Qiubo Zhang, a postdoctoral research fellow in Zheng's lab.

Zheng and her colleagues are excited to use the PLC on a variety of other electrocatalytic materials, and have already begun investigations into problems in lithium and zinc batteries. The team is optimistic that details revealed by the PLC-enabled TEM could lead to improvements in all electrochemical-driven technologies.

The scientists tested the PLC approach on a copper catalyst system that is a hot subject of research and development because it can transform atmospheric carbon dioxide molecules into valuable carbon-based chemicals such as methanol, ethanol, and acetone. However, a deeper understanding of copper-based CO2 reducing catalysts is needed to engineer systems that are durable and efficiently produce a desired carbon product rather than off-target products.

Zheng's team used the powerful microscopes at the National Center for Electron Microscopy, part of Berkeley Lab's Molecular Foundry, to study the area within the reaction called the solid-liquid interface, where the solid catalyst carrying the electrical current meets the liquid electrolyte. The catalyst system they put inside the cell consists of solid copper with an electrolyte of potassium bicarbonate (KHCO3) in water. The cell is composed of platinum, aluminum oxide, and a super thin, 10 nanometer polymer film.

Using electron microscopy, electron energy loss spectroscopy, and energy-dispersive X-ray spectroscopy, the researchers captured unprecedented images and data that revealed unexpected transformations at the solid-liquid interface during the reaction.

The team observed copper atoms leaving the solid, crystalline metal phase and mingling with carbon, hydrogen, and oxygen atoms from the electrolyte and CO2 to form a fluctuating, amorphous state between the surface and the electrolyte, which they dubbed an "amorphous interphase" because it is neither solid nor liquid. This amorphous interphase disappears again when the current stops flowing, and most of the copper atoms return to the solid lattice.

Journal information: Nature

More information: Haimei Zheng, Atomic dynamics of electrified solid–liquid interfaces in liquid-cell TEM, Nature (2024). DOI: 10.1038/s41586-024-07479-w. www.nature.com/articles/s41586-024-07479-w


Original Submission

posted by janrinok on Sunday June 23, @09:49PM   Printer-friendly

Arthur T Knackerbracket has processed the following story:

In recent years, age-verification laws have been popping up across the U.S. They typically require visitors to websites where more than one-third of the content is adult material to show proof of age (such as an ID) when attempting to view this content. As experts told Mashable last year, age-verification laws won't work: VPNs are an easy work-around, for one, and requiring users to upload their IDs leaves them vulnerable to identity theft.

Still, more and more states are enacting these laws. The response from Pornhub and its parent company Aylo (formerly MindGeek) has been to block these states from accessing the website at all.

Now, Aylo will block Indiana and Kentucky in July, according to adult trade publication AVN. The laws that spurred these bans are Indiana's SB 17 and Kentucky's HB 278.

If you happen to be in one of these states, Mashable has instructions on how to unblock Pornhub using a VPN.


Original Submission

posted by janrinok on Sunday June 23, @05:03PM   Printer-friendly
from the fungus-among-us dept.

Doug Muir, at the blog Crooked Timber, discusses a paper about symbiotic fungal networks loaning glucose to seedlings and saplings. Of note, fungi do not produce glucose themselves, so they are extracting and storing it. The fungi connect to new seedlings and help them get started by feeding their roots micronutrients, which for some seedlings compensates for the sunlight they cannot yet reach. Then, after some time, the network cuts off the supply. If the sapling dies, the network rots it. If the sapling survives, the network extracts and caches nutrients from it.

The problem was, succession leading to forest was a bunch of observations with a big theoretical hole in the center. Imagine a mid-succession field full of tall grass and bushes and mid-sized shrubs. Okay, so... how does the seedling of a slow-growing tree species break in? It should be overshadowed by the shrubs and bushes, and die before it ever has a chance to grow above them.

And the answer is, the fungus. The forest uses the fungus to pump sugar and nutrients into those seedlings, allowing them to grow until they are overshadowing the tall grasses and shrubs, not vice versa. The fungus is a tool the forest uses to expand. Or — looked at another way — the fungus is a venture capitalist, extending startup loans so that its client base can penetrate a new market.

This also answered a bunch of other questions that have puzzled observers for generations. Like, it's long been known that certain trees are "nurse trees", with unusual numbers of seedlings and saplings growing closely nearby. Turns out: it's the fungus. Why some trees do this and not others is unclear, but the ones that do, are using the fungus. Or: there's a species of lily that likes to grow near maple trees. Turns out they're getting some energy from the maple, through the fungus. Are the lilies symbiotes, providing some unknown benefit to the maple tree? Or are they parasites, who are somehow spoofing either the maple or the fungus? Research is ongoing.

Since the slow-growing trees spend years in the shade of other foliage, the nutrient boost lent by the fungi can make the difference between survival or death.

Previously:
(2022) Mushrooms May Communicate With Each Other Using Up To 50 "Words"
(2020) Radiation-Resistant Graphite-Eating Fungus
(2018) Soil Fungi May Help Determine The Resilience Of Forests To Environmental Change
(2015) Earth Has 3,000,000,000,000 Trees and Some Resist Wildfires


Original Submission

posted by janrinok on Sunday June 23, @12:19PM   Printer-friendly

A Fermat's Library featured paper of the week chosen a month or so ago was Richard Wexelblat's 1981 paper, The Consequences of One's First Programming Language, the abstract of which says:

After seeing many programs written in one language, but with the style and structure of another, I conducted an informal survey to determine whether this phenomenon was widespread and to determine how much a programmer's first programming language might be responsible.

Wexelblat's formal and informal findings suggested that one's first programming language leaves a lasting impression, at least in part because it can unnecessarily constrain a programmer's problem-solving approach:

Programmers who think of code in concrete rather than abstract terms limit themselves to the style of data and control structures of their ingrained programming habits. A FORTRAN programmer who has successfully designed a payroll system may not see any value in the ability to combine numeric and alphabetic information into a single data structure. BASIC programmers often find no use at all for subroutine formal parameters and local variables.

The mind-set problem works both ways: the PASCAL programmer who is forced to use FORTRAN or BASIC may end up writing awful programs because of the difficulty in switching from high-level to low-level constructs. APL converts seem to abhor all other languages.
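As a concrete, hypothetical illustration of the habit Wexelblat describes, here is the same small task written twice in Python: first in the global-variable, parameterless style a programmer reared on early BASIC might fall back on, then using formal parameters and local variables.

    # Style carried over from an unstructured first language: state lives in
    # globals and the "subroutine" takes no parameters.
    prices = [4.99, 12.50, 3.25]
    total = 0.0

    def add_up():
        global total
        total = 0.0
        for p in prices:
            total += p

    add_up()
    print(total)

    # The same task using formal parameters and a local variable.
    def add_up_items(items):
        subtotal = 0.0
        for price in items:
            subtotal += price
        return subtotal

    print(add_up_items(prices))

Both versions work, which is precisely the trap: nothing forces the first style out of a programmer's hands until a new language or a larger program makes its limits obvious.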

In his concluding remarks he says:

Although everything presented here is anecdotal evidence from which it would be irresponsible to draw firm conclusions, I cannot help add that my concern about the future generation of programmers remains. Who knows? Perhaps the advent of automatic program generators and very high level specification languages will so change the way we talk to computers that all of this will become irrelevant.

This had me reflecting on my own experiences and wondering, with the benefit of hindsight, whether others agreed with Wexelblat then, whether they would agree with him now, or whether, now that we do have automatic program generators and very high level specification languages, this has all become irrelevant?


Original Submission

posted by hubie on Sunday June 23, @07:37AM   Printer-friendly

Arthur T Knackerbracket has processed the following story:

In an update released late Friday evening, NASA said it was "adjusting" the date of the Starliner spacecraft's return to Earth from June 26 to an unspecified time in July.

The announcement followed two days of long meetings to review the readiness of the spacecraft, developed by Boeing, to fly NASA astronauts Butch Wilmore and Suni Williams back to Earth. According to sources, these meetings included high-level participation from senior leaders at the agency, including associate administrator Jim Free.

This "Crew Flight Test," which launched on June 5th atop an Atlas V rocket, was originally due to undock and return to Earth on June 14. However, as engineers from NASA and Boeing studied data from the vehicle's problematic flight to the International Space Station, they have waived off several return opportunities.

On Friday night they did so again, citing the need to spend more time reviewing data.

[...] Just a few days ago, on Tuesday, officials from NASA and Boeing set a return date to Earth for June 26. But that was before a series of meetings on Thursday and Friday during which mission managers were to review findings about two significant issues with the Starliner spacecraft: five separate leaks in the helium system that pressurizes Starliner's propulsion system and the failure of five of the vehicle's 28 reaction-control system thrusters as Starliner approached the station.

[...] Now, the NASA and Boeing engineering teams will take some more time. Sources said NASA considered June 30th as a possible return date, but the agency is also keen to perform a pair of spacewalks outside the station. These spacewalks, presently planned for June 24 and July 2, will now go ahead. Starliner will make its return to Earth some time afterward, likely no earlier than the July 4th holiday.

"We are strategically using the extra time to clear a path for some critical station activities while completing readiness for Butch and Suni's return on Starliner and gaining valuable insight into the system upgrades we will want to make for post-certification missions," Stich said.

However, this vehicle is only rated for a 45-day stay at the space station, and that clock began ticking on June 6. Moreover, it is not optimal that NASA feels the need to continue delaying the vehicle's departure to get comfortable with its performance on the return journey to Earth. During a pair of news conferences since Starliner docked to the station, officials have downplayed the overall seriousness of these issues, repeatedly saying Starliner is cleared to come home "in case of an emergency." But they have yet to fully explain why they are not yet comfortable with releasing Starliner to fly back to Earth under normal circumstances.

See also:


Original Submission