
posted by hubie on Wednesday June 26, @08:58PM   Printer-friendly

Arthur T Knackerbracket has processed the following story:

Does proton decay exist, and how do we search for it? That is the question a study recently submitted to the arXiv preprint server hopes to address, as a team of international researchers investigates the concept of using samples from the moon to search for evidence of proton decay, a hypothetical type of particle decay that has yet to be observed and continues to elude particle physicists.

This study holds the potential to help solve one of the longstanding mysteries in all of physics, as it could enable new studies into deep-level particle physics and the laws of nature overall.

[...] Dr. Stengel tells Universe Today this research started around 2018 with lead author Dr. Sebastian Baum and other scientists, and concerns the use of paleo-detectors, a proposed method of reading out traces of particle interactions recorded in ancient minerals over vast geological timescales.

[...] For the study, the researchers proposed a hypothetical concept using paleo-detectors that would involve collecting mineral samples from more than 5 kilometers (3.1 miles) beneath the lunar surface and analyzing them for evidence of proton decay, either on the moon itself or back on Earth.

[...] Dr. Stengel tells Universe Today, "For a lunar mineral sample which is both sufficiently radiopure to mitigate radiogenic backgrounds and buried at sufficient depths for shielding from other cosmic ray backgrounds, we show that the sensitivity of paleo-detectors to proton decay could in principle be competitive with next-generation conventional proton decay experiments."

As noted, proton decay remains a hypothetical type of particle decay and was first proposed in 1967 by the Soviet physicist and Nobel Prize laureate Dr. Andrei Sakharov. As its name implies, proton decay is hypothesized to occur when a proton decays into particles smaller than an atom, known as subatomic particles.

[...] Dr. Stengel tells Universe Today, "Proton decay is a generic prediction of particle physics theories beyond the Standard Model (SM). In particular, proton decay could be one of the only low energy predictions of so-called Grand Unified Theories (GUTs), which attempt to combine all of the forces which mediate SM interactions into one force at very high energies. Physicists have been designing and building experiments to look for proton decay for over 50 years."
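For context (this detail is not in the article): the canonical channel that GUT-motivated searches typically target is

    p \to e^{+} + \pi^{0}

and current experimental lower limits on the partial lifetime for this channel are on the order of 10^{34} years. That is why any competitive technique needs either an enormous target mass or, as with paleo-detectors, an enormous exposure time.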

[...] As noted, the hypothetical concept proposed by this study using paleo-detectors to detect proton decay on the moon would require collecting samples at least 5 kilometers (3.1 miles) beneath the lunar surface. For context, the deepest samples humans have ever collected from beneath the lunar surface came from just under 300 centimeters (118 inches) down, in the drill cores obtained by the Apollo 17 astronauts.

On Earth, the deepest human-made hole is the Kola Superdeep Borehole in northern Russia, which reaches approximately 12.3 kilometers (7.6 miles) in true vertical depth and required several boreholes and many years of drilling to achieve. While the study notes the proposed concept using paleo-detectors on the moon is "clearly futuristic," what steps are required to take this concept from futuristic to realistic?

Dr. Stengel tells Universe Today, "As we are careful not to stray too far from our respective areas of expertise related to particle physics, we chose not to speculate much at all about the actual logistics of performing such an experiment on the moon. However, we also thought that this concept was timely as various scientific agencies across different countries are considering a return to the moon and planning for a broad program of experiments."

[...] Dr. Stengel tells Universe Today, "Due to the exposure of paleo-detectors to proton decay over billion-year timescales, only one kilogram of target material is necessary to be competitive with conventional experiments. In combination with the scientific motivation and the recent push towards returning humans to the moon for scientific endeavors, we think paleo-detectors could represent the final frontier in the search for proton decay."

More information: Sebastian Baum et al, The Final Frontier for Proton Decay, arXiv (2024). DOI: 10.48550/arxiv.2405.15845


Original Submission

posted by hubie on Wednesday June 26, @04:13PM   Printer-friendly
from the Junk-Drawer dept.

The 2024 Old Computer Challenge has been announced. The challenge started four years ago with a simple premise: use a computer with one core running at a maximum of 1 GHz and 512 MB of RAM for a week. It has since grown a small community, with 34 entrants in 2023. This year's theme, however, is no theme at all: the announcement post includes suggestions, but there is no set of official rules this time around.

Anyone interested in participating can take a look at Headcrash's OCC Site to look at previous years' entries and find instructions for how to get listed this year.

Personally I'm planning on running a classic Clamshell Mac with OS9 as my daily driver :)


Original Submission

posted by hubie on Wednesday June 26, @11:25AM   Printer-friendly

Arthur T Knackerbracket has processed the following story:

Congratulations, world. We’ve done it. Since passing the Clean Air Act in the 1970s, we’ve reduced cancer-causing particulate emissions from our cars and other sources dramatically, a change that has added years to our lives.

That’s the good news. The bad news is that we can now spend more time focusing on the remaining sources, including some unexpected ones. In an EV era, tires are becoming the greatest emitters of particulate matter, and as we’ve seen, whether it’s the microplastics in our shrimp or the preservatives in our salmon, they’re having a disturbing impact on our environment.

Gunnlaugur Erlendsson wants to do something about that. The affable Icelander founded Enso to tackle what he saw as a developing need for better EV tires. The UK-based company’s next big step is coming close to home: a $500 million US tire factory specifically for building eco-friendly tires for EVs. 

Well, eco-friendlier, anyway.

[...] While EV-specific tires are increasingly common, Erlendsson says most tire manufacturers are too focused on partnering with auto manufacturers, shipping new tires with new cars. “So even though technology exists to make tires much better today, it isn’t hitting the 90 percent of the tire industry, which is the aftermarket,” he said.

While Erlendsson said Enso is working to develop partnerships with those same vehicle manufacturers, the company’s US business model will focus on the 90 percent, creating tires in the correct fitments for popular EVs, regardless of brand, then selling them directly to customers.

What makes Enso’s tires different? Erlendsson was light on the technical details but promised 10 percent lower rolling resistance than regular tires, equating to a commensurate range increase. That’ll make your EV cheaper to run, while a 35 percent increase in tire life means lower wear, fewer particulates in the air, and fewer old tires sent to the incinerator, where half of all American tires go to die. 

Enso’s new factory will also handle recycling. It will be truly carbon neutral, not reliant on carbon offsets, and manufacture tires out of recycled carbon black and tire silica made from rice husks. 

[...] Enso is aiming for the production of 5 million tires from the new factory by 2027. Its location is still being finalized, but Enso cites Colorado, Nevada, Texas, or Georgia as likely locations. With the southeastern US becoming a hotbed for EV production and the so-called “Battery Belt” seeing huge investments from startups like Redwood Materials, that last option might be the safest bet.

A factory of that size will be a huge step up for Enso, which right now provides tires exclusively for fleet use in the UK, including the Royal Mail. Per The Guardian, a study from Transport for London, which regulates public transit in the city, shows Enso’s tires are living up to Erlendsson’s claims of increased efficiency, reduced wear, and reduced cost.

If Enso can deliver that on a larger scale to American drivers, it’ll fly in the face of typical corporate goals of selling more things to more people. Erlendsson sees this as a way to reset today’s tire economy.

“A proposition where you sell fewer tires is just not palatable to most listed companies in this industry,” he said. “It’s hard for someone with a legacy manufacturing and legacy supply chains and legacy distribution model to suddenly say, ‘I’m going to make fewer tires, and I’m going to spend more to make them,’ while not tanking your share price at the same time.”

Of course, upending a more than 150-year-old industry is no small feat, either. 


Original Submission

posted by hubie on Wednesday June 26, @06:42AM   Printer-friendly

https://gizmodo.com/detect-aliens-warp-drive-collapse-gravitational-waves-1851550746

Warp drives, inspired by Albert Einstein's grasp of cosmological physics, were first mathematically modeled by physicist Miguel Alcubierre in 1994. According to Alcubierre, a spacecraft could achieve faster-than-light travel (relative to an outside observer) through a mechanism known as a "warp bubble," which contracts space in front of it and expands space behind. The warp drive doesn't accelerate the spacecraft locally to faster-than-light speeds; instead, it manipulates spacetime around the vessel. Such a spaceship could travel vast distances in a short period by "warping" spacetime, bypassing the light-speed limit in a way that is consistent with general relativity.
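For readers who want the underlying math (not included in the article), the line element Alcubierre wrote down in 1994 is, in units where c = 1,

    ds^2 = -dt^2 + \left[\, dx - v_s(t)\, f(r_s)\, dt \,\right]^2 + dy^2 + dz^2

where v_s(t) is the velocity of the bubble's center along x and f(r_s) is a smooth shaping function equal to 1 inside the bubble and falling to 0 far away, so spacetime stays flat both inside the bubble and far from it.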

The trouble is, this model requires negative energy, a speculative form of energy whose density is lower than that of empty space, and which is not currently understood or achievable with today's technology. This gap in our understanding keeps the actual construction of a warp drive, as portrayed in Star Wars and Star Trek, firmly within the realm of science fiction.

In a study uploaded to the arXiv preprint server, astrophysicist and mathematician Katy Clough from Queen Mary University of London, along with colleagues Tim Dietrich from the Max Planck Institute for Gravitational Physics and Sebastian Khan from Cardiff University, explore the possibility that the hypothetical collapse of warp drives could emit detectable gravitational waves.

Note: Simply disable CSS style sheets to bypass the "Continue Reading" button.


Original Submission

posted by janrinok on Wednesday June 26, @04:50AM   Printer-friendly
from the smell-that-fresh-air dept.

https://www.bbc.co.uk/news/live/world-69145409

It's currently just past 12:30 in Singapore, 05:30 in London and 14:30 in Canberra - where Assange is expected to land later this afternoon. If you're just joining us now, here's what you need to know:

  • As part of a plea deal reached with the US, the Wikileaks founder pleaded guilty to one charge of breaching the Espionage Act in relation to his role in leaking thousands of classified documents
  • In return, he was sentenced by Judge Ramona Manglona to time served, owing to the time he spent at London's Belmarsh prison, and allowed to walk free
  • The plea was part of a deal struck with the US and ends a years-long battle by Assange against extradition to the US to face 18 felony charges
  • One of Assange's lawyers says that Wikileaks's work will continue and that Assange "will be a continuing force for freedom of speech and transparency in government"
  • Assange is due to arrive in the Australian capital Canberra at around 18:41 local time (08:41 GMT)

A former CIA chief of staff, Larry Pfeiffer, has been talking to the Australian Broadcasting Corporation, saying he believes the plea deal is "fair" and "not unusual".

He theorised that the US likely came to the negotiating table to protect intelligence sources and methods from being revealed in court, and because the case was causing "diplomatic irritants" in its relationships with Australia and the UK.

Barnaby Joyce, a former deputy Prime Minister of Australia who lobbied in Washington for Assange, told the BBC's Newsday earlier this morning that he believes the extraterritorial aspect of Assange's case is worrying.

"He was not a citizen of the United States, nor was he ever in the United States. So we've sent a person to prison in a third country," said Joyce.

"I don't believe what he did was right. I'm not here to give a warrant to his character. But I do say is what he did in Australia was not illegal... There is no law he broke in Australia."

He also criticised the treatment the Wikileaks founder received while at Belmarsh prison.

"One day we'll look back at this case and everyone will wonder: honestly, who did he murder to be in solitary confinement 23 hours a day? What was the charge that inspired that?" Joyce said.

Touchdown! Wikileaks has just posted a picture as the plane touched down, saying Assange was "free at last".

posted by hubie on Wednesday June 26, @01:57AM   Printer-friendly

Arthur T Knackerbracket has processed the following story:

Whether it's physical phenomena, share prices or climate models—many dynamic processes in our world can be described mathematically with the aid of partial differential equations. Thanks to stochastics—an area of mathematics which deals with probabilities—this is even possible when randomness plays a role in these processes.

Something researchers have been working on for some decades now is the class of so-called stochastic partial differential equations. Working together with other researchers, Dr. Markus Tempelmayr at the Cluster of Excellence Mathematics Münster at the University of Münster has found a method which helps to solve a certain class of such equations.

The basis for their work is a theory by Prof. Martin Hairer, recipient of the Fields Medal, developed in 2014 with international colleagues. It is seen as a great breakthrough in the research field of singular stochastic partial differential equations. "Up to then," Tempelmayr explains, "it was something of a mystery how to solve these equations. The new theory has provided a complete 'toolbox,' so to speak, on how such equations can be tackled."

The problem, Tempelmayr continues, is that the theory is relatively complex, with the result that applying the 'toolbox' and adapting it to other situations is sometimes difficult.

"So, in our work, we looked at aspects of the 'toolbox' from a different perspective and found and proved a method which can be used more easily and flexibly."

[...] Stochastic partial differential equations can be used to model a wide range of dynamic processes, for example, the surface growth of bacteria, the evolution of thin liquid films, or interacting particle models in magnetism. However, these concrete areas of application play no role in basic research in mathematics as, irrespective of them, it is always the same class of equations which is involved.
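As a concrete example (the article does not spell one out), the Kardar-Parisi-Zhang (KPZ) equation for the height h(t, x) of a growing surface is a prototypical singular stochastic PDE of the kind regularity structures were built to handle:

    \partial_t h = \nu \, \partial_x^2 h + \frac{\lambda}{2} (\partial_x h)^2 + \xi

where \xi is space-time white noise; the roughness of that noise is what makes the nonlinear term ill-defined without the kind of renormalization machinery discussed above.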

The mathematicians are concentrating on solving the equations in spite of the stochastic terms and the resulting challenges such as overlapping frequencies which lead to resonances.

[...] The approach they took was not to tackle the solution of complicated stochastic partial differential equations directly, but, instead, to solve many different simpler equations and prove certain statements about them.

"The solutions of the simple equations can then be combined—simply added up, so to speak—to arrive at a solution for the complicated equation which we're actually interested in." This knowledge is something which is used by other research groups who themselves work with other methods.

More information: Pablo Linares et al, A diagram-free approach to the stochastic estimates in regularity structures, Inventiones mathematicae (2024). DOI: 10.1007/s00222-024-01275-z


Original Submission

posted by janrinok on Tuesday June 25, @09:12PM   Printer-friendly
from the walled-garden dept.

https://arstechnica.com/tech-policy/2024/06/eu-says-apple-violated-app-developers-rights-could-be-fined-10-of-revenue/

The European Commission today said it found that Apple is violating the Digital Markets Act (DMA) with App Store rules and fees that "prevent app developers from freely steering consumers to alternative channels for offers and content." The commission "informed Apple of its preliminary view" that the company is violating the law, the regulator announced.

This starts a process in which Apple has the right to examine documents in the commission's investigation file and reply in writing to the findings. There is a March 2025 deadline for the commission to make a final ruling.

[...] Apple was further accused of charging excessive fees. The commission said that Apple is allowed to charge "a fee for facilitating via the App Store the initial acquisition of a new customer by developers," but "the fees charged by Apple go beyond what is strictly necessary for such remuneration. For example, Apple charges developers a fee for every purchase of digital goods or services a user makes within seven days after a link-out from the app."

Apple says it charges a commission of 27 percent on sales "to the user for digital goods or services on your website after a link out... provided that the sale was initiated within seven days and the digital goods or services can be used in an app."

[...] The commission today also announced it is starting a separate investigation into Apple's "contractual requirements for third-party app developers and app stores," including its "Core Technology Fee." Apple charges the Core Technology Fee for app installs, whether they are delivered from Apple's own App Store, from an alternative app marketplace, or from a developer's own website. The first million installs each year are free, but a per-install fee of €0.50 applies after that.
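To make the fee structure concrete, here is a minimal sketch of the arithmetic as described above; the three-million-install figure is an invented example, not from the article.

    # Core Technology Fee as described above: the first 1,000,000 annual
    # installs are free, then EUR 0.50 per install. Example numbers are invented.
    FREE_INSTALLS = 1_000_000
    FEE_PER_INSTALL_EUR = 0.50

    def core_technology_fee(annual_installs: int) -> float:
        """Yearly fee in euros for a given number of first annual installs."""
        billable = max(0, annual_installs - FREE_INSTALLS)
        return billable * FEE_PER_INSTALL_EUR

    print(core_technology_fee(3_000_000))  # 1000000.0 -> EUR 1 million per year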

The commission said it would investigate whether the Core Technology Fee complies with the DMA.


Original Submission

posted by janrinok on Tuesday June 25, @04:27PM   Printer-friendly

Climate models are numerical simulations of the climate system, which are used for predicting climate change from emissions scenarios, among many other applications. Let's take a closer look at how they work.

Why do we need climate models?

The climate system and its components like the atmosphere and hydrosphere are driven by many processes that interact with each other to produce the climate we observe. We understand some of these processes very well, such as many atmospheric circulations that drive the weather. Others like the role of aerosols and deep ocean circulations are more uncertain. Even when individual processes are well understood, the interaction between these processes makes the system more complex and produces emergent properties like feedbacks and tipping points. We can't rely on simple extrapolation to generate accurate predictions, which is why we need models to simulate the dynamics of the climate system. Global climate models simulate the entire planet at a coarse resolution. Regional climate models simulate smaller areas at a higher resolution, relying on global climate models for their initial and lateral boundary conditions (the edges of the domain).

How do climate models work?

A climate model is a combination of several components, each of which typically simulates one aspect of the climate system such as the atmosphere, hydrosphere, cryosphere, lithosphere, or biosphere. These components are coupled together, meaning that what happens in one component of the climate system affects all of the other components. The most advanced climate models are large software tools that use parallel computing to run across hundreds or thousands of processors. Climate models are a close cousin of the models we use for weather forecasting and even use a lot of the same source code.

The atmospheric component of the model, for example, has a fluid dynamics simulation at its core. The model numerically integrates a set of primitive equations such as the Navier-Stokes equations, the first law of thermodynamics, the continuity equation, the Clausius-Clapeyron equation, and the equation of state. Global climate models generally assume the atmosphere is in hydrostatic balance at all times, but that is not necessarily the case for regional models. Hydrostatic balance means that the force of gravity completely balances the upward pressure gradient force, so the air never accelerates upward or downward, an assumption that breaks down in some situations, such as inside thunderstorms.
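In equation form, the hydrostatic balance assumption is simply that the vertical pressure gradient supports the weight of the air:

    \frac{\partial p}{\partial z} = -\rho g

where p is pressure, z is height, \rho is air density, and g is gravitational acceleration; non-hydrostatic models drop this assumption and solve for vertical acceleration explicitly.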

Not all atmospheric processes can be described by these equations, and we also need to predict things like aerosols (particulates suspended in the atmosphere), radiation (incoming solar radiation, and heat radiated upward), microphysics (e.g., cloud droplets, rain drops, ice crystals, etc.), and deep convection like thunderstorms (in models with coarse resolutions) to accurately simulate the atmosphere. Instead, these processes are parameterized to simulate their effects as accurately as possible in the absence of governing equations.

The atmospheric simulations are generally more complex and run at a higher resolution in weather models than in climate models. However, weather models do not simulate the oceans, land surface, or the biosphere with the same level of complexity, because that level of detail isn't necessary to produce accurate forecasts. For example, the deep oceans don't change enough on weather time scales to impact the forecast, but they do change in important ways on climate time scales. A weather model also probably isn't going to directly simulate how temperature and precipitation affect the type of vegetation growing in a particular location, or whether there's just bare soil. Instead, a weather model might have data sets for land use and land cover during the summer and winter, use the appropriate data depending on the time of year being simulated, and then use that information to estimate things like albedo and evapotranspiration.

The main difference between climate models and weather models is that weather models are solving an initial condition problem whereas climate modeling is a boundary condition problem. Weather is highly sensitive to the initial state of the atmosphere, meaning that small changes in the atmosphere at the current time might result in large differences a week from now. Climate models depend on factors that occur and are predictable on much longer time scales like greenhouse gas concentrations, land use and land cover, and the temperature and salinity of the deep ocean. Climate models are also not concerned with accurately predicting the weather at a specific point in time, only its statistical moments like the mean and standard deviation over a period of time. We intuitively understand that these statistical moments are predictable on far longer time scales, which is why you could confidently insist that I'm wrong if I claimed that there would be heavy snow in Miami, Florida on June 20, 2050.
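A toy illustration of this distinction (a minimal sketch, not a climate model): the Lorenz-63 system is chaotic, so two runs started from nearly identical states quickly diverge, yet their long-run statistics remain nearly the same.

    # Two Lorenz-63 runs with almost identical initial conditions: the individual
    # states (the "weather") decorrelate quickly, but the long-run means
    # (the "climate") are nearly the same.
    import numpy as np

    def lorenz_step(state, dt=0.01, sigma=10.0, rho=28.0, beta=8.0/3.0):
        x, y, z = state
        return np.array([x + dt * sigma * (y - x),
                         y + dt * (x * (rho - z) - y),
                         z + dt * (x * y - beta * z)])

    def run(x0, steps=200_000):
        traj = np.empty((steps, 3))
        state = np.array(x0, dtype=float)
        for i in range(steps):
            state = lorenz_step(state)
            traj[i] = state
        return traj

    a = run([1.0, 1.0, 1.0])
    b = run([1.0, 1.0, 1.000001])            # a one-part-per-million perturbation

    print(np.abs(a[5000] - b[5000]))         # the two states soon differ widely
    print(a.mean(axis=0), b.mean(axis=0))    # long-run statistics nearly agree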

How and why are climate models coupled?

Information from the various components in the model needs to be communicated to the other components to get an accurate simulation. For example, precipitation affects the land surface by changing the soil moisture, which may also affect the biosphere. The albedo of the land surface affects air temperatures. Soil moisture also affects temperature, with arid areas typically getting warmer during the day and colder at night. If the precipitation is snow, the snow cover prevents heat from being conducted from the ground into the atmosphere, causing colder temperatures. Warm ocean temperatures are conducive for tropical cyclones to form, but the winds in a strong cyclone can churn up cooler water from below, which will weaken a tropical cyclone.

Both weather and climate models are coupled models, meaning that information is communicated between different components of the system to allow the model to simulate interactions like these and many others. Each component of the climate system (e.g., atmosphere, hydrosphere, lithosphere, etc...) is generally a separate software module that is run simultaneously with the other components and interfaces with them. If the components of weather and climate models weren't coupled together, we couldn't simulate many of the feedbacks and tipping points that arise from these interactions.
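A minimal sketch of the coupling idea, with made-up physics and constants: two components step forward together and exchange their updated fields every time step, so each one's state feeds back on the other.

    # Toy "atmosphere" and "ocean" exchanging temperatures through a coupler.
    # The relaxation constants and heating term are invented for illustration only.
    def atmosphere_step(air_temp, sea_temp):
        # air adjusts quickly toward the sea surface, plus a small heating term
        return air_temp + 0.1 * (sea_temp - air_temp) + 0.01

    def ocean_step(sea_temp, air_temp):
        # the ocean responds much more slowly to the air above it
        return sea_temp + 0.005 * (air_temp - sea_temp)

    air, sea = 15.0, 17.0                    # degrees C, arbitrary starting state
    for step in range(1_000):
        new_air = atmosphere_step(air, sea)
        new_sea = ocean_step(sea, air)       # each component sees the other's state
        air, sea = new_air, new_sea
    print(air, sea)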

What are climate models used for?

Perhaps the most frequently discussed application of climate models is simulating how various emissions scenarios will affect future climates. But climate models are also used for many other applications like sensitivity studies, attribution of extreme events, and paleoclimate studies.

An example of a sensitivity study might be to examine how deforestation of the Amazon affects the climate. A sensitivity study would require two simulations: a control simulation with the Amazon rainforest intact, and another with the rainforest replaced by grassland or bare soil. Most of the parameters that define these simulations, like greenhouse gas concentrations, would be kept identical so that only the presence or absence of the Amazon rainforest would be responsible for the differences in climate. The simulations would be run for a period of time, perhaps years or decades, and then the differences between the simulations are analyzed to determine the sensitivity of the climate to whatever is different between the simulations.

Extreme event attribution attempts to determine to what extent climate change is responsible for a particular extreme event. This is very similar to sensitivity studies in that there's a control simulation and a second simulation where some aspect of the climate system like greenhouse gas concentrations is different. For example, if we want to estimate the effect of climate change on an extreme heat wave in Europe, we might run a control simulation with preindustrial greenhouse gas levels and another simulation with present day levels. In this case, the greenhouse gas concentrations would probably be prescribed at a particular level and not permitted to vary during the simulation. These simulations might be run for hundreds or even thousands of years to see how often the extreme event occurs in the preindustrial and the modern simulation. If the heat wave occurs every hundred years with modern greenhouse gas levels but never occurs with preindustrial conditions, the event might be attributed entirely to climate change. If the event occurs in both simulations, we would compare the frequency it occurs in each simulation to estimate how much it can be attributed to climate change.
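The attribution arithmetic can be made concrete with a small sketch; the event counts below are invented for illustration.

    # Compare how often a hypothetical heat wave occurs in two long simulations.
    years_simulated = 1000
    events_preindustrial = 2     # invented: twice in 1000 simulated years
    events_present_day = 10      # invented: ten times in 1000 simulated years

    p0 = events_preindustrial / years_simulated
    p1 = events_present_day / years_simulated

    probability_ratio = p1 / p0          # 5.0 -> event is 5x more likely today
    attributable_fraction = 1 - p0 / p1  # about 0.8 -> ~80% of the risk attributable
    print(probability_ratio, attributable_fraction)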

For paleoclimate simulations, we have much more limited information about the climate. We might know the greenhouse gas concentrations from bubbles of air trapped in ice cores, for example. There may be proxy data like fossil evidence of the plants and animals that lived in a particular location, which can be used to infer whether a climate was hot or cold, or whether it was wet or dry. On the other hand, we certainly won't have detailed observations of things like extreme events, oceanic circulations, and many other aspects of the climate system. In this case, the climate model can be configured to match the known aspects of the past climate as closely as possible, and the simulation is then used to fill in the gaps where we don't have observations. Paleoclimate simulations can also be used to identify biases and errors in the model when it's unable to accurately reproduce past climates. When these errors are discovered, the model can be improved to better simulate past climates, and that also increases our confidence in its ability to extrapolate future climates.

Can we trust climate models?

All weather models and climate models are wrong. A weather model will never forecast the weather with 100% accuracy, though they do a remarkably good job at forecasting a wide range of weather events. The model is still the best tool we have to predict the weather, especially beyond a day or two, where extrapolation just isn't going to be reliable. Many components are shared between weather and climate models, and if these components didn't work correctly, they would also prevent us from producing accurate weather forecasts. Weather models often do have some systematic bias, especially for longer range forecasts, but we can correct for these biases with statistical postprocessing. Every time a weather model is run, it's also helping to verify the accuracy of any components that are shared with climate models.
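As a minimal sketch of what the statistical postprocessing mentioned above can look like (the numbers are invented): estimate the model's mean bias from past forecast/observation pairs and subtract it from a new forecast.

    import numpy as np

    past_forecasts    = np.array([21.0, 19.5, 25.0, 18.0, 23.5])   # model output, deg C
    past_observations = np.array([19.8, 18.1, 23.6, 16.9, 22.0])   # what actually happened

    mean_bias = (past_forecasts - past_observations).mean()        # about +1.3 deg C

    new_raw_forecast = 24.0
    print(new_raw_forecast - mean_bias)     # bias-corrected forecast, about 22.7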

Climate models from a couple of decades ago generated forecasts for the present climate, and once differences in greenhouse gas concentrations are accounted for, they are very accurate at predicting our current climate. Climate models are also used to simulate past climates, and their ability to do so accurately means that we can be more confident in their ability to predict the climate under a much wider range of conditions.

Even when there is a known bias in climate models, it does not invalidate all climate model studies. For example, climate models typically underestimate greenhouse gas sinks, resulting in a high bias in greenhouse gas concentrations for a particular emissions scenario. But we may be able to correct for that bias with statistical postprocessing. Also, many applications of climate models like extreme event attribution, many sensitivity studies, and many paleoclimate simulations do not dynamically simulate the carbon cycle. This means that those applications of climate models would be completely unaffected by the issue with underestimating greenhouse gas sinks.

Many of the climate models like the Goddard Institute for Space Studies models, the Community Earth System Model, and the Weather Research and Forecasting Model (often used in regional climate modeling) are free and open source, meaning that anyone can download the model, examine the source code, and run their own simulations. Data from a large number of climate model simulations is often publicly shared, especially in various intercomparison projects. Climate models are not closely guarded secrets, so anyone can examine and test climate models for themselves, and modify the source code to fix bugs or make improvements.


Original Submission

posted by janrinok on Tuesday June 25, @11:44AM   Printer-friendly
from the cap-that! dept.

Arthur T Knackerbracket has processed the following story:

Scientists at Lawrence Berkeley National Laboratory and UC Berkeley have created "microcapacitors" that address this shortcoming, as highlighted in a study published in Nature. Made from engineered thin films of hafnium oxide and zirconium oxide, these capacitors employ materials and fabrication techniques commonly used in chip manufacturing. What sets them apart is their ability to store significantly more energy than ordinary capacitors, thanks to the use of negative capacitance materials.

Capacitors are one of the basic components of electrical circuits. They store energy in an electric field established between two metallic plates separated by a dielectric material (non-metallic substance). They can deliver power quickly and have longer lifespans than batteries, which store energy in electrochemical reactions.
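For reference (standard physics, not from the article): an ideal parallel-plate capacitor with plate area A, plate separation d, and dielectric permittivity \varepsilon has capacitance and stored energy

    C = \frac{\varepsilon A}{d}, \qquad E = \frac{1}{2} C V^2

so higher-permittivity dielectrics and thinner layers raise the capacitance, and the stored energy grows with the square of the applied voltage V. The quick power delivery and long lifespan mentioned above come from this purely electrostatic storage mechanism, with no chemical reactions involved.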

However, these benefits come at the cost of significantly lower energy densities. Perhaps that's why we've only seen low-powered devices like mice powered by this technology, as opposed to something like a laptop. Plus, the problem is only exacerbated when shrinking them down to microcapacitor sizes for on-chip energy storage.

The researchers overcame this by engineering thin films of HfO2-ZrO2 to achieve a negative capacitance effect. By tuning the composition just right, they were able to get the material to be easily polarized by even a small electric field.

To scale up the energy storage capability of the films, the team placed atomically thin layers of aluminum oxide every few layers of HfO2-ZrO2, allowing them to grow the films up to 100 nm thick while retaining the desired properties.

These films were integrated into three-dimensional microcapacitor structures, achieving record-breaking properties: nine times higher energy density and 170 times higher power density compared to the best electrostatic capacitors today. That's huge.

"The energy and power density we got are much higher than we expected," said Sayeef Salahuddin, a senior scientist at Berkeley Lab, UC Berkeley professor, and project lead. "We've been developing negative capacitance materials for many years, but these results were quite surprising."

It's a major breakthrough, but the researchers aren't resting on their laurels just yet. Now they're working on scaling up the technology and integrating it into full-size microchips while improving the negative capacitance of the films further.


Original Submission

posted by janrinok on Tuesday June 25, @07:03AM   Printer-friendly
from the nostalgia dept.

https://arstechnica.com/gaming/2024/06/in-first-person-john-romero-reflects-on-over-three-decades-as-the-doom-guy/

John Romero remembers the moment he realized what the future of gaming would look like.

In late 1991, Romero and his colleagues at id Software had just released Catacomb 3-D, a crude-looking, EGA-colored first-person shooter that was nonetheless revolutionary compared to other first-person games of the time. "When we started making our 3D games, the only 3D games out there were nothing like ours," Romero told Ars in a recent interview. "They were lockstep, going through a maze, do a 90-degree turn, that kind of thing."

Despite Catacomb 3-D's technological advances in first-person perspective, though, Romero remembers the team at id followed its release by going to work on the next entry in the long-running Commander Keen series of 2D platform games. But as that process moved forward, Romero told Ars that something didn't feel right.

"Within two weeks, [I was up] at one in the morning and I'm just like, 'Guys we need to not make this game [Keen],'" he said. "'This is not the future. The future is getting better at what we just did with Catacomb.' ... And everyone was immediately was like, 'Yeah, you know, you're right. That is the new thing, and we haven't seen it, and we can do it, so why aren't we doing it?'"

The team started working on Wolfenstein 3D that very night, Romero said. And the rest is history.

[...] The early id designers didn't even use basic development tools like version control systems, Romero said. Instead, development was highly compartmentalized between different developers; "the files that I'm going to work on, he doesn't touch, and I don't touch his files," Romero remembered of programming games alongside John Carmack. "I only put the files on my transfer floppy disk that he needs, and it's OK for him to copy everything off of there and overwrite what he has because it's only my files, and vice versa. If for some reason the hard drive crashed, we could rebuild the source from anyone's copies of what they've got."

[...] The formation of id Software in 1991 meant that Wolfenstein 3D "was the first time where we felt we had no limit on time," Romero said. That meant the team could experiment with adding in features drawn from the original 2D Castle Wolfenstein, things like "searching dead soldiers, trying to search in open boxes and drag the soldiers out of visibility from other soldiers, and other stuff that was not just 'mow it down.'"

But Romero says those features were quickly removed since they got in the way of "this high-speed run-and-gun gameplay" that was emerging as the core of the game's appeal.

[...] One of the last steps necessary to get Wolfenstein 3D out quickly, Romero said, was ignoring the publisher's suggestion that they double the game's size from 30 to 60 levels. But Romero said the publisher did give them the smart advice to ditch the aging EGA graphics standard in favor of much more colorful VGA monitors. "You're a marketing guy, you know what's going on, we trust you, we will do that immediately," Romero recalled saying.

[...] "We were making games that we wanted to play," he said. "We weren't worried about audience. We were the audience. We played every game on all the systems back then. We were consumers, and we knew what we wanted to make. We made so many games that we were past our learning years of how to make game designs. We were at the point where the 10,000 hours was over. Way, way over."

[...] "But when we saw the lengths people had to go to just to get access and make levels, it was like, we need to completely open the next game," Romero continued. "That's why Doom's WAD file formats were put out there, and our level formats were out there. That's what let people generate tons of content for Doom."

Romero said that looking back, he thinks Doom hit that sweet spot where players could create robust maps and content without getting bogged down in too many time-consuming details.

[...] Beyond quick WAD file creation, though, Romero expressed awe at what dedicated modders have been able to build on top of the now open source Doom engine over the decades. He specifically pointed to Selaco as a game that shows off just how far the updated GZDoom engine can be taken.

[...] Romero said he admires how the battle royale genre has made the shooter more accessible for players who weren't raised on the tight corridors of Doom and its ilk.

"If someone who's not good at a shooter jumps into [a game like Counter-Strike], they're dead, mercilessly," Romero said. "It's not fun. But what is fun is 'I'm not that good, I'm gonna jump into a game that is a battle royale, and I get to play with other people who are really good, but I actually survive for five minutes. I find some stuff; maybe I shoot somebody and take them out. You're gonna get shot, but you had an experience; you learn something, and you jump in and do it again."


Original Submission

posted by janrinok on Tuesday June 25, @06:36AM   Printer-friendly
https://www.theguardian.com/media/article/2024/jun/25/julian-assange-plea-deal-with-us-free-to-return-australia

Julian Assange has been released from a British prison and is expected to plead guilty to violating US espionage law, in a deal that would allow him to return home to his native Australia.

Assange, 52, agreed to plead guilty to a single criminal count of conspiring to obtain and disclose classified US national defence documents, according to filings in the US district court for the Northern Mariana Islands.

Wikileaks posted on social media a video of its founder boarding a flight at London's Stansted airport on Monday evening and Australian prime minister Anthony Albanese confirmed he had left the UK.

https://www.theguardian.com/media/article/2024/jun/25/julian-assange-may-be-on-his-way-to-freedom-but-this-is-not-a-clear-victory-for-freedom-of-the-press

The release from a UK prison of Julian Assange is a victory for him and his many supporters around the world, but not necessarily a clear win for the principle underlying his defence, the freedom of the press.

The charges Assange is anticipated to plead guilty to as part of a US deal, and for which he will be sentenced to time served, are drawn from the 1917 Espionage Act, for "conspiring to unlawfully obtain and disseminate classified information related to the national defense of the United States".

So although the WikiLeaks founder is expected to walk free from the US district court in Saipan after Wednesday's hearing, the Espionage Act will still hang over the heads of journalists reporting on national security issues, not just in the US. Assange himself is an Australian, not a US citizen.

Live: Father of Julian Assange hints at son's return to Australia after prison release - ABC News:

Nothing is certain until it happens and there's a lot we still don't know about how Julian Assange's case will proceed.

A lot of our understanding at this stage is coming from the court documents, which state that he'll appear before a judge in Saipan at 9am local time tomorrow.

An email from the Department of Justice (DOJ) to the judge in the Northern Mariana Islands states that Assange is expected to plead guilty to one count of conspiracy to obtain and disclose national defence information, and that he'll be sentenced for that offence.

American media outlets are reporting that the plea deal would need to be approved by the judge, and WikiLeaks has described the agreement as having "not yet been formally finalised."

But Assange's departure from the UK is a massive development in the case, and the court document says the DOJ expects he'll return to Australia "at the conclusion of the proceedings".

posted by hubie on Tuesday June 25, @02:17AM   Printer-friendly

As Apple enters the AI race, it's also looking for help from partners:

During the announcement of Apple Intelligence earlier this month, Apple said it would be partnering with OpenAI to bring ChatGPT into the revamped version of Siri. Now, the Wall Street Journal reports that Apple and Facebook's parent company Meta are in talks around a similar deal.

[...] As Sarah Perez noted, Apple's approach to AI currently sounds a bit boring and practical — rather than treating this as an opportunity for wholesale reinvention or disruption, it's starting out by adding AI-powered features (like writing suggestions and custom emojis) to its existing products. But emphasizing practicality over flashiness might be the key to AI adoption. Then, Apple can leverage partnerships to go beyond the capabilities of its own AI models.

So a deal with Meta could make Apple less reliant on a single partner, while also providing validation for Meta's generative AI tech. The Journal reports that Apple isn't offering to pay for these partnerships; instead, Apple provides distribution to AI partners, who can then sell premium subscriptions.

[...] In another recent development, Apple has also said that while Apple Intelligence is set to roll out in the newest versions of its operating systems (including iOS 18, iPadOS 18, and macOS Sequoia) later this year, it plans to hold the technology back from the European Union, due to the EU's Digital Markets Act (which is supposed to encourage competition in digital markets). It also said it will hold back iPhone Mirroring and SharePlay Screen Sharing.

"We are concerned that the interoperability requirements of the DMA could force us to compromise the integrity of our products in ways that risk user privacy and data security," the company said in a statement.


Original Submission

posted by hubie on Monday June 24, @09:35PM   Printer-friendly

http://www.righto.com/2024/06/montreal-mifare-ultralight-nfc.html

To use the Montreal subway (the Métro), you tap a paper ticket against the turnstile and it opens. The ticket works through a system called NFC, but what's happening internally? How does the ticket work without a battery? How does it communicate with the turnstile? And how can it be so cheap that you can throw the ticket away after one use? To answer these questions, I opened up a ticket and examined the tiny chip inside.


Original Submission

posted by hubie on Monday June 24, @04:48PM   Printer-friendly

Using stalkerware is creepy, unethical, potentially illegal, and puts your data and that of your loved ones in danger:

Last week, an unknown hacker broke into the servers of the U.S.-based stalkerware maker pcTattletale. The hacker then stole and leaked the company's internal data. They also defaced pcTattletale's official website with the goal of embarrassing the company.

"This took a total of 15 minutes from reading the techcrunch article," the hackers wrote in the defacement, referring to a recent TechCrunch article where we reported that pcTattletale was used to monitor several front desk check-in computers at Wyndham hotels across the United States.

As a result of this hack, leak and shame operation, pcTattletale founder Bryan Fleming said he was shutting down his company.

Consumer spyware apps like pcTattletale are commonly referred to as stalkerware because jealous spouses and partners use them to surreptitiously monitor and surveil their loved ones. These companies often explicitly market their products as solutions to catch cheating partners by encouraging illegal and unethical behavior. And there have been multiple court cases, journalistic investigations, and surveys of domestic abuse shelters that show that online stalking and monitoring can lead to cases of real-world harm and violence.

And that's why hackers have repeatedly targeted some of these companies.

According to TechCrunch's tally, with this latest hack, pcTattletale has become the 20th stalkerware company since 2017 that is known to have been hacked or leaked customer and victims' data online. That's not a typo: Twenty stalkerware companies have either been hacked or had a significant data exposure in recent years. And three stalkerware companies were hacked multiple times.

[...] But a company closing doesn't mean it's gone forever. As with Spyhide and SpyFone, some of the same owners and developers behind a shuttered stalkerware maker simply rebranded.

"I do think that these hacks do things. They do accomplish things, they do put a dent in it," Galperin said. "But if you think that if you hack a stalkerware company, that they will simply shake their fists, curse your name, disappear in a puff of blue smoke and never be seen again, that has most definitely not been the case."

"What happens most often, when you actually manage to kill a stalkerware company, is that the stalkerware company comes up like mushrooms after the rain," Galperin added.

[...] Using spyware to monitor your loved ones is not only unethical, it's also illegal in most jurisdictions, as it's considered unlawful surveillance.

That is already a significant reason not to use stalkerware. Then there is the issue that stalkerware makers have proven time and time again that they cannot keep data secure — neither data belonging to the customers nor their victims or targets.

Apart from spying on romantic partners and spouses, some people use stalkerware apps to monitor their children. While this type of use, at least in the United States, is legal, it doesn't mean using stalkerware to snoop on your kids' phone isn't creepy and unethical.

Even if it's lawful, Galperin thinks parents should not spy on their children without telling them, and without their consent.

If parents do inform their children and get their go-ahead, parents should stay away from insecure and untrustworthy stalkerware apps, and use parental tracking tools built into Apple phones and tablets and Android devices that are safer and operate overtly.

If you or someone you know needs help, the National Domestic Violence Hotline (1-800-799-7233) provides 24/7 free, confidential support to victims of domestic abuse and violence. If you are in an emergency situation, call 911. The Coalition Against Stalkerware has resources if you think your phone has been compromised by spyware.


Original Submission

posted by hubie on Monday June 24, @12:04PM   Printer-friendly

https://salvagedcircuitry.com/2000a-nand-recovery.html

I ran across an oscilloscope in need of love and attention on the Internet's favorite online auction site. After some back and forth with the seller, I found out that the scope didn't boot, one of the telltale problems of the Agilent 2000a / 3000a / 4000a X-series oscilloscopes. The no-boot condition can be caused by one of three things: a failed power supply, the mischievous NAND corruption error, or both. The seller took my lowball offer of $220 and just like that I had another project in my life.

On initial opening, the scope looked like it had road rash. Every single knob and edge of the plastic shell had distinct pavement scuff marks. There were some cracks in the front plastic bezel and rear molded fan grill, further confirming that this scope was dropped multiple times. The horizontal adjust encoder was also bent and two knobs were missing.

Not the end of the world; let's double check the description. I plugged it in and the scope powered up with 3 of the 4 indicator lights: Ref, Math, and Digital were illuminated, with nothing on Serial. The scope stays perpetually in this state with nothing displayed on the LCD. Button presses yield no response. The continuously-on fan and 3 indicator lights are the only signs of life. Using the special sequence of power button + unlock cal switch displays no lights on the instrument. The seller clearly did not test the instrument.


Original Submission