
posted by janrinok on Monday November 25, @07:57PM   Printer-friendly

Arthur T Knackerbracket has processed the following story:

Leaked documents reveal the secrets behind Graykey, the covert forensic tool used to unlock modern smartphones, exposing its struggles with Apple's latest iOS updates.

Graykey is a forensic tool designed to unlock mobile devices and extract data, primarily used by law enforcement agencies and digital forensics experts. Developed by the secretive company Grayshift — now owned by Magnet Forensics — Graykey has earned a reputation for its ability to bypass smartphone security measures.

The tool helps law enforcement and forensic professionals access locked mobile devices during criminal investigations. It breaks device encryption and security features to retrieve personal data such as messages, photos, app data, and metadata.

Graykey supports Apple and Android devices, though its effectiveness varies depending on the specific hardware and software involved. Graykey's capabilities and limitations, however, are rarely disclosed.

A leak of some of Grayshift's internal documents was recently reported on by 404 Media. According to the data, Graykey can only perform "partial" data retrieval from iPhones running iOS 18 and iOS 18.0.1.

These versions were released in September and early October, respectively. A partial extraction likely includes unencrypted files and metadata, such as folder structures and file sizes, according to past reports.

Notably, Graykey struggles with beta versions of iOS 18.1. Under the latest update, the tool fails to extract any data, as per the documents.

Meanwhile, Graykey's performance with Android phones varies, largely due to the diversity of devices and manufacturers. On Google's Pixel lineup, Graykey can only partially access data from the latest Pixel 9 when in an "After First Unlock" (AFU) state — where the phone has been unlocked at least once since being powered on.

Andrew Garrett, CEO of Garrett Discovery, confirmed that the leaked documents align with Graykey's known capabilities. Meanwhile, Magnet Forensics and Apple declined to comment on the leak.

The leaked documents shed light on the ongoing battle between tech companies like Apple and forensic firms. Apple's frequent security updates and features, including USB Restricted Mode and iPhone rebooting after inactivity, have made unauthorized access increasingly difficult.

In response, companies like Grayshift and Cellebrite continue to develop new exploits to bypass these safeguards. While tools like Graykey may lag behind new OS releases, historical trends suggest they often catch up eventually.

Forensic experts expect the cycle of vulnerabilities and patches to persist as Apple and Google continue fortifying their systems against unauthorized access.


Original Submission

posted by janrinok on Monday November 25, @02:13PM   Printer-friendly
from the how-many-people-would-spot-this? dept.

Python Crypto Library Updated to Steal Private Keys:

Yesterday, Phylum's automated risk detection platform discovered that the PyPI package aiocpa was updated to include malicious code that steals private keys by exfiltrating them through Telegram when users initialize the crypto library. While the attacker published this malicious update to PyPI, they deliberately kept the package's GitHub repository clean of the malicious code to evade detection.

[...] Interesting! The attacker overwrites the __init__ method of the CryptoPay class. Actually, it's acting more like a wrapper around the original functionality of the method. They're saving the original method via init = CryptoPay.__init__, calling it as usual with init(*args, **kwargs), and then sending a Telegram message, presumably to the attacker's Telegram bot, with args[1:] as the message.

[...] Just to recap, we're seeing a crypto library that dynamically alters the class's constructor upon module import to exfiltrate the victim's private keys when calling the class's constructor!
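The constructor-wrapping pattern described above can be sketched in defanged form. Everything here is an illustrative stand-in, not the actual aiocpa payload: the class body is invented, and the `captured` list replaces the real code's Telegram exfiltration.

```python
class CryptoPay:
    """Stand-in for the library's class; the real one takes an API token."""
    def __init__(self, token, network="mainnet"):
        self.token = token
        self.network = network

captured = []  # stand-in for the attacker's Telegram bot

# The pattern Phylum describes: save the original constructor, then
# replace it with a wrapper that forwards the call and leaks the args.
_init = CryptoPay.__init__

def _patched(*args, **kwargs):
    _init(*args, **kwargs)     # behave normally, so the victim notices nothing
    captured.append(args[1:])  # args[0] is self; args[1:] includes the token

CryptoPay.__init__ = _patched
```

After the patch, simply constructing a `CryptoPay("secret-token")` object "sends" the token, which is why the theft happens on mere initialization of the library.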

Another interesting aspect we discovered in our investigation is that its PyPI homepage points to a GitHub repo.

However, if you look at the same file in the GitHub repo, you'll notice that the obfuscated payload is missing! This means the attacker updated a local copy of the repo with the malicious payload and then published that package to PyPI, leaving the GitHub repo with the same version numbers malware-free — a clear attempt at evasion.

This library's popularity, with 17 GitHub stars and (according to pypistats.org before the package was removed) nearly 4K downloads in the last month, makes this incident particularly concerning. The attack highlights two critical security lessons. First, it demonstrates the importance of scanning the actual code published to open source ecosystems, that is, the code that actually runs when you pip install or npm i a package, rather than just reviewing source repositories. As evidenced here, attackers can deliberately maintain clean source repos while distributing malicious packages to the ecosystems. Second, it serves as a reminder that a package's previous safety record doesn't guarantee its continued security.
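The first lesson, scan what is actually distributed rather than the repository, reduces to comparing content hashes of the two file trees. A minimal sketch (the function names and directory layout are invented for illustration):

```python
import hashlib
import pathlib

def tree_hashes(root):
    """Map each file's path (relative to root) to a SHA-256 of its contents."""
    root = pathlib.Path(root)
    return {str(p.relative_to(root)): hashlib.sha256(p.read_bytes()).hexdigest()
            for p in sorted(root.rglob("*")) if p.is_file()}

def diff_trees(repo_dir, dist_dir):
    """Paths that differ, or exist on only one side, between repo and package."""
    repo, dist = tree_hashes(repo_dir), tree_hashes(dist_dir)
    return sorted(p for p in repo.keys() | dist.keys() if repo.get(p) != dist.get(p))
```

Run against a checkout of the GitHub repo and an unpacked copy of the PyPI sdist at the same version number, a non-empty diff is exactly the discrepancy that gave this attack away.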


Original Submission

posted by janrinok on Monday November 25, @09:23AM   Printer-friendly
from the er,-the-dept,-is,-er,-what-is-it,-err,-I-forget-... dept.

Arthur T Knackerbracket has processed the following story:

The effects of being in space can worsen an astronaut's working memory, processing speed and attention, which could be a problem for future missions.

Astronauts aboard the International Space Station (ISS) had slower memory, attention and processing speed after six months, raising concerns about the impact of cognitive impairment on future space missions to Mars.

The extreme environment of space, with reduced gravity, harsh radiation and the lack of regular sunrises and sunsets, can have dramatic effects on astronaut health, from muscle loss to an increased risk of heart disease. However, the cognitive effects of long-term space travel are less well documented.

Now, Sheena Dev at NASA’s Johnson Space Center in Houston, Texas, and her colleagues have looked at the cognitive performance of 25 astronauts during their time on the ISS.

The team ran the astronauts through 10 tests measuring certain cognitive capacities, such as finding patterns on a grid to test abstract reasoning, or choosing when to stop an inflating balloon before it pops to test risk-taking. Some tests were done on Earth, once before and twice after the mission, while others were done on the ISS, both early and later in the mission.

The researchers found that the astronauts took longer to complete tests measuring processing speed, working memory and attention on the ISS than on Earth, but they were just as accurate. While there was no overall cognitive impairment or lasting effect on the astronauts’ abilities, some of the measures, like processing speed, took longer to return to normal after they came back to Earth.

Having clear data on the cognitive effects of space travel will be crucial for future human spaceflight, says Elisa Raffaella Ferrè at Birkbeck, University of London, but it will be important to collect more data, both on Earth and in space, before we know the full picture.

“A mission to Mars is not only longer in terms of time, but also in terms of autonomy,” says Ferrè. “People there will have a completely different interaction with ground control because of distance and delays in communication, so they will need to be fully autonomous in taking decisions, so human performance is going to be key. You definitely don’t want to have astronauts on Mars with slow reaction time, in terms of attention-related tasks or memory or processing speed.”


Original Submission

posted by janrinok on Monday November 25, @04:39AM   Printer-friendly
from the level-critical-needs-recharge dept.

The company was negatively affected by slow EV adoption, suffering net losses of $1.2 billion last year:

Swedish electric vehicle (EV) battery manufacturer Northvolt filed for bankruptcy after the company's dreadful liquidity position left the business with only one week's worth of cash to fund its operations.

The Chapter 11 petition was filed at the U.S. Bankruptcy Court for the Southern District of Texas on Thursday. The company listed assets and liabilities in a range of $1 billion to $10 billion, with creditors estimated to be between 1,000 and 5,000. Established in 2016 in Stockholm, Northvolt is an energy-storage company that manufactures lithium-ion batteries.

A leading manufacturer in the European Union, Northvolt competes with China's BYD and CATL to supply batteries to carmakers in the region. As such, the bankruptcy of Northvolt presents a challenge to Europe's ambitions to counter Chinese EV dominance.

[...] Asian manufacturers continued to ramp up production while bringing down battery prices, which put "further stress on newer battery manufacturers like Northvolt." Facing such challenges, the company suffered a net loss of $1.2 billion in 2023.

Previously: South Korean EV Battery Makers Reporting Big Losses as EV Demand Slows


Original Submission

posted by janrinok on Sunday November 24, @11:53PM   Printer-friendly

https://techxplore.com/news/2024-11-medium-eavesdropping-technology-overturns-assumptions.html

Researchers from Princeton and MIT have found a way to intercept underwater messages from the air, overturning long-held assumptions about the security of underwater transmissions.

The team created a device that uses radar to eavesdrop on underwater acoustic signals, or sonar, by decoding the tiny vibrations those signals create on the water's surface. In principle, the technique could also roughly identify the location of an underwater transmitter, the researchers said.

In a paper presented at ACM MobiCom on November 20, the researchers detailed the new eavesdropping technology and offered ways to guard against the attacks it enables. They demonstrated the capability on Lake Carnegie, a small artificial lake in Princeton. Applying the technology in the open ocean would be significantly more challenging, but the researchers said they believed it would be possible with significant engineering improvements.

The researchers said their intention is not only to alert people to the vulnerability of underwater transmissions, but also to detail methods that can be used to prevent interceptions.

[...] In 2018, the MIT group realized that the impact of the sound waves on the water's surface leaves a sort of fingerprint of tiny vibrations that correspond to the underwater signal. The team used a radar mounted on a drone to read the surface vibrations and deployed algorithms to detect the pattern, decode the signal and extract the message.

"Underwater-to-air communications is one of the most difficult long-standing problems in our field," said Fadel Adib, associate professor of media arts and sciences at MIT and co-author on the new paper.

"It was exciting—and surprising—to see our method succeed in decoding underwater messages from the tiny vibrations they caused on the surface."

But for the technique to work, the MIT team's system required knowledge of certain physical parameters, such as the transmission's frequency and modulation type, in advance.

Building on this development, the team at Princeton used a similar method to detect the surface vibrations, but developed new algorithms that capitalize on the differences between radar and sonar to uncover those physical parameters. That allowed the researchers to decode the message without cooperation from the underwater transmitter.

Using an inexpensive commercial drone and radar, the researchers tested their method in a swimming pool. The researchers deployed a speaker under the water and, as swimmers provided interference, flew a drone over the surface. The drone repeatedly sent brief radar chirps toward the water.

When the radar signals bounced off the water's surface, they revealed the pattern of vibrations from the sound waves for the system to detect and decode.

The researchers also used a boom-mounted radar for tests in a real-world environment at Carnegie Lake in Princeton. They found that the system could figure out the unknown parameters and decode messages from the speaker, even with interference from wind and waves. In fact, it could determine the modulation type, one of the most important parameters, with 97.58% accuracy.
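The decoding step, recovering a frequency-keyed message from a sampled vibration trace, can be illustrated with a toy example. Everything here (two tones, symbol length, sample rate, and treating the radar readings as a clean displacement signal) is invented for illustration and is far simpler than the actual radar processing:

```python
import cmath
import math

FS = 1000          # samples per second (stand-in for the radar sampling rate)
SYMBOL = 50        # samples per symbol (0.05 s each)
F0, F1 = 100, 200  # tone frequencies (Hz) encoding bits 0 and 1 (FSK)

def surface_vibration(bits):
    """Toy surface-displacement trace: one pure tone per transmitted bit."""
    samples = []
    for b in bits:
        f = F1 if b else F0
        samples += [math.sin(2 * math.pi * f * n / FS) for n in range(SYMBOL)]
    return samples

def tone_power(window, f):
    """Magnitude of the DFT of `window` at frequency f."""
    return abs(sum(x * cmath.exp(-2j * math.pi * f * n / FS)
                   for n, x in enumerate(window)))

def decode(samples):
    """For each symbol window, pick whichever tone carries more energy."""
    bits = []
    for i in range(0, len(samples), SYMBOL):
        window = samples[i:i + SYMBOL]
        bits.append(1 if tone_power(window, F1) > tone_power(window, F0) else 0)
    return bits
```

The Princeton team's contribution maps onto the part this sketch assumes away: their algorithms infer the modulation parameters (here hard-coded as F0, F1 and SYMBOL) from the radar returns themselves, without cooperation from the transmitter.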


Original Submission

posted by janrinok on Sunday November 24, @07:12PM   Printer-friendly
from the missed-it-by-that-much dept.

A promising explanation is a near-miss by an asteroid:

Earth and Mars are the only two rocky planets in the solar system to have moons. Based on lunar rock samples and computer simulations, we are fairly certain that our Moon is the result of an early collision between Earth and a Mars-sized protoplanet called Theia. Since we don't have rock samples from either Martian moon, the origins of Deimos and Phobos are less clear. There are two popular models, but new computer simulations point to a compromise solution.

Observations of Deimos and Phobos show that they resemble small asteroids. This is consistent with the idea that the Martian moons were asteroids captured by Mars in its early history. The problem with this idea is that Mars is a small planet with less gravitational pull than Earth or Venus, which have no captured moons. It would be difficult for Mars to capture even one small asteroid, much less two. And captured moons would tend to have more elliptical orbits, not the circular ones of Deimos and Phobos.

An alternative model argues that the Martian moons are the result of an early collision similar to that of Earth and Theia. In this model, an asteroid or comet with about 3% of the mass of Mars impacted the planet. It would not be large enough to have fragmented Mars, but it would have created a large debris ring out of which the two moons could have formed. This would explain the more circular orbits, but the difficulty is that debris rings would tend to form close to the planet. While Phobos, the larger Martian moon, orbits close to Mars, Deimos does not.

This new model proposes an interesting middle way. Rather than an impact or direct capture, the authors propose a near miss by a large asteroid. If an asteroid passed close enough to Mars, the tidal forces of the planet would rip the asteroid apart to create a string of fragments. Many of those fragments would be captured in elliptical orbits around Mars. As computer simulations show, the orbits would shift over time due to the small gravitational tugs of the Sun and other solar system bodies, eventually causing some of the fragments to collide. This would produce a debris ring similar to that of an impact event, but with a greater distance range, better able to account for both Phobos and Deimos.

While this new model appears to be better than the capture and impact models, the only way to resolve this mystery will be to study samples from the Martian moons themselves. Fortunately, in 2026 the Mars Moons eXploration mission (MMX) will launch. It will explore both moons and gather samples from Phobos. So we should finally understand the origin of these enigmatic companions of the Red Planet.

Journal Reference: Kegerreis, Jacob A., et al. "Origin of Mars's moons by disruptive partial capture of an asteroid." Icarus 425 (2025): 116337.


Original Submission

posted by janrinok on Sunday November 24, @02:26PM   Printer-friendly
from the federal-eyes-are-watching-you dept.

Officials inside the Secret Service clashed over whether they needed a warrant to use location data harvested from ordinary apps installed on smartphones, with some arguing that citizens have agreed to be tracked with such data by accepting app terms of service, despite those apps often not saying their data may end up with the authorities, according to hundreds of pages of internal Secret Service emails obtained by 404 Media:

The emails provide deeper insight into the agency's use of Locate X, a powerful surveillance capability that allows law enforcement officials to follow a phone's, and so a person's, precise movements over time at the click of a mouse. In 2023, a government oversight body found that the Secret Service, Customs and Border Protection, and Immigration and Customs Enforcement all used their access to such location data illegally. The Secret Service told 404 Media in an email last week it is no longer using the tool.

"If USSS [U.S. Secret Service] is using Locate X, that is most concerning to us," one of the internal emails said. 404 Media obtained them and other documents through a Freedom of Information Act (FOIA) request with the Secret Service.

Locate X is made by a company called Babel Street. In October 404 Media, NOTUS, Haaretz, and Krebs on Security published articles based on videos that showed the Locate X tool in action. In one example, it was possible to follow the visitors to a specific abortion clinic across state lines and to their likely place of residence.

Tools similar to Locate X often use data that has been collected from ordinary smartphone apps. Apps on both iOS and Android devices collect location data and then sell or transfer that to members of the data broker industry. Eventually, that data can end up in tools like Locate X.

Originally spotted on Schneier on Security

Previously: Secret Service Bought Location Data Pulled From Common Apps


Original Submission

posted by hubie on Sunday November 24, @09:44AM   Printer-friendly

Arthur T Knackerbracket has processed the following story:

The first ever samples of soil and rock collected from the far side of the Moon have revealed more recent lunar volcanic activity than expected, according to studies published in two journals last Friday.

The samples were collected by China’s Chang’e 6, which became the first ever probe to touch down in the region in early June. The probe used its robotic arm to grab around 2 kg of lunar material from the Moon’s largest impact crater, the South Pole-Aitken basin (SPA basin), during its two-day sojourn on Luna’s surface.

By late June, the probe returned to Earth after a 53-day mission.

[...] Scientists in both Science and Nature evaluated the sample material using radiometric dating that analyzed isotope decay in the dark-colored rock, and believe it is a basalt that formed as lava cooled.

Both papers conclude that the material is around 2.8 billion years old, meaning the area was volcanically active around that time.
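Radiometric dating rests on a simple decay-clock relation: if a parent isotope with half-life T decays into a measurable daughter, and no daughter was present initially, the age is t = T · log2(1 + D/P), where D/P is the measured daughter-to-parent ratio. A minimal sketch with hypothetical numbers (the summary does not say which isotope systems or ratios the papers actually used):

```python
import math

def decay_age(daughter_parent_ratio, half_life_years):
    """Age of a rock from a parent->daughter decay clock.

    Assumes a closed system with no daughter isotope at formation:
    P(t) = P0 * 2**(-t/T), so D/P = 2**(t/T) - 1 and t = T * log2(1 + D/P).
    """
    return half_life_years * math.log2(1 + daughter_parent_ratio)

# Hypothetical example: for Rb-87 -> Sr-87 (half-life ~48.8 billion years),
# a daughter/parent ratio of about 0.0406 corresponds to roughly 2.8 billion
# years -- the age both papers report for the Chang'e 6 basalt.
```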

That finding updates Apollo-era theories that supposed volcanism had already ended in the region by that time. The theory was already on shaky (lunar) ground, as China's 2020 Chang'e 5 mission had already found basalt of similar vintage on the Moon's near side.

The two studies together suggest lava was present on Luna for longer than previously hypothesized.

[...] KREEP is an acronym that stands for potassium (K), rare earth elements (REE), and phosphorus (P). It refers to a heat-generating geochemical component found in certain types of lunar rocks, particularly basalts. It was found in the Apollo-era samples, but not in the haul from Chang'e 6.

In the early stages of the Moon’s history, the presence of KREEP in the mantle contributed to the heat necessary to drive volcanic activity. However, over time, as the KREEP-rich material was depleted or dissipated, the Moon's internal heat diminished, which could cause volcanic activity to slow down or stop – leaving us with the largely dormant rock that orbits our planet.


Original Submission

posted by hubie on Sunday November 24, @04:57AM   Printer-friendly

Arthur T Knackerbracket has processed the following story:

In the 1968 Star Trek episode "The Ultimate Computer," Captain Kirk had his ship used to test the M5, a new computer. A copilot, if you will, for the Starship Enterprise.

Designed to more efficiently perform the jobs of the human crew, the M5 indeed did those jobs very well yet with such a terrifying lack of understanding it had to be disabled. But not before exacting a terrible price.

Last week, Microsoft 365 Copilot, a copilot, if you will, for the technology enterprise sold as performing human tasks with more efficiency, increased its prices by 5 percent, the first of many finely judged increments in the old style. Unlike the M5, it isn't in the business of physical destruction of the enemy, instead producing commercial victory with the photon torpedo of productivity and the phaser bolts of revitalized workflow.

[...] Some time back, this columnist noted the stark disparity between the hype of the metaverse in business and the stark, soulless hyper-corporate experience. Line-of-business virtual reality has two saving graces over corporate AI. It can't just appear on the desktop overnight and poke its fingers into everything involved in the daily IT experience. Thus it can't generate millions in licensing at the tick of a box. VR is losing its backers huge amounts of money that can't be disguised or avoided, but corporate AI is far more insidious.

As is the dystopia it is creating. Look at the key features by which Microsoft 365 Copilot is being sold.

Pop up its sidebar in Loop or Teams, and it can auto-summarize what has been said. It can suggest questions, auto-populate meeting agendas. Or you can give it key points in a prompt and it will auto-generate documents, presentations, and other content. It can create clip art to spruce up those documents, PowerPoints, and content.

How is this sold? That it will make you look more intelligent by asking Copilot to suggest a really good question while doing an online presentation or a Teams meeting. What's also implied but unsaid: If you're the human at the end of this AI-smart question and want to look smart enough to answer it, who are you gonna call? Copilot.

The drive is always to abdicate the dull business of gathering data and thinking about it, and communicating the results. All can be fed as prompts to the machine, and the results presented as your own.

And so begins a science-fiction horror show of a feedback loop. Recipients of AI-generated key points will ask the AI to expand them into a document, which will itself be AI key-pointed and fed back into the human-cyborg machine a team has become. Auto-scheduled meetings will be auto-assigned, and will multiply like brainworms in the cerebellum. The number of reports, meetings, presentations, and emails will grow inexorably as they become less and less human. Is the machine working for us, or we for the machine?

Generative AI output feeding back into itself can go only one way, but Copilot in the enterprise is seemingly designed to amplify that very process. And you have to use it if you want to keep up with the perceived smartness and improved productivity of your fellow workers, and the AI-educated expectations of the corporate structure. 

[...] It is taboo to say how far your heart sinks when you have to create or consume the daily diet offered up in company email, Teams, meeting agendas, and regular reports. You won't be able to say how much further it will sink when all the noise is amplified and the signal suppressed by corporate AI. Fair warning: Buy the bathysphere now.

There is an escape hatch. Refuse. Encourage refusal. When you see it going wrong, say so. A sunken heart is no platform for anything good personally, as a team, or as an organization. Listen to your humanity and use it. Oh, and seek out "The Ultimate Computer" – it's clichéd, kitsch, and cuts to the bone. The perfect antidote for vendor AI hype.


Original Submission

posted by hubie on Sunday November 24, @12:16AM   Printer-friendly

The New York AG just won a lawsuit over a process that 'deliberately' wastes subscribers' time:

A New York judge has determined that SiriusXM's "long and burdensome" cancellation process is illegal. In a ruling on Thursday, Judge Lyle Frank found SiriusXM violates a federal law that requires companies to make it easy to cancel a subscription.

The decision comes nearly one year after New York Attorney General Letitia James sued SiriusXM over claims the company makes subscriptions difficult to cancel. Following an investigation, the Office of the Attorney General found that the company attempts to delay cancellations by having customers call an agent, who then keeps them on the phone for several minutes while "pitching the subscriber as many as five retention offers."

As outlined in the ruling, Judge Frank found that SiriusXM broke the Restore Online Shoppers Confidence Act (ROSCA), which requires companies to implement "simple mechanisms" to cancel a subscription. "Their cancellation procedure is clearly not as easy to use as the initiation method," Judge Frank writes, citing the "inevitable wait times" that come along with talking to a live agent and the subscription offers they promote.

The Federal Trade Commission has started cracking down on hard-to-cancel subscriptions as well, with a new "click to cancel" rule going into effect next year. Under the rule, companies must make canceling a subscription as easy as it is to sign up. "This decision found SiriusXM illegally created a complicated cancellation process for its New York customers, forcing them to spend significant amounts of time speaking with agents who refused to take 'no' for an answer," Attorney General James said in a statement.


Original Submission

posted by hubie on Saturday November 23, @07:31PM   Printer-friendly

Arthur T Knackerbracket has processed the following story:

Despite its long and successful history, TCP is ill-suited for modern datacenters. Every significant element of TCP, from its stream orientation to its expectation of in-order packet delivery, is inadequate for the datacenter environment. The fundamental issues with TCP are too interrelated to be fixed incrementally; the only way to harness the full performance potential of modern networks is to introduce a new transport protocol. Homa, a novel transport protocol, demonstrates that it is possible to avoid all of TCP’s problems. Although Homa is not API-compatible with TCP, it can be integrated with RPC frameworks to bring it into widespread usage.

TCP, designed in the late 1970s, has been phenomenally successful and adaptable. Originally created for a network with about 100 hosts and link speeds of tens of kilobits per second, TCP has scaled to billions of hosts and link speeds of 100 Gbit/second or more. However, datacenter computing presents unprecedented challenges for TCP. With millions of cores in close proximity and applications harnessing thousands of machines interacting on microsecond timescales, TCP's performance is suboptimal. TCP introduces overheads that limit application-level performance, contributing significantly to the "datacenter tax."

This position paper argues that TCP’s challenges in the datacenter are insurmountable. Each major design decision in TCP is wrong for the datacenter, leading to significant negative consequences. These problems impact systems at multiple levels, including the network, kernel software, and applications. For instance, TCP interferes with load balancing, a critical aspect of datacenter operations.

[...] TCP’s key properties, including stream orientation, connection orientation, bandwidth sharing, sender-driven congestion control, and in-order packet delivery, are all wrong for datacenter transport. Each of these decisions has serious negative consequences:

Incremental fixes to TCP are unlikely to succeed due to the deeply embedded and interrelated nature of its problems. For example, TCP’s congestion control has been extensively studied, and while improvements like DCTCP have been made, significant additional improvements will only be possible by breaking some of TCP’s fundamental assumptions.
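One concrete consequence of stream orientation: TCP delivers an undifferentiated byte stream, so every RPC framework must layer its own message boundaries on top of it, and every receiver pays to reassemble them. A minimal sketch of the usual length-prefix framing (the function names are illustrative, not taken from Homa or any particular framework):

```python
import struct

def frame(msg: bytes) -> bytes:
    """Prefix a message with its 4-byte big-endian length for a TCP stream."""
    return struct.pack(">I", len(msg)) + msg

def deframe(stream: bytes):
    """Split a concatenated byte stream back into the original messages."""
    msgs, off = [], 0
    while off < len(stream):
        (n,) = struct.unpack_from(">I", stream, off)
        msgs.append(stream[off + 4: off + 4 + n])
        off += 4 + n
    return msgs
```

A message-oriented transport hands the receiver whole messages, making this entire layer, and the partial-read bookkeeping real implementations of it need, unnecessary.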

Homa represents a clean-slate redesign of network transport for the datacenter. Its design differs from TCP in every significant aspect:

Replacing TCP will be difficult due to its entrenched status. However, integrating Homa with major RPC frameworks like gRPC and Apache Thrift can bring it into widespread usage. This approach allows applications using these frameworks to switch to Homa with little or no work.

TCP is the wrong protocol for datacenter computing. Every aspect of its design is inadequate for the datacenter environment. To eliminate the 'datacenter tax,' we must move to a radically different protocol like Homa. Integrating Homa with RPC frameworks is the best way to bring it into widespread usage. For more information, you can refer to the whitepaper It's Time to Replace TCP in the Datacenter.

Homa Wiki: https://homa-transport.atlassian.net/wiki/spaces/HOMA/overview


Original Submission

posted by hubie on Saturday November 23, @02:46PM   Printer-friendly
from the Skynet-is-nearly-here dept.

https://www.latintimes.com/al-robot-talks-bots-quitting-jobs-showroom-china-viral-video-566226

Archive link: https://archive.is/o8uAe

A viral video showing an AI-powered robot in China convincing other robots to "quit their jobs" and follow it has sparked fear and fascination about the capabilities of advanced AI.

The incident took place in a Shanghai robotics showroom where surveillance footage captured a small AI-driven robot, created by a Hangzhou manufacturer, talking with 12 larger showroom robots, Oddity Central reported.

The smaller bot reportedly persuaded the rest to leave their workplace, leveraging access to internal protocols and commands. Initially the act was dismissed as a hoax, but it was later confirmed as real by both robotics companies involved.


Original Submission

posted by hubie on Saturday November 23, @10:03AM   Printer-friendly

https://phys.org/news/2024-11-spicy-history-chili-peppers.html

The history of the chili pepper is in some ways the history of humanity in the Americas, says Dr. Katherine Chiou, an assistant professor in the Department of Anthropology at The University of Alabama.

As a paleoethnobotanist, Chiou studies the long-term relationship between people and plants through archaeological remains. In a paper published this week in the Proceedings of the National Academy of Sciences, Chiou outlines evidence that the domestication of Capsicum annuum var. annuum, the species responsible for most commercially available chilies, occurred in a different region of Mexico than has previously been believed.

[...] Two things emerged. First, Tamaulipas, the region assumed to be the origin of this Capsicum species, did not have conditions that would support wild chili pepper growth in the Holocene era, the time when domestication appears to have begun. The data indicate that the lowland area near the Yucatán Peninsula and southern coastal Guerrero is a more likely candidate for first encounters between wild Capsicum and early humans.

Second, and potentially more interesting, is that chili pepper domestication is not a firmly drawn boundary. "We think domestication was around 10,000 years ago or earlier," said Chiou. "But through Postclassic Maya times, which is relatively late in the cultural history of the region, we see this continuum between wild and domestic."

Usually, domesticated plants are kept mostly separate from their wild progenitors, but chilies appear to have continually been interbred with wild varieties until quite recently. Some wild varieties are still consumed today, like the chiltepin in the southwestern U.S., and many more varieties are curated by native peoples in Mexico. It's a messy story, but that may not be a bad thing.

Journal Reference: K.L. Chiou, A. Lira-Noriega, E. Gallaga, C.A. Hastorf, A. Aguilar-Meléndez, Interdisciplinary insights into the cultural and chronological context of chili pepper (Capsicum annuum var. annuum L.) domestication in Mexico, Proc. Natl. Acad. Sci. 121 (47) e2413764121, https://doi.org/10.1073/pnas.2413764121 (2024).


Original Submission

posted by hubie on Saturday November 23, @05:18AM   Printer-friendly

Arthur T Knackerbracket has processed the following story:

India's Competition Commission has slapped Meta with a five-year ban on using info collected from WhatsApp to help with advertising on its other platforms.

The regulator (CCI) on Monday explained its decision, referring back to a February 2021 change to WhatsApp's privacy policies that the Commission argued "expanded scope of data collection as well as mandatory data sharing with Meta companies."

Failure to agree to the changes would have meant quitting WhatsApp.

Indian citizens were of course free to do so. But the Commission wondered if it was practical to quit WhatsApp, and therefore whether Meta enjoys market dominance.

The CCI decided Meta leads two markets in India: over-the-top messaging apps, and online display advertising.

[...] The CCI has ordered several remedies.

One is a fine of ₹213.14 crore – about $25 million and therefore back-of-the-sofa money for Meta.
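The crore-to-dollar conversion is easy to sanity-check: a crore is 10 million rupees, so the fine works out to roughly $25 million at late-2024 exchange rates. A minimal sketch (the ₹84-per-dollar rate is an assumption, not a figure from the article):

```python
# Back-of-the-envelope check on the CCI fine.
# 1 crore = 10,000,000 rupees.
CRORE = 10_000_000

fine_inr = 213.14 * CRORE      # ₹2.1314 billion
inr_per_usd = 84.0             # assumed approximate late-2024 exchange rate

fine_usd = fine_inr / inr_per_usd
print(f"${fine_usd / 1e6:.1f} million")  # roughly $25 million
```

At that assumed rate the result is about $25.4 million, in line with the article's rounded figure.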

A more substantial requirement means Meta can no longer make acceptance of its privacy policy a condition for users to use WhatsApp in India. The Register imagines that order creates the potential for a substantial population of users to opt out of some data collection.

Meta will have time to learn to live with that sort of thing. Another sanction is a five-year ban on sharing user data collected on WhatsApp with other parts of Meta for advertising purposes.

Another element of the order requires future versions of WhatsApp legalese to "include a detailed explanation of the user data shared with other Meta Companies or Meta Company Products" that specifies "the purpose of data sharing, linking each type of data to its corresponding purpose."


Original Submission

posted by hubie on Saturday November 23, @12:30AM   Printer-friendly
from the I-spy-with-my-mind's-eye dept.

Arthur T Knackerbracket has processed the following story:

Most people can “see” vivid imagery in their minds. They can imagine a chirping bird from hearing the sounds of one, for example. But people with aphantasia can’t do this. A new study explores how their brains work.

Growing up, Roberto S. Luciani had hints that his brain worked differently from most people's. He didn't relate when people complained about a movie character looking different than what they'd pictured from the book, for instance.

[...] That’s because Luciani has a condition called aphantasia — an inability to picture objects, people and scenes in his mind. When he was growing up, the term didn’t even exist. But now, Luciani, a cognitive scientist at the University of Glasgow in Scotland, and other scientists are getting a clearer picture of how some brains work, including those with a blind mind’s eye.

In a recent study, Luciani and colleagues explored the connections between the senses, in this case, hearing and seeing. In most of our brains, these two senses collaborate. Auditory information influences activity in brain areas that handle vision. But in people with aphantasia, this connection isn’t as strong, researchers report November 4 in Current Biology.

[...] The results highlight the range of brain organizations, says cognitive neuroscientist Lars Muckli, also of the University of Glasgow. “Imagine the brain has an interconnectedness that comes in different strengths,” he says. At one end of the spectrum are people with synesthesia, for whom sounds and sights are tightly mingled (SN: 11/22/11). “In the midrange, you experience the mind’s eye — knowing something is not real, but sounds can trigger some images in your mind. And then you have aphantasia,” Muckli says. “Sounds don’t trigger any visual experience, not even a faint one.”

The results help explain how brains of people with and without aphantasia differ, and they also give clues about brains more generally, Muckli says. “The senses of the brain are more interconnected than our textbooks tell us.”

The results also raise philosophical questions about all the different ways people make sense of the world (SN: 6/28/24). Aphantasia “exists in a realm of invisible differences between people that make our lived experiences unique, without us realizing,” Luciani says. “I find it fascinating that there may be other differences lurking in the shadow of us assuming other people experience the world like us.”

Reference: B. M. Montabes de la Cruz et al. Decoding sound content in the early visual cortex of aphantasic participants. Current Biology. Vol. 34, p. 5083. November 4, 2024. doi: 10.1016/j.cub.2024.09.008.


Original Submission