posted by janrinok on Friday February 07, @09:02PM   Printer-friendly
from the now-you-can-be-a-pop-star-or-a-porn-star dept.

Deepfake videos are getting shockingly good:

Researchers from TikTok owner ByteDance have demoed a new AI system, OmniHuman-1, that can generate perhaps the most realistic deepfake videos to date.

Deepfaking AI is a commodity. There's no shortage of apps that can insert someone into a photo, or make a person appear to say something they didn't actually say. But most deepfakes — and video deepfakes in particular — fail to clear the uncanny valley. There's usually some tell or obvious sign that AI was involved somewhere.

Not so with OmniHuman-1 — at least from the cherry-picked samples the ByteDance team released. [Ed Note: The source contains some examples if you wish to enable its access to your computer.]

According to the ByteDance researchers, OmniHuman-1 only needs a single reference image and audio, like speech or vocals, to generate a clip of an arbitrary length. The output video's aspect ratio is adjustable, as is the subject's "body proportion" — i.e. how much of their body is shown in the fake footage.

Trained on 19,000 hours of video content from undisclosed sources, OmniHuman-1 can also edit existing videos — even modifying the movements of a person's limbs. It's truly astonishing how convincing the result can be.

Granted, OmniHuman-1 isn't perfect. The ByteDance team says that "low-quality" reference images won't yield the best videos, and the system seems to struggle with certain poses.

See also: OmniHuman-1: Rethinking the Scaling-Up of One-Stage Conditioned Human Animation Models


Original Submission

posted by janrinok on Friday February 07, @04:14PM   Printer-friendly

As Internet enshittification marches on, here are some of the worst offenders:

Two years ago, a Canadian writer named Cory Doctorow coined the phrase "enshittification" to describe the decay of online platforms. The word immediately set the Internet ablaze, as it captured the growing malaise regarding how almost everything about the web seemed to be getting worse.

"It's my theory explaining how the Internet was colonized by platforms, why all those platforms are degrading so quickly and thoroughly, why it matters, and what we can do about it," Doctorow explained in a follow-up article. "We're all living through a great enshittening, in which the services that matter to us, that we rely on, are turning into giant piles of shit. It's frustrating. It's demoralizing. It's even terrifying."

Doctorow believes there are four basic forces that might constrain companies from getting worse: competition, regulation, self-help, and tech workers. One by one, he says, these constraints have been eroded as large corporations squeeze the Internet and its denizens for dollars.

If you want a real-world, literal example of enshittification, let's look at actual poop. When Diapers.com refused Amazon's acquisition offer, Amazon lit $100 million on fire, selling diapers way below cost for months, until Diapers.com folded. With another competitor tossed aside, Amazon was then free to sell diapers at whatever price it wanted, sourced from wherever it chose.

Anyway, we at Ars have covered a lot of things that have been enshittified. Here are some of the worst examples we've come across. Hopefully, you'll share some of your own experiences in the comments. We might even do a follow-up story based on those.

Smart TVs have come a long way since Samsung released the first model readily available for the masses in 2008. While there have certainly been improvements in areas like image quality, sound capabilities, usability, size, and, critically, price, much of smart TVs' evolution could be viewed as invasive and anti-consumer.

Today, smart TVs are essentially digital billboards that serve as tools for companies—from advertisers to TV OEMs—to extract user data. Corporate interest in understanding what people do with and watch on their TVs and in pushing ads has dramatically worsened the user experience. For example, the remotes for LG's 2025 TVs don't have a dedicated input button but do have multiple ways to access LG webOS apps.

This is all likely to get worse as TV companies target software, tracking, and ad sales as ways to monetize customers after their TV purchases—even at the cost of customer convenience and privacy. When budget brands like Roku are selling TV sets at a loss, you know something's up.

With this approach, TVs miss the opportunity to appeal to customers with more relevant and impressive upgrades. There's also a growing desire among users to disconnect their connected TVs, defeating their original purpose. Suddenly, buying a dumb TV seems smarter than buying a smart one. But smart TVs and the ongoing revenue opportunities they represent have made it extremely hard to find a TV that won't spy on you.

Doctorow writes about so many different aspects of enshittification that it is not possible to cover them all here, and it would be wrong to copy the entire source. However, he discusses Google, PDFs, Apple, TV sports, AI, Windows, etc. I recommend that you read the original source, but you will probably spend much of the time nodding in agreement with his observations and comments.


Original Submission

posted by janrinok on Friday February 07, @11:28AM   Printer-friendly

Russia VPN Crackdown Revelation - VPN Sites Hide Their IP Addresses:

Reports concerning the Russian government's growing intolerance of VPNs often refer to the technology or associated services as "banned" or otherwise outlawed.

While technically inaccurate, amendments to local law effectively place VPN services into two groups. The first group contains the VPN providers officially registered with the authorities. The second group contains the illegal services, whose owners haven't yet agreed to provide the authorities with unfettered access, when that becomes necessary.

Illegal VPN services are unsurprisingly illegal to sell. Under more recent amendments, it's also illegal to promote or encourage illegal VPN use, or to provide tutorials or similar assistance to others. These are crimes punishable under law, but at least for now, Russian authorities seem more likely to block offending websites to prevent Russians from viewing illegal information.

Thanks to the tireless work of digital rights group Roskomsvoboda, blocking orders issued by many government departments, courts, and less easily defined entities that seem to come and go can be accessed much more easily.

A Verstka.Media review of the blocking data published this week found a fivefold increase in persistent site blocking in 2024, when compared to data for 2022.

For offenses related to VPNs, torrent and streaming sites, tax offenses and a myriad of other reasons, in 2024 Russia restricted access to over 523,000 infringing sites/URLs. 106,000 restrictions were lifted in the same year, Verstka's analysis notes.

A closer look at the data reveals that telecoms regulator Roskomnadzor, which oversees most matters concerning online piracy, rogue VPNs, and site blocking in general, is only the second most prolific issuer of blocking instructions in Russia. [...] the Federal Tax Service is way out in front as the most significant contributor to the all-time blocking totals seen on the bottom line.

Determining how many sites have been targeted due to alleged VPN offenses is much less straightforward.

[...] The revelation that those familiar with VPNs also appreciate reverse proxies, isn't an especially big surprise. Or any surprise at all. Russia having a blocklist full of Cloudflare IP addresses is almost normal too.

The difficult part is trying to determine who emerges from this entire process having achieved anything of any value. Maybe there's a technical basis for claiming that Russia successfully exported its VPN problem to the West. There's certainly very little else.


Original Submission

posted by janrinok on Friday February 07, @06:43AM   Printer-friendly

Researchers at the Max Planck Institute for Human Development have identified who is most susceptible to online misinformation and why. Their meta-analysis reveals surprising patterns in how demographic and psychological factors—including age, education, political identity, analytical thinking, and motivated reflection—affect people's ability to assess the accuracy of information. For instance, individuals with higher levels of education are just as likely to fall for misinformation as those with a lower level of education. The work, published in the journal PNAS, provides important information for theory building and designing interventions.

[...] The researchers found no significant impact of education on people's ability to distinguish between true and false information. This contradicts the widespread belief that more educated individuals are likely to be less susceptible to misinformation, especially as higher education teaches us critical thinking. The study also challenges assumptions about age and misinformation. While older adults are often portrayed as more vulnerable to fake news, the analysis found that they were actually better than younger adults at distinguishing between true and false headlines. Older adults were also more skeptical and tended to label headlines as false more often. Paradoxically, however, previous research has consistently shown that older adults engage with and share more misinformation online. The study distinguishes between three age groups: 18-31 years, 32-47 years and 48-88 years.

[...] Political identity also played a key role. The meta-analysis confirmed previous research showing that individuals who identify as Republicans are more likely to fall for misinformation than those who identify as Democrats. Republicans were less accurate at assessing the veracity of news and tended to label more headlines as true, whereas Democrats were more skeptical.

Individuals with higher analytical thinking skills—that is, who are better at logically evaluating information, identifying patterns, and systematically solving problems—performed better overall and were more skeptical (tending to classify news as false). People were more likely to believe that news that aligned with their political identity was true and to disregard news that was not aligned with their political identity—a phenomenon known as partisan bias.

However, a counterintuitive finding was that individuals with higher analytical thinking were actually more susceptible to partisan bias. This tendency is known as motivated reflection, which is a cognitive process where individuals' analytical reasoning works against them to protect their pre-existing beliefs, values, or partisan affiliations.

The strongest effect in the meta-analysis was the influence of familiarity. When participants reported having already seen a news headline, they were more likely to believe it was true. This finding underscores the danger of repeated exposure to misinformation, particularly on social media.

[...] The results come at a critical time. "The World Economic Forum's Global Risks Report 2024 identifies misinformation as one of the greatest risks to the world in the next two years. With the rise of right-wing populism, the study's results are highly relevant and could influence debates on how to best combat misinformation in different demographic groups", says co-author Ralf Kurvers, Senior Research Scientist at the Center for Adaptive Rationality of the Max Planck Institute for Human Development.

[Source]: Max Planck Institute for Human Development, Berlin

[Journal Ref]: Proceedings of the National Academy of Sciences

[Covered By]: PHYS.ORG


Original Submission

posted by janrinok on Friday February 07, @01:55AM   Printer-friendly

https://techcrunch.com/2025/02/04/google-removes-pledge-to-not-use-ai-for-weapons-from-website/

Google removed a pledge to not build AI for weapons or surveillance from its website this week. The change was first spotted by Bloomberg. The company appears to have updated its public AI principles page, erasing a section titled "applications we will not pursue," which was still included as recently as last week.

Asked for comment, the company pointed TechCrunch to a new blog post on "responsible AI." It notes, in part, "we believe that companies, governments, and organizations sharing these values should work together to create AI that protects people, promotes global growth, and supports national security."


Original Submission

posted by janrinok on Thursday February 06, @09:12PM   Printer-friendly

Warmer, More Crowded Cities Bring Out the Rats:

Jonathan Richardson was long bothered by hyperbolic headlines proclaiming the rat problem in so-and-so city was out of control. City dwellers do have cause for concern—the animals bring both disease and distress—but the urban ecologist at the University of Richmond balked at claims saying one city had it worse than another. "There were not a lot of data," he says. Yet having real numbers on rodent infestations is critical for determining whether control measures are working. So, he and colleagues embarked on a global study of how rat populations in major cities have changed over time.

Climate change emerged as a driving factor behind urban rat swarms, the researchers report today in Science Advances. As temperatures rise, they conclude, and people flock to urban areas and convert formerly "green" spaces into neighborhoods and shopping centers, they create a perfect storm for rat populations to explode. And the city that's fared the worst over the past decade? Washington, D.C.

[...] Smart, cooperative, and resilient, rats have coevolved with humans for millennia and have fine-tuned their ability to take advantage of garbage, debris piles, sewers, and small postage stamp–size plots of soil along sidewalks for food and nesting. The animals can transmit disease, spoil food and animal feed supplies—costing the United States $27 billion per year—and cause mental anguish in city dwellers. "Like the proverbial 'canary in the mine' our 'rats in the city' provide an indication of human welfare," Bartal says.

To learn more about this threat, Richardson and colleagues reached out to city governments around the U.S. to collect data on rat populations, as well as average temperature, human population, and property development trends. And because so few places keep or share rat data, they expanded the study to cities outside the U.S. and eventually ended up with 16 where there were inspection, trapping, and rat sighting records across an average of 12 years that had been compiled by these municipalities.

"It is a lot of work to build these databases," says Miriam Maas, who studies animal-borne infectious at the Centre for Infectious Disease Control and was not involved with the work. "[But] when done on 16 cities, it is possible to see trends."

Cities that experienced greater rates of temperature increase and more people moving in were more likely to have bigger rat problems, Richardson and colleagues report. That makes sense, Maas notes, as cold weather slows reproduction and foraging. Moreover, denser populations mean more dumpsters, more restaurants, and more opportunities for rats to eat their fill.

Disappearing green space also seemed to benefit rat populations, which surprised the researchers. [...] However, "Not all green spaces are equally beneficial to rats," Munshi-South says. Big parks usually have less garbage per square meter, for example, and so are less conducive to rat explosions than commercial developments or housing projects.

Of the cities studied, 11 had rat populations that increased over the past 2 decades, but the one whose rat problem has grown the most is Washington, D.C. The U.S. capital had three times greater growth in rat numbers than Boston and 1.5 times that of New York City. Those explosions happened even though in the U.S., municipal governments collectively spend an estimated $500 million annually to keep rats in check. It is unclear exactly why the problem appears to be so bad in Washington, D.C., though one survey suggested the city's residents may simply be more likely to report rat sightings, which may boost its numbers.

[...] "There's no silver lining," Richardson says. "Cities who are truly committed are going to have to dedicate more resources and larger staff" to controlling the problem.


Original Submission

posted by janrinok on Thursday February 06, @04:24PM   Printer-friendly
from the the-real-super-bowl-winnners-are-the-advertisers dept.

Eagles or Chiefs? Who's your pick to win the Super Bowl?

Anyone can make a guess, but can we predict the winner with some skill? At first glance, the Eagles were 14-3 during the regular season, but the Chiefs had a slightly better record at 15-2. Therefore, we should pick the Chiefs, right? As college football commentator Lee Corso would say, not so fast, my friend.

A game like chess has no luck at all. You might say you got lucky if your opponent made a poor decision, but that's really just human error. When I make a move like castling kingside, that move always happens in the exact same way, with no luck involved. In football, however, there's a lot of random chance. A gust of wind might blow a field goal wide right, or a receiver might slip on a slick field and miss an otherwise easy pass. Or the officials might miss a call due to their own human error. Research shows that there's a lot of luck in football, and it doesn't always even out over a 16- or 17-game season. If we want to predict the outcome of future games skillfully, we need a way to distinguish lucky teams from good teams.

The most accurate prediction systems rely heavily on margin of victory instead of a team's won-loss record. If the quarterback throws a pass to a wide open receiver, but the receiver slips on a slick field and doesn't catch the pass, it might prevent the team from scoring a touchdown on that drive. Luck might cost the team a touchdown, but it's a lot less likely for bad luck to cost that team two or three touchdowns. When teams win or lose games by larger margins, they're more insulated from the effects of luck. A team that wins a lot of close games might well be getting lucky, but a team that's blowing out their opponents is probably just a really good team. Strength of schedule also matters. If a team is winning a lot of blowout games but against lesser competition, they're probably not as good as their record or margins of victory might suggest.

Another factor is the pace of play. A team that plays quickly is going to run more plays during a game, and that will also result in more scoring. A good team that plays quickly will probably win by larger margins, but a bad team with a rapid tempo is going to lose by larger margins. Many good prediction systems take this into account, along with the fact that teams tend to perform slightly better at home than on the road.

During the regular season, the Chiefs outscored their opponents by a total of 59 points, but the Eagles had a much larger scoring margin of 160 points. But what about their schedules?

Two of the best rating systems are ESPN's Football Power Index (FPI) and Jeff Sagarin's ratings. FPI shows that the Chiefs played the 20th toughest schedule, compared to the Eagles with the 23rd strongest schedule. Sagarin also has the Chiefs' schedule at #20, but the Eagles at #30. According to Sagarin, the average Chiefs opponent was 0.97 points tougher than the average Eagles opponent. Over the course of the season, this is worth about 16.49 points. If the Eagles played the same schedule as the Chiefs, we would expect the Chiefs to have a scoring margin of 75.49 points. That's a little better, but still not nearly as good as the Eagles.

The advanced metrics generally agree that the Eagles are the better team. If we subtract their FPI ratings, we would expect the Eagles to be favored by 2.1 points. Sagarin's ratings suggest Eagles by 4.08. On paper, the advanced metrics say the Eagles have a small edge over the Chiefs. But those predictions are actually the mean (or very close to it) of a statistical distribution of possible outcomes. FPI favors the Eagles by 2.1, but gives them a 56.1% chance of winning. Although we can predict football games with some skill, this is why we still have to play the games.
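
The arithmetic above is easy to reproduce. Here is a minimal Python sketch of the schedule adjustment described and of turning a predicted point spread into a win probability, assuming final margins are roughly normally distributed around the spread with a standard deviation of about 13.5 points (a common modeling assumption for illustration, not a figure from the article):

    from math import erf, sqrt

    # Regular-season scoring margins (from the article)
    chiefs_margin = 59
    eagles_margin = 160

    # Sagarin's schedule adjustment: the average Chiefs opponent was
    # 0.97 points tougher per game, over a 17-game season.
    schedule_adjustment = 0.97 * 17                   # ~16.49 points
    chiefs_adjusted = chiefs_margin + schedule_adjustment
    print(f"Chiefs schedule-adjusted margin: {chiefs_adjusted:.2f}")  # ~75.49

    def win_probability(spread: float, sigma: float = 13.5) -> float:
        """Chance the favored team wins, given a predicted point spread.

        Assumes final margins are normally distributed around the spread
        with standard deviation sigma (an illustrative assumption).
        """
        return 0.5 * (1 + erf(spread / (sigma * sqrt(2))))

    # FPI favors the Eagles by 2.1 points
    print(f"Eagles win probability: {win_probability(2.1):.1%}")      # ~56%

With that assumed spread of outcomes, a 2.1-point edge corresponds to roughly a 56% chance of winning, which lines up with FPI's 56.1% figure.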


Original Submission

posted by janrinok on Thursday February 06, @11:40AM   Printer-friendly

Google offering 'voluntary exit' for employees working on Pixel, Android:

Last year, the teams responsible for Pixel hardware and Android software were merged into one division, and Google today announced a "voluntary exit program" for employees working in the Platforms & Devices group.

SVP Rick Osterloh sent out a memo to employees this morning about the "voluntary exit program," and the company confirmed to 9to5Google that this is happening.

This program applies to US employees working on Platforms & Devices, which includes Android (Auto, TV, Wear OS, XR), Chrome, ChromeOS, Google Photos, Google One, Pixel, Fitbit, and Nest. Google has many people around the world working on these products, but today's announcement is just for those stateside.

Meanwhile, this is not a company-wide offer that applies to Search, AI, or other groups, though Alphabet's new CFO last October said "driving further efficiencies" was a key priority.

Separately, software and hardware were already two very large organizations, with some overlap. Now that things have settled in recent months, employees have a better idea of their roles. Osterloh said the division has received questions about the possibility of voluntary exits ever since the Pixel-Android merger. Not offering people the option to leave in advance was a complaint about how Google handled past layoffs.

The memo frames this exit program as being beneficial for those who might not be aligned with or passionate about the combined organization's mission, or who are having difficulty with their roles and hybrid working requirements.

In leaving Google, employees will get a severance package, with more details internally coming soon. From what we learned, this program does not coincide with any product roadmap changes.

Before the merger, the Google hardware division last January switched to a functional organization model where there is one team (and leader) for teams like hardware engineering across Pixel, Nest, and Fitbit. At the same time, a few hundred roles were cut. The broader unification in April was designed to "speed up decision-making" internally.


Original Submission

posted by janrinok on Thursday February 06, @06:55AM   Printer-friendly

https://physics.aps.org/articles/v18/22

A bowl of steaming hot pasta covered in your favorite sauce and dusted with a healthy dose of parmesan cheese comes high on the list of ultimate comfort foods. But cooking that pasta to perfection can be more difficult than seemingly simple recipes imply. Now two separate teams of researchers have explored two different aspects of executing a flawless dish. In one study, Phillip Toultchinski and Thomas Vilgis of the Max Planck Institute for Polymer Research, Germany, studied whether perfectly al dente spaghetti could be prepared in a more energy-efficient way [1]. In a second study, Matteo Ciarchi and Daniel Busiello of the Max Planck Institute for the Physics of Complex Systems, Germany, Giacomo Bartolucci of the University of Barcelona, Spain, and colleagues developed a recipe for making perfect cacio e pepe, a three-ingredient cheese sauce that is surprisingly easy to mess up [2]. "It is very difficult to make this sauce," says Busiello. "You are almost always doomed to fail."

The study by Toultchinski and Vilgis was inspired by a brouhaha over a 2022 Facebook post by physics Nobel laureate Giorgio Parisi. In that post, Parisi suggested that chefs could reduce the energy needed for cooking pasta using a "heat-off-lid-on" method. In this method, after the pasta is added to boiling water, the heat source is turned off and the pot is covered with a lid. The pasta is left to cook in slowly cooling water. Studies indicate that a significant fraction of the cooking energy could be saved this way. But chefs questioned whether this method could achieve al dente pasta—pasta that is soft on the outside and crunchy at its core.

To put this question to a scientific test, Toultchinski and Vilgis studied three methods of cooking pasta. The first method is the most familiar one: Add pasta to boiling water and keep that water roiling until the pasta is perfectly cooked. The second method, which the team terms presoaking, involves soaking the pasta in cold water for one and a half hours prior to cooking. The soaked pasta is then cooked in boiling water. The third method was Parisi's heat-off-lid-on method. For all experiments, the team used the same pot and the same amounts of dry durum-wheat spaghetti (150 g) and water (1.5 l). For the heat-off-lid-on method, the lid was a sheet of aluminum foil.

Toultchinski and Vilgis show that the heat-off-lid-on method used the least energy, while the traditional method used the most. Roughly 60% of the energy needs for cooking pasta comes from keeping the water roiling while the pasta cooks, so eliminating this step leads to significant energy saving, Vilgis says. Presoaking also considerably reduced the energy needs, as it lowered the cooking time from 13 minutes to 3 minutes. "Presoaked pasta cooks very fast," Vilgis says. But do all three methods achieve perfect al dente pasta?
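
As for the energy numbers, a rough back-of-envelope check on the 60% figure is sketched below. The specific heat of water is a known constant, but the burner power assumed for holding the boil is an illustrative guess, not a value from the study:

    # Back-of-envelope energy estimate for the traditional method
    # (150 g spaghetti in 1.5 l of water, per the article).
    WATER_MASS_KG = 1.5              # as used in the experiments
    SPECIFIC_HEAT_J_PER_KG_K = 4186  # specific heat of water
    TEMP_RISE_K = 80                 # ~20 degC tap water up to 100 degC (assumed)

    heatup_j = WATER_MASS_KG * SPECIFIC_HEAT_J_PER_KG_K * TEMP_RISE_K  # ~0.50 MJ

    # Assumed net power needed to keep the pot roiling (covers evaporation
    # and heat losses; depends heavily on pot, lid, and burner).
    BOIL_HOLD_POWER_W = 1000         # illustrative assumption
    COOK_TIME_S = 13 * 60            # traditional cooking time from the article

    boiling_j = BOIL_HOLD_POWER_W * COOK_TIME_S                        # ~0.78 MJ

    total_j = heatup_j + boiling_j
    print(f"Bring to a boil: {heatup_j / 1e6:.2f} MJ")
    print(f"Hold the boil:   {boiling_j / 1e6:.2f} MJ "
          f"({boiling_j / total_j:.0%} of the total)")

With those assumed numbers, holding the boil accounts for roughly 60% of the total, consistent with the fraction Vilgis cites; skipping that phase (heat-off-lid-on) or shortening it (presoaking) is where the savings come from.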

You will have to read the article to find out the results because it is just too big to summarise here...


Original Submission

posted by janrinok on Thursday February 06, @02:13AM   Printer-friendly

Arthur T Knackerbracket has processed the following story:

Macular degeneration is one of the leading causes of vision loss, affecting millions worldwide, especially those over 60. However, Soliddd Corp may have an answer to the problem: a pair of smart glasses that treat macular degeneration.

These glasses promise to restore vision for individuals grappling with this debilitating condition, helping them regain independence and improve their quality of life. The glasses mimic the mechanics of insect eyes, using multiple perspectives to create one sharp image.

Tiny cameras on each temple capture images of the environment and send them to displays inside the lenses. These displays house 64 micro-lenses, each projecting a miniature image onto the healthy peripheral part of the retina. So, these smart glasses treat macular degeneration by essentially removing the blind spots that the disease causes.

It’s an interesting solution to the problem. While it doesn’t actively “cure” the degeneration, it does help revive the patient’s vision in different ways. The company showcased the glasses at CES 2025, where it presented tests involving 31 individuals experiencing macular degeneration.

Twenty-five participants read faster with the glasses, and seven who couldn’t read previously were able to read again. The company plans to test both eyes simultaneously in future trials. Overall, the results here are very promising, and it does look like these smart glasses could actually treat macular degeneration.

However, macular degeneration is a very complex disease that comes in two forms: dry and wet. The “dry” form, which accounts for up to 90 percent of cases, involves the gradual deterioration of light-sensitive cells in the macula—the part of the retina responsible for central vision, color perception, and fine details.


Original Submission

posted by janrinok on Wednesday February 05, @09:24PM   Printer-friendly

Giant Study Questions Link Between Autism and Maternal Health:

In scientists' search to understand the causes of autism, a spotlight has fallen on maternal health during pregnancy. Based partly on association studies, researchers have proposed that conditions including obesity and depression during pregnancy could lead to autism in a child by affecting fetal neurodevelopment.

But a study of more than 1 million Danish children and their families, published today in Nature Medicine, pushes back against this view. Researchers analyzed more than 200 health conditions that occurred in these children's mothers before or during pregnancy. They conclude that many of the supposed links to a child's autism diagnosis may not be causal, and instead reflect inherited genetic variants or environmental factors shared within families.

[...] Previous research has linked conditions such as maternal obesity, psychiatric disorders, and pregnancy or birth complications to an increased likelihood of autism diagnoses in children. Such findings can lead some pregnant people to feel that "if they get this or that condition, their [child's] chance of autism may increase," says Magdalena Janecka, an epidemiologist at New York University's Grossman School of Medicine and a co-author on the new paper.

Several recent studies have highlighted flaws in this reasoning, noting that observed links may actually reflect genetic predispositions to autism passed from parents to children, or shared environmental factors, such as household exposure to pollutants, that are also associated with the condition. For example, a 2022 study of thousands of Norwegian parents found that people carrying genetic variants linked to neurodevelopmental conditions including autism were also more likely to experience pregnancy-related health issues associated with those same neurodevelopmental conditions in children. This suggests that inheritance of the genetic variants, rather than the maternal health problems themselves, partly explains the increased chance, the study authors note.

In the new research, Janecka and her colleagues set out to systematically test for this so-called familial confounding in an even larger group. They used records in the Danish national health registry from roughly 1 million children born between 1998 and 2015, more than 18,000 of whom had received diagnoses of autism spectrum disorder. The team then looked at health conditions documented in the children's mothers. Thirty of these conditions, including depression and various pregnancy complications, showed a link to autism diagnosis even after the team had run statistical analyses to try to account for socioeconomic, demographic, and other known confounding factors.

Next came the hunt for familial confounding. First, Janecka's team analyzed the incidence of autism in families with more than one child where the mother had a health condition during one pregnancy, but not another. For most of the 30 conditions, they found the likelihood of an autism diagnosis among siblings was relatively stable, regardless of which pregnancy had been affected, Janecka says. What's more, health issues documented in the children's fathers were associated with similar likelihoods of an autism diagnosis as the maternal conditions. (The team couldn't run this paternal analysis for pregnancy-specific complications, such as preeclampsia and gestational diabetes.)

Those findings indicate familial confounding is widespread in the observed associations, weakening the argument that maternal health conditions directly cause autism by affecting development in utero, Janecka says. She adds that the evidence for confounding was stronger for some conditions, such as obesity and mental health disorders such as depression, than others, such as gestational diabetes.

Lee and others caution against interpreting the paper as meaning that autism isn't affected at all by maternal health, or that it's entirely genetically determined. The analyses used in the paper are a "blunt instrument" that can't get at mechanisms underlying autism, he adds.

Some scientists also criticize the paper's claim that "most of the observational associations are attributable to family-level factors." The study didn't use the specialized statistical methods needed to make claims about causation, says Peter Tennant, an epidemiologist at the University of Leeds, making it "difficult to draw definitive conclusions." He adds that additional confounding factors, such as how often a pregnant person used health care—something that itself is likely linked to that person's health—might have skewed the study results.

Janecka acknowledges that potential confounder, though she thinks it's unlikely to have had a strong impact and stands by the team's conclusions. For now, she says, her team is working to combine its findings with genetic data from Danish families to test whether shared genes can explain a lot of the associations between maternal health and child autism. Either way, she says, parents and families "deserve to know the true role of these [health] conditions in their child."

Journal Reference:
Khachadourian, Vahe, Arildskov, Elias Speleman, Grove, Jakob, et al. Familial confounding in the associations between maternal health and autism [open], Nature Medicine (DOI: 10.1038/s41591-024-03479-5)


Original Submission

posted by janrinok on Wednesday February 05, @04:43PM   Printer-friendly

Everyone knows your location: tracking myself down through in-app ads:

Recently I read about a massive geolocation data leak from Gravy Analytics, which exposed more than 2000 apps, in both the App Store and Google Play, that secretly collect geolocation data without user consent. Oftentimes, even without developers' knowledge.

I looked into the list (link here) and found at least 3 apps I have installed on my iPhone. Take a look for yourself!
This gave me the idea of tracking myself down externally, e.g. by buying my geolocation data leaked by some application.

TL;DR

After more than a couple dozen hours of trying, here are the main takeaways:

  1. I found a couple of requests sent by my phone with my location + 5 requests that leak my IP address, which can be turned into geolocation using reverse DNS.
  2. Learned a lot about RTB (real-time bidding) auctions and the OpenRTB protocol, and was shocked by the amount and types of data sent with the bids to ad exchanges (see the sketch after this list).
  3. Gave up on the idea of buying my location data from a data broker or a tracking service, because I don't have a big enough company to get a trial, or $10-50k to buy a huge database with the data of millions of people + me.
    Well, maybe I do, but such an expense seems a bit irrational.
    Turns out that EU-based people's data is almost the most expensive.
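
For readers unfamiliar with OpenRTB, here is a simplified, illustrative sketch of the kind of bid request an ad SDK sends to an exchange. The field names follow the public OpenRTB 2.x specification, but the values, and which fields are present, are made up for illustration; they are not taken from the author's captures:

    # Simplified, illustrative OpenRTB 2.x bid request (all values are invented).
    bid_request = {
        "id": "example-auction-id",
        "app": {
            "bundle": "com.example.flashlight",    # hypothetical app identifier
            "publisher": {"id": "example-publisher"},
        },
        "device": {
            "ip": "203.0.113.42",                  # documentation-range IP address
            "ifa": "00000000-0000-0000-0000-000000000000",  # advertising ID
            "ua": "Mozilla/5.0 (iPhone; ...)",     # user agent string
            "os": "iOS",
            "geo": {
                "lat": 48.2082,                    # precise coordinates, when available
                "lon": 16.3738,
                "type": 1,                         # 1 = derived from GPS/location services
                "country": "AUT",
            },
        },
        "user": {"id": "example-user-id"},
    }

Any exchange, bidder, or intermediary that receives (or simply records) such requests can log the advertising ID, IP address, and coordinates together, which is the kind of data that ends up aggregated and resold by brokers like the one behind the leak.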

But still, I know my location data was collected and I know where to buy it!


Original Submission

posted by hubie on Wednesday February 05, @12:00PM   Printer-friendly

Arthur T Knackerbracket has processed the following story:

Microsoft is getting rid of the VPN offered through Microsoft Defender. As spotted by Windows Latest, the company updated its support pages for privacy protection, its built-in VPN, to notify users that the service will end on February 28. The VPN was bundled with Microsoft Defender, which is available to anyone with a personal or family Microsoft 365 subscription, and it offered private browsing by “routing your internet traffic through Microsoft servers,” up to the monthly data limit of 50GB.

In a statement about the decision posted on the support page, Microsoft said, “Our goal is to ensure you, and your family remain safer online. We routinely evaluate the usage and effectiveness of our features. As such, we are removing the privacy protection feature and will invest in new areas that will better align to customer needs.” Android users might still see the Microsoft Defender VPN profile in their settings after the expiration date, which they’ll need to remove manually if they want it gone. “Action is not required by Windows, iOS, and macOS users,” Microsoft notes.

The company also says this is the only feature getting killed off for now. According to Microsoft, “device protection and identity theft and credit monitoring (US) features will continue.”


Original Submission

posted by hubie on Wednesday February 05, @07:17AM   Printer-friendly

Automakers Tesla and BMW have launched a major lawsuit against the European Commission over tariffs imposed on electric vehicles imported from China:

Tesla and BMW, both of which produce EVs in China, have taken their grievances to the European Union's Court of Justice. The legal battle comes after the European Commission's decision to enact a wide range of tariffs on Chinese-made electric cars, allegedly aimed at curbing competition from Chinese manufacturers.

Tesla's China-made EVs are subject to a 7.8% tariff from the EU, while BMWs face a much higher levy of 20.7%. Some Chinese-manufactured EVs have been hit with tariffs as high as 45%, The Wall Street Journal reported.

Other major Chinese automakers, including Geely, SAIC, and BYD, have also been vocal about their stance against the tariffs, which they argue have negatively impacted the free market. Furthermore, the new tariffs are stacked on top of the EU's standard 10% duty on all imported cars, adding to the pressure on certain international carmakers.

In a statement to the WSJ, a BMW spokesperson focused on how the duties were not improving the competition between carmakers but were, instead, harming the business models of many international companies in the market.

[...] The outcome of the lawsuits filed by Tesla and BMW could set a major precedent that could shape the EU's ability to impose tariffs on a wider variety of Chinese or foreign products.


Original Submission

posted by hubie on Wednesday February 05, @02:35AM   Printer-friendly

Arthur T Knackerbracket has processed the following story:

The Contec CMS8000, also sold as the Epsimed MN-120, contains a trio of vulnerabilities (CVE-2024-12248, CVSS 9.3; CVE-2025-0626, CVSS 7.5; and CVE-2025-0683, CVSS 5.9) that the Cybersecurity and Infrastructure Security Agency (CISA) last week warned could allow an attacker to remotely execute code, crash the device and, most alarmingly, exfiltrate information about patients.

"Once the patient monitor is connected to the internet, it begins gathering patient data, including personally identifiable information and protected health information, and exfiltrating the data outside of the health care delivery environment," the FDA said of the hardcoded hole.

The FDA recommends that anyone with a CMS8000 unplug it from the internet and disable its Wi-Fi immediately, and stop using it to remotely monitor patients.

While neither the FDA nor CISA believe there have been any cybersecurity incidents related to the devices, it's possible any left online could be compromised, and used by an attacker to move laterally to further compromise a connected network.

To make matters worse, CISA said in a factsheet about the vulnerability that it doesn't believe the backdoor is related to remote software updates; this appears to be all about harvesting data.

"The [back door] provides neither an integrity-checking mechanism nor version tracking of updates," CISA said. "When the function is executed, files on the device are forcibly overwritten, preventing the end customer—such as a hospital—from maintaining awareness of what software is running on the device."

In other words, not only does it exfiltrate data, but it also actively hides its presence from hospitals and their infosec teams.


Original Submission