
posted by mrpg on Monday July 21, @08:19PM   Printer-friendly
from the not-enough dept.

Phys.org reports on how weird space weather seems to have influenced human behavior on Earth 41,000 years ago:

[...] This near-collapse is known as the Laschamps Excursion, a brief but extreme geomagnetic event named for the volcanic fields in France where it was first identified. At the time of the Laschamps Excursion, near the end of the Pleistocene epoch, Earth's magnetic poles didn't reverse as they do every few hundred thousand years. Instead, they wandered, erratically and rapidly, over thousands of miles. At the same time, the strength of the magnetic field dropped to less than 10% of its modern day intensity.

The magnetosphere normally deflects much of the solar wind and harmful ultraviolet radiation that would otherwise reach Earth's surface.

The skies 41,000 years ago may have been both spectacular and threatening. When we realized this, we two geophysicists wanted to know whether this could have affected people living at the time.

[...] In response, people may have adopted practical measures: spending more time in caves, producing tailored clothing for better coverage, or applying mineral pigment "sunscreen" made of ochre to their skin.

At this time, both Neanderthals and members of our species, Homo sapiens, were living in Europe, though their geographic distributions likely overlapped only in certain regions. The archaeological record suggests that different populations exhibited distinct approaches to environmental challenges, with some groups perhaps more reliant on shelter or material culture for protection.

Importantly, we're not suggesting that space weather alone caused an increase in these behaviors or, certainly, that the Laschamps caused Neanderthals to go extinct, which is one misinterpretation of our research. But it could have been a contributing factor—an invisible but powerful force that influenced innovation and adaptability.


Original Submission

posted by jelizondo on Monday July 21, @03:39PM   Printer-friendly

A CarFax for Used PCs

The United Nations' Global E-waste Monitor estimates that the world generates over 60 million tonnes of e-waste annually. Furthermore, this number is rising five times as fast as e-waste recycling. Much of this waste comes from prematurely discarded electronic devices.

Many enterprises follow a standard three-year replacement cycle, assuming older computers are inefficient. However, many of these devices are still functional and could perform well with minor upgrades or maintenance. The issue is that no one knows a particular machine's weak points or what maintenance it needs, and running the diagnostics to find out would be too costly and time-consuming. It's easier to just buy brand-new laptops.

When buying a used car, dealerships and individual buyers can access each car's particular CarFax report, detailing the vehicle's usage and maintenance history. Armed with this information, dealerships can perform the necessary fixes or upgrades before reselling the car. And individuals can decide whether to trust that vehicle's performance. We at HP realized that, to prevent unnecessary e-waste, we need to collect and make available usage and maintenance data for each laptop, like a CarFax for used PCs.

There is a particular challenge to collecting usage data for a PC, however: we need to protect the user's privacy and security. So we set out to design a data-collection protocol for PCs that keeps the collected data secure.

Luckily, the sensors that can collect the necessary data are already installed in each PC. There are thermal sensors that monitor CPU temperature, power-consumption monitors that track energy efficiency, storage health indicators that assess solid state drive (SSD) wear levels, performance counters that measure system utilization, fan-rotation-speed sensors that detect cooling efficiency, and more. The key is to collect and store all that data in a secure yet useful way.

We decided that the best way to do this is to integrate the life-cycle records into the firmware layer. By embedding telemetry capabilities directly within the firmware, we ensure that device health and usage data is captured the moment it is generated. This data is stored securely on HP SSDs, leveraging hardware-based security measures to protect against unauthorized access or manipulation.

The secure telemetry protocol we've developed at HP works as follows. We gather the critical hardware and sensor data and store it in a designated area of the SSD. This area is write-locked, meaning only authorized firmware components can write to it, preventing accidental modification or tampering. That authorized firmware component we use is the Endpoint Security Controller, a dedicated piece of hardware embedded in business-class HP PCs. It plays a critical role in strengthening platform-level security and works independently from the main CPU to provide foundational protection.

The endpoint security controller establishes a secure session by retaining the secret key within the controller itself. This mechanism enables read data protection on the SSD—where telemetry and sensitive data are stored—by preventing unauthorized access, even if the operating system is reinstalled or the system environment is otherwise altered.

Then, the collected data is recorded in a time-stamped file, stored within a dedicated telemetry log on the SSD. Storing these records on the SSD has the benefit of ensuring the data is persistent even if the operating system is reinstalled or some other drastic change in software environment occurs.

The telemetry log employs a cyclic buffer design, automatically overwriting older entries when the log reaches full capacity. Then, the telemetry log can be accessed by authorized applications at the operating system level.
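To make the scheme above concrete, here is a minimal sketch of a cyclic, tamper-evident telemetry log in Python. It is our illustration, not HP's firmware: an in-memory deque stands in for the write-locked SSD region, and an HMAC keyed with a secret held by the security controller stands in for the hardware protections described above.

import hmac
import hashlib
import json
import time
from collections import deque

class TelemetryLog:
    """Minimal sketch of a cyclic, append-only telemetry log.

    A real implementation would live in firmware and write to a
    write-locked SSD region; here a bounded deque stands in for that
    region, and an HMAC (keyed with a secret held by the security
    controller) stands in for hardware tamper protection.
    """

    def __init__(self, capacity: int, secret_key: bytes):
        self._entries = deque(maxlen=capacity)  # oldest entries overwritten when full
        self._key = secret_key

    def append(self, sensor_readings: dict) -> None:
        record = {
            "timestamp": time.time(),
            "readings": sensor_readings,
        }
        payload = json.dumps(record, sort_keys=True).encode()
        # Tag each record so later readers can detect tampering.
        record["mac"] = hmac.new(self._key, payload, hashlib.sha256).hexdigest()
        self._entries.append(record)

    def verify(self) -> bool:
        """Check every stored record against its MAC."""
        for record in self._entries:
            payload = json.dumps(
                {k: v for k, v in record.items() if k != "mac"}, sort_keys=True
            ).encode()
            expected = hmac.new(self._key, payload, hashlib.sha256).hexdigest()
            if not hmac.compare_digest(expected, record["mac"]):
                return False
        return True

# Example: log a sensor snapshot and confirm the log is intact.
log = TelemetryLog(capacity=1000, secret_key=b"controller-held-secret")
log.append({"cpu_temp_c": 62.5, "ssd_wear_pct": 3.1, "fan_rpm": 2400})
assert log.verify()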

The telemetry log serves as the foundation for a comprehensive device history report. Much like a CarFax report for used cars, this report, which we call PCFax, will provide both current users and potential buyers with crucial information.

The PCFax report aggregates data from multiple sources beyond just the on-device telemetry logs. It combines the secure firmware-level usage data with information from HP's factory and supply-chain records, digital-services platforms, customer-support service records, diagnostic logs, and more. Additionally, the system can integrate data from external sources including partner sales and service records, refurbishment partner databases, third-party component manufacturers like Intel, and other original equipment manufacturers. This multisource approach creates a complete picture of the device's entire life cycle, from manufacturing through all subsequent ownership and service events.
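As a rough picture of the aggregation step, here is a hypothetical PCFax-style report object. HP has not published a schema, so every field name below is invented for illustration.

from dataclasses import dataclass, field

@dataclass
class PCFaxReport:
    serial_number: str
    factory_record: dict            # build config, ship date, components
    telemetry_summary: dict         # aggregated from the firmware log
    service_events: list = field(default_factory=list)    # repairs, upgrades
    external_records: list = field(default_factory=list)  # partners, OEMs

    def add_service_event(self, date: str, description: str) -> None:
        self.service_events.append({"date": date, "description": description})

report = PCFaxReport(
    serial_number="5CD1234XYZ",
    factory_record={"ship_date": "2022-03-01", "cpu": "Core i7-1265U"},
    telemetry_summary={"ssd_wear_pct": 12.4, "avg_cpu_temp_c": 58.0},
)
report.add_service_event("2024-06-10", "Battery replaced under warranty")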

For IT teams within organizations, we hope the PCFax will bring simplicity and give opportunities for optimization. Having access to fine-grained usage and health information for each device in their fleet can help IT managers decide which devices are sent to which users, as well as when maintenance is scheduled. This data can also help device managers decide which specific devices to replace rather than issuing new computers automatically, enhancing sustainability. And this can help with security: With real-time monitoring and firmware-level protection, IT teams can mitigate risks and respond swiftly to emerging threats. All of this can facilitate more efficient use of PC resources, cutting down on unnecessary waste.

We also hope that, much as the CarFax gives people confidence in buying used cars, the PCFax can encourage resale of used PCs. For enterprises and consumers purchasing second-life PCs, it provides detailed visibility into the complete service and support history of each system, including any repairs, upgrades, or performance issues encountered during its initial deployment. By making this comprehensive device history readily available, PCFax enables more PCs to find productive second lives rather than being prematurely discarded, directly addressing the e-waste challenge while providing economic benefits to both sellers and buyers in the secondary PC market.

While HP's solutions represent a significant step forward, challenges remain. Standardizing telemetry frameworks across diverse ecosystems is critical for broader adoption. Additionally, educating organizations about the benefits of life-cycle records will be essential to driving uptake.

We are also working on integrating AI into our dashboards. We hope to use AI models to analyze historical telemetry data and predict failures before they happen, such as detecting increasing SSD write cycles to forecast impending failure and alert IT teams for proactive replacement, or predicting battery degradation and automatically generating a service ticket to ensure a replacement battery is ready before failure, minimizing downtime.

We plan to start rolling out these features at the beginning of 2026.


Original Submission

posted by jelizondo on Monday July 21, @10:55AM   Printer-friendly
from the resistance-is-futile-you-will-be-assimilated dept.

upstart writes:

Delta Air Lines is using AI to set the maximum price you're willing to pay:

Delta's president says the quiet part out loud.

Delta Air Lines is leaning into dynamic ticket pricing that uses artificial intelligence to individually determine the highest fee you'd willingly pay for flights, according to comments Fortune spotted in the company's latest earnings call. Following a limited test of the technology last year, Delta is planning to shift away from static ticket prices entirely after seeing "amazingly favorable" results.

"We will have a price that's available on that flight, on that time, to you, the individual," Delta president Glen Hauenstein told investors in November, having started to test the technology on one percent of its ticket prices. Delta currently uses AI to influence three percent of its ticket prices, according to last week's earnings call, and is aiming to increase that to 20 percent by the end of this year. "We're in a heavy testing phase," said Hauenstein. "We like what we see. We like it a lot, and we're continuing to roll it out."

While personalized pricing isn't unique to Delta, the airline has been particularly candid about embracing it. During that November call, Hauenstein said the AI ticketing system is "a full reengineering of how we price and how we will be pricing in the future," and described the rollout as "a multiyear, multi-step process." Hauenstein acknowledged that Delta was excited about the initial revenue results it saw in testing, but noted the shift to AI-determined pricing could "be very dangerous, if it's not controlled and it's not done correctly."

Delta's personalized AI pricing tech is provided by travel firm Fetcherr, which also partners with Virgin Atlantic, Azul, WestJet, and VivaAerobus. In Delta's case, the AI will act as a "super analyst" that operates 24/7 to determine custom ticket prices that should be offered to individual customers in real-time, per specific flights and times.

Airlines have varied their ticket prices for customers on the same routes for many years, depending on a range of factors, including how far in advance the booking is made, what website or service it's being booked with, and even the web browser the customer is using. Delta is no exception, but AI pricing looks set to supercharge the approach.
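For illustration only, here is a toy, rule-based fare function over the kinds of signals listed above. This is not Delta's or Fetcherr's model, and every weight below is invented.

def quoted_fare(base_fare: float, days_until_departure: int,
                booked_via_aggregator: bool, loyalty_tier: int) -> float:
    price = base_fare
    if days_until_departure < 7:
        price *= 1.40            # last-minute bookings pay more
    elif days_until_departure > 60:
        price *= 0.90            # early bookings get a small discount
    if booked_via_aggregator:
        price *= 0.95            # channel-dependent pricing
    price *= 1.0 - 0.02 * loyalty_tier   # hypothetical loyalty adjustment
    return round(price, 2)

print(quoted_fare(400.0, days_until_departure=3,
                  booked_via_aggregator=False, loyalty_tier=1))

AI-driven personalization replaces hand-tuned rules like these with a model's estimate of what each individual will tolerate, which is exactly what has privacy advocates worried.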

Delta has taken heat for charging customers different prices for flights, having rolled back in May a decision to price tickets higher for solo travelers than for groups. It's not entirely clear how invasive Delta's AI ticketing will be when it analyzes customers to figure out prices, but Fortune notes that it has privacy advocates concerned.

"They are trying to see into people's heads to see how much they're willing to pay," Justin Kloczko of Consumer Watchdog told the publication. "They are basically hacking our brains." Arizona Senator Ruben Gallego described it as "predatory pricing" that's designed to "squeeze you for every penny."


Original Submission

posted by jelizondo on Monday July 21, @06:09AM   Printer-friendly

upstart writes:

For Algorithms, a Little Memory Outweighs a Lot of Time:

One of the most important classes goes by the humble name "P." Roughly speaking, it encompasses all problems that can be solved in a reasonable amount of time. An analogous complexity class for space is dubbed "PSPACE."

The relationship between these two classes is one of the central questions of complexity theory. Every problem in P is also in PSPACE, because fast algorithms just don't have enough time to fill up much space in a computer's memory. If the reverse statement were also true, the two classes would be equivalent: Space and time would have comparable computational power. But complexity theorists suspect that PSPACE is a much larger class, containing many problems that aren't in P. In other words, they believe that space is a far more powerful computational resource than time. This belief stems from the fact that algorithms can use the same small chunk of memory over and over, while time isn't as forgiving — once it passes, you can't get it back.
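In standard complexity-theory notation (our addition, not the article's), the containment argument above is simply that a machine halting within t(n) steps can write to at most t(n) memory cells:

\[
\mathrm{DTIME}\bigl(t(n)\bigr) \;\subseteq\; \mathrm{DSPACE}\bigl(t(n)\bigr)
\quad\text{for any time bound } t(n).
\]

Taking the union over all polynomial bounds t(n) = n^k gives P ⊆ PSPACE.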

"The intuition is just so simple," Williams said. "You can reuse space, but you can't reuse time."

But intuition isn't good enough for complexity theorists: They want rigorous proof. To prove that PSPACE is larger than P, researchers would have to show that for some problems in PSPACE, fast algorithms are categorically impossible. Where would they even start?

Those definitions emerged from the work of Juris Hartmanis, a pioneering computer scientist who had experience making do with limited resources. He was born in 1928 into a prominent Latvian family, but his childhood was disrupted by the outbreak of World War II. Occupying Soviet forces arrested and executed his father, and after the war Hartmanis finished high school in a refugee camp. He went on to university, where he excelled even though he couldn't afford textbooks.

In 1960, while working at the General Electric research laboratory in Schenectady, New York, Hartmanis met Richard Stearns, a graduate student doing a summer internship. In a pair of groundbreaking papers they established precise mathematical definitions for time and space. These definitions gave researchers the language they needed to compare the two resources and sort problems into complexity classes.

As it happened, they started at Cornell University, where Hartmanis moved in 1965 to head the newly established computer science department. Under his leadership it quickly became a center of research in complexity theory, and in the early 1970s, a pair of researchers there, John Hopcroft and Wolfgang Paul, set out to establish a precise link between time and space.

Hopcroft and Paul knew that to resolve the P versus PSPACE problem, they'd have to prove that you can't do certain computations in a limited amount of time. But it's hard to prove a negative. Instead, they decided to flip the problem on its head and explore what you can do with limited space. They hoped to prove that algorithms given a certain space budget can solve all the same problems as algorithms with a slightly larger time budget. That would indicate that space is at least slightly more powerful than time — a small but necessary step toward showing that PSPACE is larger than P.

With that goal in mind, they turned to a method that complexity theorists call simulation, which involves transforming existing algorithms into new ones that solve the same problems, but with different amounts of space and time. To understand the basic idea, imagine you're given a fast algorithm for alphabetizing your bookshelf, but it requires you to lay out your books in dozens of small piles. You might prefer an approach that takes up less space in your apartment, even if it takes longer. A simulation is a mathematical procedure you could use to get a more suitable algorithm: Feed it the original, and it'll give you a new algorithm that saves space at the expense of time.

Algorithm designers have long studied these space-time trade-offs for specific tasks like sorting. But to establish a general relationship between time and space, Hopcroft and Paul needed something more comprehensive: a space-saving simulation procedure that works for every algorithm, no matter what problem it solves. They expected this generality to come at a cost. A universal simulation can't exploit the details of any specific problem, so it probably won't save as much space as a specialized simulation. But when Hopcroft and Paul started their work, there were no known universal methods for saving space at all. Even saving a small amount would be progress.

The breakthrough came in 1975, after Hopcroft and Paul teamed up with a young researcher named Leslie Valiant. The trio devised a universal simulation procedure that always saves a bit of space. No matter what algorithm you give it, you'll get an equivalent one whose space budget is slightly smaller than the original algorithm's time budget.

"Anything you can do in so much time, you can also do in slightly less space," Valiant said. It was the first major step in the quest to connect space and time.

But then progress stalled, and complexity theorists began to suspect that they'd hit a fundamental barrier. The problem was precisely the universal character of Hopcroft, Paul and Valiant's simulation. While many problems can be solved with much less space than time, some intuitively seemed like they'd need nearly as much space as time. If so, more space-efficient universal simulations would be impossible. Paul and two other researchers soon proved that they are indeed impossible, provided you make one seemingly obvious assumption: Different chunks of data can't occupy the same space in memory at the same time.

"Everybody took it for granted that you cannot do better," Wigderson said.

Paul's result suggested that resolving the P versus PSPACE problem would require abandoning simulation altogether in favor of a different approach, but nobody had any good ideas. That was where the problem stood for 50 years — until Williams finally broke the logjam. First, he had to get through college.

In 1996, the time came for Williams to apply to colleges. He knew that pursuing complexity theory would take him far from home, but his parents made it clear that the West Coast and Canada were out of the question. Among his remaining options, Cornell stood out to him for its prominent place in the history of his favorite discipline.

"This guy Hartmanis defined the time and space complexity classes," he recalled thinking. "That was important for me."

Williams was admitted to Cornell with generous financial aid and arrived in the fall of 1997, planning to do whatever it took to become a complexity theorist himself. His single-mindedness stuck out to his fellow students.

"He was just super-duper into complexity theory," said Scott Aaronson, a computer scientist at the University of Texas, Austin, who overlapped with Williams at Cornell.

For 50 years, researchers had assumed it was impossible to improve Hopcroft, Paul and Valiant's universal simulation. Williams' idea — if it worked — wouldn't just beat their record — it would demolish it.

"I thought about it, and I was like, 'Well, that just simply can't be true,'" Williams said. He set it aside and didn't come back to it until that fateful day in July, when he tried to find the flaw in the argument and failed. After he realized that there was no flaw, he spent months writing and rewriting the proof to make it as clear as possible.

Valiant got a sneak preview of Williams' improvement on his decades-old result during his morning commute. For years, he's taught at Harvard University, just down the road from Williams' office at MIT. They'd met before, but they didn't know they lived in the same neighborhood until they bumped into each other on the bus on a snowy February day, a few weeks before the result was public. Williams described his proof to the startled Valiant and promised to send along his paper.

"I was very, very impressed," Valiant said. "If you get any mathematical result which is the best thing in 50 years, you must be doing something right."

With his new simulation, Williams had proved a positive result about the computational power of space: Algorithms that use relatively little space can solve all problems that require a somewhat larger amount of time.
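As reported when the result was announced, Williams' simulation gives roughly:

\[
\mathrm{TIME}\bigl[t(n)\bigr] \;\subseteq\; \mathrm{SPACE}\!\Bigl[\sqrt{t(n)\,\log t(n)}\Bigr],
\]

a square-root-scale saving rather than the logarithmic-factor saving of the 1975 simulation; see the published paper for the exact statement and the conditions on t(n).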

The difference is a matter of scale. P and PSPACE are very broad complexity classes, while Williams' results work at a finer level. He established a quantitative gap between the power of space and the power of time, and to prove that PSPACE is larger than P, researchers will have to make that gap much, much wider.

That's a daunting challenge, akin to prying apart a sidewalk crack with a crowbar until it's as wide as the Grand Canyon. But it might be possible to get there by using a modified version of Williams' simulation procedure that repeats the key step many times, saving a bit of space each time. It's like a way to repeatedly ratchet up the length of your crowbar — make it big enough, and you can pry open anything.

"It could be an ultimate bottleneck, or it could be a 50-year bottleneck," Valiant said. "Or it could be something which maybe someone can solve next week."

"I can never prove precisely the things that I want to prove," Williams said. "But often, the thing I prove is way better than what I wanted."

Journal References:
Dr. Juris Hartmanis Interview, July 26, 2009, Cornell University, Ithaca, New York
On Time Versus Space, Journal of the ACM (JACM)
Space Bounds for a Game on Graphs, Journal of the ACM (JACM)
Tree Evaluation Is in Space O(log n · log log n), Journal of the ACM (JACM)


Original Submission

posted by jelizondo on Monday July 21, @01:22AM   Printer-friendly

An Anonymous Coward writes:

Open, free, and completely ignored: The strange afterlife of Symbian

The result of the pioneering joint Psion and Nokia smartphone effort is still out there on GitHub.

Smartphones are everywhere. They are entirely commoditized now. Most of them run Android, which uses the Linux kernel. The rest run Apple's iOS, which uses the same XNU kernel as macOS. As we've said before, they're not Unix-like, they really are Unix™.

There have been a bunch of others. BlackBerry tried hard with BB10, but even a decade ago, it was over. It was based on QNX and Qt, and both of those are doing fine. We reported last year that QNX 8 is free to use again. Palm's WebOS ended up with HP and now runs in LG smart TVs – but it's Linux underneath.

The most radical, though, was probably Symbian. The Register covered it at length back in the day, notably the epic Psion: the Last Computer feature, followed by the two-part Symbian, The Secret History, and Symbian UI Wars features.

Built from scratch in the late 1990s in the then-relatively new C++, it evolved into a real-time microkernel OS for handhelds, with the radical EKA2 microkernel designed by Dennis May and documented in detail in the book Symbian OS Internals. There's also The Symbian OS Architecture Sourcebook [PDF]. An official version of the source code is on GitHub, and other copies are out there.

We liked this description from CHERI Project boffin David Chisnall:

The original Symbian kernel was nothing special, but EKA2 (which is the one described in the amazing Symbian Internals book) was a thing of beauty. It had a realtime nano-kernel (does not allocate memory) that could run both an RTOS and a richer application stack.

It was a victim of poor timing: the big advantage was the ability to run both the apps and the phone stack on the same core, but it came along as Arm cores became cheap enough that just sticking two in the SoC was cheap enough.

Before Nokia was assimilated and digested by Microsoft, it open sourced the OS, and despite some licensing concerns, it's still there.

It strikes this vulture as odd that while work continues on some ground-up FOSS OS projects in C++, such as the Genode OS or Serenity OS, which we looked at in 2022, the more complete Symbian, which shipped on millions of devices and for a while had a thriving third-party application market, languishes ignored. (Incidentally, the Serenity OS project lead has moved on to the independent Ladybird browser, which we looked at in 2023. Work on the OS continues, now community-led.)

Symbian's progenitor, Psion EPOC32, predates much of the standardization of C++ – much as BeOS did. We've seen comments that it was not easy to program, but tools such as P.I.P.S. made it easier. Nokia wasted vast effort on multiple incompatible UIs, which have been blamed for tearing Symbian apart, but none of that matters now: adapt some existing FOSS stuff, and forget backwards compatibility. Relatively few of the apps were FOSS, and who needs touchscreen phone apps on a Raspberry Pi anyway? Qt would be ideal – it's a native C++ tool too.

Fans of all manner of 20th century proprietary OSes from AmigaOS to OS/2 bemoan that these never went open source. Some of BeOS made it into PalmOS Cobalt but that sank. Palm even mulled basing an Arm version of PalmOS on Symbian, but the deal fell through.

Some of those OSes have been rebuilt from scratch, including AmigaOS as AROS and BeOS as Haiku. But they run on Intel. Neither runs natively on Arm, and yet Symbian sits there ignored. Sometimes you can't even give the good stuff away.

by Liam Proven // Thu 17 Jul 2025 // 07:27 UTC


Original Submission

posted by janrinok on Sunday July 20, @08:35PM   Printer-friendly

Arthur T Knackerbracket has processed the following story:

The Information Technology Organization of Iran (ITOI), the government body that develops and implements IT services for the country, is looking for suppliers of cloud computing.

The org[anisation] recently posted a notification of its desire to evaluate, grade, and rank cloud players to assess their suitability to host government services.

At the end of the exercise, the organization hopes to have a panel of at least three cloud operators capable of handling government services.

The government agency will base its assessments on compliance with standards such as ISO 27017 and ISO 27018, which define controls for secure cloud computing and protection of personally identifiable information.

ITOI also expects companies that participate in its evaluation to be compliant with the NIST SP 800-145 definition of cloud computing.

Yes, Iran recognizes NIST – the USA's National Institute of Standards and Technology – despite regarding America as a trenchant enemy.

ITOI has cast the net wide, by seeking cloud operators with the capacity to deliver IaaS, PaaS, or SaaS. Service providers that deliver private, public, hybrid or community clouds are also welcome, as are service providers who specialize in security, monitoring, support services, or cloud migration.

Organizations that pass ITOI’s tests will earn a “cloud service rating certificate” that makes them eligible for inclusion on a list of authorized cloud services providers.


Original Submission

posted by janrinok on Sunday July 20, @07:12PM   Printer-friendly

https://lists.archlinux.org/archives/list/aur-general@lists.archlinux.org/thread/7EZTJXLIAQLARQNTMEW2HBWZYE626IFJ/
https://archive.ph/jwPRg

On the 16th of July, at around 8pm UTC+2, a malicious AUR package was
uploaded to the AUR. Two other malicious packages were uploaded by the
same user a few hours later. These packages were installing a script
coming from the same GitHub repository that was identified as a Remote
Access Trojan (RAT).

The affected malicious packages are:

- librewolf-fix-bin
- firefox-patch-bin
- zen-browser-patched-bin

The Arch Linux team addressed the issue as soon as they became aware of
the situation. As of today, 18th of July, at around 6pm UTC+2, the
offending packages have been deleted from the AUR.

We strongly encourage users that may have installed one of these
packages to remove them from their system and to take the necessary
measures in order to ensure they were not compromised.

/r/linux Discussion: http://old.reddit.com/r/linux/comments/1m3wodv/malware_found_in_the_aur/
/r/archlinux Discussion: https://old.reddit.com/r/archlinux/comments/1m387c5/aurgeneral_security_firefoxpatchbin/
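For Arch users who want a quick check, here is a minimal sketch (not an official remediation tool) that queries pacman for the three package names from the advisory; removal and a deeper audit for the RAT are still up to the user. It assumes a system with pacman on the PATH.

import subprocess

MALICIOUS = {"librewolf-fix-bin", "firefox-patch-bin", "zen-browser-patched-bin"}

# pacman -Qq lists the names of all installed packages, one per line.
installed = set(
    subprocess.run(["pacman", "-Qq"], capture_output=True, text=True, check=True)
    .stdout.split()
)

found = MALICIOUS & installed
if found:
    print("Advisory packages present:", ", ".join(sorted(found)))
    print("Remove them (e.g. `sudo pacman -Rns <pkg>`) and audit the system for the RAT.")
else:
    print("None of the advisory packages are installed.")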


Original Submission

posted by janrinok on Sunday July 20, @03:49PM   Printer-friendly

https://distrowatch.com/dwres.php?resource=showheadline&story=20030

Clear Linux is a rolling-release, highly optimized distribution developed by Intel. Or, it is now more accurate to say it "was", since Intel has decided to abruptly discontinue the project. Just one day after the project's latest snapshot, the following announcement was published on the distribution's forum: "Effective immediately, Intel will no longer provide security patches, updates, or maintenance for Clear Linux OS, and the Clear Linux OS GitHub repository will be archived in read-only mode. So, if you're currently using Clear Linux OS, we strongly recommend planning your migration to another actively maintained Linux distribution as soon as possible to ensure ongoing security and stability."


Original Submission

posted by jelizondo on Sunday July 20, @11:04AM   Printer-friendly

An Anonymous Coward writes:

Microsoft's Copilot finally comes into its own with new AI features like Recall

Buy any new Windows PC and you might notice an unfamiliar key: the Copilot key. Launched in January, it promised quick access to Microsoft's AI Copilot. Yet features were limited, causing critics to wonder: Is this it?

Microsoft Build 2024, the company's annual developer conference, had a reply: No. On 20 May, the company revealed Copilot+ PCs, a new class of Windows computers that exclusively use Qualcomm chips (for now, at least) to power a host of AI features that run on-device. Copilot+ PCs can quickly recall tasks you've completed on the PC, refine simple sketches in Paint, and translate languages in a real-time video call. Microsoft's Surface Laptop and Surface Pro will showcase these features, but they're joined by Copilot+ PCs from multiple laptop partners including Acer, Asus, Dell, HP, Lenovo, and Samsung.

"We wanted to put the best foot forward," said Brett Ostrum, corporate vice president of Surface devices at Microsoft. "When we started this journey, the goal was that Surface was going to ship relevant volumes on [Qualcomm] silicon. And people need to love it."

Windows' Recall is a new way to search

Microsoft revealed several AI features at Build 2024, but the highlight was Recall. Similar to Rewind, an app for the Mac I tried in December 2023, Recall can help Windows users find anything they've seen, heard, or opened on their PC. This includes files, documents, and apps, but also images, videos, and audio. Recall defaults to a scrollable timeline, which is broken up into discrete events detected by Recall, but users can also browse with semantic text search.
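As a toy illustration of what searching captured snapshots might look like, the sketch below indexes the text extracted from a few hypothetical snapshots and ranks them against a free-form query. Microsoft's Recall runs proprietary on-device models on the NPU; plain TF-IDF from scikit-learn merely stands in for the idea.

from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

# Hypothetical text extracted from captured snapshots, keyed by timestamp.
snapshots = {
    "2024-05-20 10:02": "Paint sketch of a mountain cabin, Cocreator panel open",
    "2024-05-20 11:15": "Copilot chat with a fictional blueberry pancake recipe",
    "2024-05-20 13:40": "Budget spreadsheet, Q2 travel expenses tab",
}

vectorizer = TfidfVectorizer()
matrix = vectorizer.fit_transform(snapshots.values())

def search(query: str):
    """Rank snapshots by similarity to the query, best match first."""
    scores = cosine_similarity(vectorizer.transform([query]), matrix)[0]
    return sorted(zip(snapshots, scores), key=lambda kv: -kv[1])

print(search("that pancake recipe I asked Copilot for")[0])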

It's a simple feature to use, but its implications are vast. If Recall works as advertised, it could fundamentally change how people interact with Windows PCs. There's arguably little need to organize photos from a vacation or carefully file away notes if Recall can find anything, and everything, you've opened on your PC.

"It used to be if you interacted with your PC, you used a command line. Then we came up with the graphical user interface," said Ostrum. "Now, how do you find the things that you are looking for? Recall is a much more natural and richer way to interact with your files."

There's one unavoidable caveat: It's too early to know if Recall will do what Microsoft says. I tried the feature firsthand, and found that it could recall a fictional recipe I asked Microsoft Copilot to create. It did so immediately, and also after several hours had passed. Whether it can do the same next month, or next year, remains to be seen.

While Recall was the star, it was joined by several additional AI features. These include Cocreator, a new feature for Microsoft Paint that uses AI to convert simple sketches into more elaborate digital art, and Live Captions, which captions and translates video in real time. Like Recall, both features lean on a Copilot+ PC's neural processing unit (NPU). That means these features, again like Recall, won't be available on older PCs.

These features are intriguing, but they're shadowed by a concern: privacy. Recall could help you find lost documents, and live translation could lower language barriers, but they only work if Microsoft's AI captures what's happening on your PC. The company hopes to ease these concerns by running AI models on-device and encrypting any data that's stored.

Qualcomm partnership leaves Intel, AMD in the cold

Of course, running an AI model on-device isn't easy. CPUs can handle some AI models, but performance often isn't ideal, and many AI models aren't optimized for the hardware. GPUs are better fit for AI workloads but can draw a lot of power, which shortens battery life.

That's where Qualcomm comes into the picture. Its latest laptop chip, the Snapdragon X Elite, was designed by many of the same engineers responsible for Apple's M1 chip and includes an NPU.

Microsoft's two Copilot+ PCs, the Surface Laptop and Surface Pro, both have Snapdragon X Elite processors, and both quote AI performance of up to 45 trillion operations per second. Intel's current Intel Core Ultra processors are a step behind, with quoted AI performance up to 34 trillion operations per second.

That's apparently not enough for Microsoft: All Copilot+ PCs available at launch on 18 June will have Qualcomm chips inside. And many new AI features, including Windows' Recall, only work on Copilot+ PCs. Put simply: If you want to use Recall, you must buy Qualcomm.

Intel and AMD chips will appear in Copilot+ PCs eventually, but Ostrum said that may not happen until the end of 2024 or early 2025.

"We will continue to partner with [Intel and AMD] when it makes sense," said Ostrum. "There is both an element of how much performance there is, but there's also an element of how efficient that performance is [...] we don't want [AI] to be taxing multiple hours of battery life at a given time." Ostrum says activating AI features like Windows' Recall on a Copilot+ PC shaves no more than 30 to 40 minutes off a laptop's battery life, and all of Microsoft's battery-life quotes for Surface devices (which promise up to 15 hours of Web browsing and 22 hours of video playback) assume Copilot+ AI features are turned on.

It's unusual to see a major Windows product launch without Intel at the forefront of it, but that underscores Microsoft's belief that features like Recall only work on hardware that prioritizes AI performance and efficiency. If Microsoft has its way, the Copilot key won't be a fad. It'll be the most important key on every Windows PC.

So, are you getting one or staying as far away as you can?


Original Submission

posted by jelizondo on Sunday July 20, @06:18AM   Printer-friendly
from the triple-e dept.

Not much more to say than, damn.

Most of the companies in the list don't ring a bell. I do remember moving dBase data into FoxPro for a company I worked for in the early 90's. And of course the Skype, Nokia and GitHub deals were big news.

I suppose once Bill Gates completes his acquisition of all the farmland in the U.S. he can die happy?


Original Submission

posted by jelizondo on Sunday July 20, @01:33AM   Printer-friendly
from the Belt-and-Suspenders dept.

Is Tor Trustworthy and Safe?:

[Editor's Note: There is a suggestion that the reason this has surfaced (again) at this time is an attempt to attract more people to VPNs rather than relying on Tor. This is most evident from the later stages of this document. However, from our own experience we have noted that Tor is not a reliable way of maintaining true anonymity. --JR]

There is a lot of misinformation being promoted in various privacy circles about Tor. This article will examine some facts about Tor and assess whether it is the infallible privacy tool it's made out to be by some.

There is a growing chorus of people who blindly recommend Tor to anyone looking for online anonymity. This recommendation often ignores mountains of evidence suggesting that Tor is not the "privacy tool" it's made out to be.

No privacy tool is above criticism or scrutiny, and each has pros and cons. Unfortunately, Tor has garnered a cult-like following in recent years among people who pretend it's infallible. Honest criticism of Tor is often met with accusations of "FUD" and ad-hominem attacks, so as not to disrupt the collective Groupthink.

Never mind the fact that the Tor network is a popular hangout for pedophiles and drug dealers – along with the law enforcement these types attract. Today, Tor is being marketed as some kind of grass-roots privacy tool that will protect you against government surveillance and various bad actors.

According to Roger Dingledine (Tor co-founder) and other key Tor developers, getting people (outside the US government) to widely adopt Tor is very important for the US government's ability to use Tor for its own purposes. In this goal, they have largely succeeded with Tor being widely promoted in various privacy circles by people who don't know any better.

But is Tor really a secure and trustworthy privacy tool?

Here are the facts.

1. Tor is compromised (and not anonymous)

That governments can de-anonymize Tor users is a well-known point that's been acknowledged for years. In 2013 the Washington Post published an article citing reports that US government agencies had figured out how to de-anonymize Tor users on a "wide scale". From the Washington Post:

Since 2006, according to a 49-page research paper titled simply "Tor," the agency has worked on several methods that, if successful, would allow the NSA to uncloak anonymous traffic on a "wide scale" — effectively by watching communications as they enter and exit the Tor system, rather than trying to follow them inside. One type of attack, for example, would identify users by minute differences in the clock times on their computers.

There are also reports of government agencies cooperating with researchers to "break" or somehow exploit Tor to de-anonymize users:

Then in July, a much anticipated talk at the Black Hat hacking conference was abruptly canceled. Alexander Volynkin and Michael McCord, academics from Carnegie Mellon University (CMU), promised to reveal how a $3,000 piece of kit could unmask the IP addresses of Tor hidden services as well as their users.

Its description bore a startling resemblance to the attack the Tor Project had documented earlier that month. Volynkin and McCord's method would deanonymize Tor users through the use of recently disclosed vulnerabilities and a "handful of powerful servers." On top of this, the pair claimed they had tested attacks in the wild.

For $3,000 worth of hardware, this team from Carnegie Mellon could effectively "unmask" Tor users. And this was in 2015. A 2017 court case proved that the FBI can also de-anonymize Tor users: "The means by which the FBI is able to de-anonymize Tor users and discover their real IP address remains classified information. In a 2017 court case, the FBI refused to divulge how it was able to do this, which ultimately led to child abusers on the Tor network going free."

From the Tech Times:

In this case, the FBI managed to breach the anonymity Tor promises and the means used to collect the evidence from the dark web make up a sensitive matter. The technique is valuable to the FBI, so the government would rather compromise this case than release the source code it used. "The government must now choose between disclosure of classified information and dismissal of its indictment," federal prosecutor Annette Hayes said in a court filing on Friday.

The cat is out of the bag. The FBI (and presumably other government agencies) has proven to be fully capable of de-anonymizing Tor users. Most Tor promoters simply ignore these different cases and the obvious implications.

2. Tor developers are cooperating with US government agencies

Some Tor users may be surprised to know the extent to which Tor developers are working directly with US government agencies. After all, Tor is often promoted as a grass-roots privacy effort to help you stay "anonymous" against Big Brother. One journalist was able to clarify this cooperation through FOIA requests, which revealed many interesting exchanges.

Here is one email correspondence in which Roger Dingledine discusses cooperation with the DOJ (Department of Justice) and FBI (Federal Bureau of Investigation), while also referencing "backdoors" being installed.

Tor developer Steven Murdoch discovered a vulnerability with the way Tor was handling TLS encryption. This vulnerability made it easier to de-anonymize Tor users, and as such, it would be valuable to government agencies. Knowing the problems this could cause, Steven suggested keeping the document internal,

...it might be a good to delay the release of anything like 'this attack is bad; I hope nobody realizes it before we fix it'.

Eight days later, Roger Dingledine alerted two government agents about this vulnerability. While there is disagreement as to the seriousness of these issues, one thing remains clear. Tor developers are closely working with the US government. [...] Whether or not you agree with the ultimate conclusion of this researcher, the facts remain for anyone who wants to acknowledge them. The big issue is the close cooperation between Tor developers and US government agencies.

And if you really want to dive in, check out the full FOIA cache here.

3. When you use Tor, you stand out like a glow stick

Meet Eldo Kim. He was the Harvard student who assumed Tor would make him "anonymous" when sending bomb threats. Kim didn't realize that when he connected to Tor on the university network, he would stand out like a [...] glow stick. The FBI and the network admins at Harvard were able to easily pinpoint Kim because he was using Tor around the time the bomb threat email was sent through the Tor network. From the criminal complaint:

Harvard University was able to determine that, in the several hours leading up to the receipt of the e-mail messages described above, ELDO KIM accessed TOR using Harvard's wireless network.

Eldo Kim is just one of many, many examples of people who have bought into the lie that Tor provides blanket online anonymity – and later paid the price. Had Kim used a bridge or VPN before accessing the Tor network, he probably would have gotten away with it (we'll discuss this more below).
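The kind of correlation the complaint describes can be sketched in a few lines: filter network logs for hosts that talked to known Tor entry relays in a window around the time the threat arrived. The log format, addresses, and timestamps below are hypothetical; real investigations work from much richer data.

from datetime import datetime, timedelta

tor_entry_relays = {"198.51.100.7", "203.0.113.21"}   # hypothetical relay IPs
email_received = datetime(2013, 12, 16, 8, 30)
window = timedelta(hours=2)

# (timestamp, source IP on the campus network, destination IP)
flow_log = [
    (datetime(2013, 12, 16, 8, 5), "10.0.42.17", "198.51.100.7"),
    (datetime(2013, 12, 16, 9, 40), "10.0.13.2", "192.0.2.55"),
]

suspects = {
    src
    for ts, src, dst in flow_log
    if dst in tor_entry_relays and abs(ts - email_received) <= window
}
print(suspects)  # -> {'10.0.42.17'}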

4. Anybody can operate Tor nodes and collect your data and IP address

Many proponents of Tor argue that its decentralized nature is a benefit. While there are indeed advantages to decentralization, there are also some major risks. Namely, that anybody can operate the Tor nodes through which your traffic is being routed. There have been numerous examples of people setting up Tor nodes to collect data from gullible Tor users who thought they would be safe and secure. Take for example Dan Egerstad, a 22-year-old Swedish hacker. Egerstad set up a few Tor nodes around the world and collected vast amounts of private data in just a few months:

In time, Egerstad gained access to 1000 high-value email accounts. He would later post 100 sets of sensitive email logins and passwords on the internet for criminals, spies or just curious teenagers to use to snoop on inter-governmental, NGO and high-value corporate email.

The question on everybody's lips was: how did he do it? The answer came more than a week later and was somewhat anti-climactic. The 22-year-old Swedish security consultant had merely installed free, open-source software – called Tor – on five computers in data centres around the globe and monitored it. Ironically, Tor is designed to prevent intelligence agencies, corporations and computer hackers from determining the virtual – and physical – location of the people who use it.

People think they're protected just because they use Tor. Not only do they think it's encrypted, but they also think 'no one can find me'.

To not assume government agencies are doing this right now would be extremely naive. Commenting on this case, security consultant Sam Stover emphasized the risks of someone snooping traffic through Tor nodes:

Domestic, or international . . . if you want to do intelligence gathering, there's definitely data to be had there. (When using Tor) you have no idea if some guy in China is watching all your traffic, or some guy in Germany, or a guy in Illinois. You don't know.

In fact, that is exactly how Wikileaks got started. The founders simply set up Tor nodes to siphon off more than a million private documents. According to Wired:

WikiLeaks, the controversial whistleblowing site that exposes secrets of governments and corporations, bootstrapped itself with a cache of documents obtained through an internet eavesdropping operation by one of its activists, according to a new profile of the organization's founder.

The activist siphoned more than a million documents as they traveled across the internet through Tor, also known as "The Onion Router," a sophisticated privacy tool that lets users navigate and send documents through the internet anonymously.

Are governments running Tor nodes for bulk data collection?

Egerstad also suggests Tor nodes may be controlled by powerful agencies (governments) with vast resources:

In addition to hackers using Tor to hide their origins, it's plausible that intelligence services had set up rogue exit nodes to sniff data from the Tor network. "If you actually look in to where these Tor nodes are hosted and how big they are, some of these nodes cost thousands of dollars each month just to host because they're using lots of bandwidth, they're heavy-duty servers and so on," Egerstad says. "Who would pay for this and be anonymous?"

Back in 2014, government agencies seized a number of different Tor relays in what is known as "Operation Onymous". From the Tor Project blog:

Over the last few days, we received and read reports saying that several Tor relays were seized by government officials. We do not know why the systems were seized, nor do we know anything about the methods of investigation which were used. Specifically, there are reports that three systems of Torservers.net disappeared and there is another report by an independent relay operator.

This issue continues to gain attention. In this Gizmodo article from 2021, we find the same problems. Bad actors can and do operate Tor nodes. Additional reading: A mysterious threat actor is running hundreds of malicious Tor relays

The fundamental issue here is there is no real quality control mechanism for vetting Tor relay operators. Not only is there no authentication mechanism for setting up relays, but the operators themselves can also remain anonymous. Assuming that some Tor nodes are data collection tools, it would also be safe to assume that many different governments are involved in data collection, such as the Chinese, Russian, and US governments.

See also: Tor network exit nodes found to be sniffing passing traffic

5. Malicious Tor nodes do exist

If government-controlled Tor nodes weren't bad enough, you also have to consider malicious Tor nodes.

In 2016 a group of researchers presented a paper titled "HOnions: Towards Detection and Identification of Misbehaving Tor HSDirs" [PDF], which described how they identified 110 malicious Tor relays:

Over the last decade privacy infrastructures such as Tor proved to be very successful and widely used. However, Tor remains a practical system with a variety of limitations and open to abuse. Tor's security and anonymity is based on the assumption that the large majority of its relays are honest and do not misbehave. Particularly the privacy of the hidden services is dependent on the honest operation of Hidden Services Directories (HSDirs). In this work we introduce the concept of honey onions (HOnions), a framework to detect and identify misbehaving and snooping HSDirs. After the deployment of our system and based on our experimental results during the period of 72 days, we detect and identify at least 110 such snooping relays. Furthermore, we reveal that more than half of them were hosted on cloud infrastructure and delayed the use of the learned information to prevent easy traceback.

The malicious HSDirs identified by the team were mostly located in the United States, Germany, France, United Kingdom and the Netherlands. Just a few months after the HSDir issue broke, a different researcher identified a malicious Tor node injecting malware into file downloads.

Use at your own risk.

See also:

OnionDuke APT Malware Distributed Via Malicious Tor Exit Node

6. No warrant necessary to spy on Tor users

Another interesting case highlighting the flaws of Tor comes from 2016, when the FBI was able to infiltrate Tor to bust another pedophile group. According to Tech Times:

The U.S. Federal Bureau of Investigation (FBI) can still spy on users who use the Tor browser to remain anonymous on the web. Senior U.S. District Court Judge Henry Coke Morgan, Jr. has ruled that the FBI does not need a warrant to hack into a U.S. citizen's computer system. The ruling by the district judge relates to FBI sting called Operation Pacifier, which targeted a child pornography site called PlayPen on the Dark web. The accused used Tor to access these websites. The federal agency, with the help of hacking tools on computers in Greece, Denmark, Chile and the U.S., was able to catch 1,500 pedophiles during the operation.

While it's great to see these types of criminals getting shut down, this case also highlights the severe vulnerabilities of Tor as a privacy tool that can be trusted by journalists, political dissidents, whistleblowers, etc. The judge in this case officially ruled [PDF] that Tor users lack "a reasonable expectation of privacy" in hiding their IP address and identity. This essentially opens the door to any US government agency being able to spy on Tor users without obtaining a warrant or going through any legal channels.

This, of course, is a serious concern when you consider that journalists, activists, and whistleblowers are encouraged to use Tor to hide from government agencies and mass surveillance.

7. Tor was created by the US government (and not for your "right to privacy")

If you think Tor was created for "privacy rights" or some other noble-sounding cause, then you would be mistaken. The quote below, from the co-founder of Tor, speaks volumes.

I forgot to mention earlier, probably something that will make you look at me in a new light. I contract for the United States Government to build anonymity technology for them and deploy it. They don't think of it as anonymity technology, though we use that term. They think of it as security technology. They need these technologies so that they can research people they're interested in, so that they can have anonymous tip lines, so that they can buy things from people without other countries figuring out what they are buying, how much they are buying and where it is going, that sort of thing.

— Roger Dingledine, co-founder of Tor, 2004 speech

This quote alone should convince any rational person to never use the Tor network, unless of course you want to be rubbing shoulders with government spooks on the Dark Web.

The history of Tor goes back to the 1990s when the Office of Naval Research and DARPA were working to create an online anonymity network in Washington, DC. This network was called "onion routing" and bounced traffic across different nodes before exiting to the final destination.

In 2002, the Alpha version of Tor was developed and released by Paul Syverson (Office of Naval Research), as well as Roger Dingledine and Nick Mathewson, who were both on contract with DARPA. This three-person team, working for the US government, developed Tor into what it is today.

The quote above was taken from a 2004 speech by Roger Dingledine, which you can also listen to here.

After Tor was developed and released for public use, it was eventually spun off as its own non-profit organization, with guidance coming from the Electronic Frontier Foundation (EFF):

At the very end of 2004, with Tor technology finally ready for deployment, the US Navy cut most of its Tor funding, released it under an open source license and, oddly, the project was handed over to the Electronic Frontier Foundation.

8. Tor is funded by the US government

It's no secret that Tor is funded by various US government agencies. The key question is whether US government funding negatively affects Tor's independence and trustworthiness as a privacy tool.

Some journalists have closely examined the financial relationship between Tor and the US government:

Tor had always maintained that it was funded by a "variety of sources" and was not beholden to any one interest group. But I crunched the numbers and found that the exact opposite was true: In any given year, Tor drew between 90 to 100 percent of its budget via contracts and grants coming from three military-intel branches of the federal government: the Pentagon, the State Department and an old school CIA spinoff organization called the BBG.

Put simply: the financial data showed that Tor wasn't the indie-grassroots anti-state org that it claimed to be. It was a military contractor. It even had its own official military contractor reference number from the government.

Here are some of the different government funding sources for the Tor Project over the years:

Broadcasting Board of Governors:

"Broadcasting Board of Governors (BBG) [now called U.S. Agency for Global Media], a federal agency that was spun off from the CIA and today oversees America's foreign broadcasting operations, funded Tor to the tune of $6.1 million in the years from 2007 through 2015." (source)

State Department:

"The State Department funded Tor to the tune of $3.3 million, mostly through its regime change arm — State Dept's "Democracy, Human Rights and Labor" division." (source)

The Pentagon:

"From 2011 through 2013, the Pentagon funded Tor to the tune of $2.2 million, through a U.S. Department of Defense / Navy contract — passed through a defense contractor called SRI International." (source)

The grant is called: "Basic and Applied Research and Development in Areas Relating to the Navy Command, Control, Communications, Computers, Intelligence, Surveillance, and Reconnaissance."

We can also see what the Tor project has to say about the matter. When soliciting funds in 2005, Tor claimed that donors would be able to "influence" the direction of the project:

We are now actively looking for new contracts and funding. Sponsors of Tor get personal attention, better support, publicity (if they want it), and get to influence the direction of our research and development!

There you have it: the Tor team itself says sponsors get to influence the direction of its research and development. Do you really think the US government would invest millions of dollars into a tool that stifled its power?

9. When you use Tor, you help the US government do spooky stuff

The United States government can't simply run an anonymity system for everybody and then use it themselves only. Because then every time a connection came from it people would say, "Oh, it's another CIA agent looking at my website," if those are the only people using the network. So you need to have other people using the network so they blend together.

—Roger Dingledine, co-founder of the Tor Network, 2004 speech

The implications of this statement are quite serious. When you use Tor, you are literally helping the US government: your traffic helps to conceal the CIA agents who are also using Tor, as Dingledine and various journalists have pointed out.

Just as Roger Dingledine asserted in the opening quote to this section, Paul Syverson (Tor co-founder) also emphasized the importance of getting other people to use Tor, thereby helping government agents perform their work and not stand out as the only Tor users:

If you have a system that's only a Navy system, anything popping out of it is obviously from the Navy. You need to have a network that carries traffic for other people as well.

Tor is branded by many different individuals and groups as a grassroots project to protect people from government surveillance. In reality, however, it is a tool for government agents who are literally using it for military and intelligence operations (including spying on those who think they are "anonymous" on Tor).

Tor's utility for the military-surveillance apparatus is explained well in the following quote:

Tor was created not to protect the public from government surveillance, but rather, to cloak the online identity of intelligence agents as they snooped on areas of interest. But in order to do that, Tor had to be released to the public and used by as diverse a group of people as possible: activists, dissidents, journalists, paranoiacs, kiddie porn scum, criminals and even would-be terrorists — the bigger and weirder the crowd, the easier it would be for agents to mix in and hide in plain sight.

According to these Tor developers and co-founders, when you use Tor you are helping US government agents in doing whatever they do on the Tor network. Why would anyone who advocates for privacy and human rights want to do that?

10. IP address leaks when using Tor

Another recurring problem with Tor is IP address leaks – a serious issue that will de-anonymize Tor users, even if the leak is brief.

In November 2017, a flaw was discovered that exposed the real IP address of Tor users if they clicked on a local file-based address (file://) rather than an http:// or https:// address.

This issue illustrates a larger problem with Tor: it only encrypts traffic through the Tor browser, thereby leaving all other (non-Tor browser) traffic exposed.

Unlike a VPN that encrypts all traffic on your operating system, the Tor network only works through a browser configured for Tor. (See the 'what is a VPN' guide for an overview.)
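To make that scoping difference concrete, here is a minimal sketch (the details are assumptions for illustration: a local tor daemon listening on its default SOCKS port 9050, the requests library installed with SOCKS support, and the Tor Project's check.torproject.org/api/ip address-check endpoint). Only the request that is explicitly sent through the proxy is anonymized; the other request, from the very same script, goes out directly:

    # Sketch: only traffic explicitly routed through Tor's SOCKS proxy is protected.
    # Assumes a local tor daemon on 127.0.0.1:9050 and "pip install requests[socks]".
    import requests

    CHECK_URL = "https://check.torproject.org/api/ip"   # returns {"IsTor": ..., "IP": ...}
    TOR_PROXY = {"http": "socks5h://127.0.0.1:9050",
                 "https": "socks5h://127.0.0.1:9050"}

    via_tor = requests.get(CHECK_URL, proxies=TOR_PROXY, timeout=30).json()
    direct = requests.get(CHECK_URL, timeout=30).json()

    print("Through the proxy:", via_tor)   # IsTor true, IP is some Tor exit node
    print("Everything else:  ", direct)    # IsTor false, IP is your real address

A torrent client, PDF reader, or any other application on the same machine behaves like the second request unless it has been explicitly configured to use the proxy.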

This design leaves Tor users vulnerable to leaks which will expose their identity in many different situations:

  • Tor offers no protection when torrenting and will leak the user's IP address with torrent clients.
  • Tor may leak IP addresses when accessing files, such as PDFs or other documents, which will likely bypass proxy settings.
  • Windows users are also vulnerable to different types of leaks that will expose the user's real IP address.

It's important to note, however, that oftentimes de-anonymization is due to user error or misconfiguration. Therefore blame does not lie with Tor itself, but rather with people not using Tor correctly.

Dan Egerstad emphasized this issue as well when he stated:

People think they're protected just because they use Tor. Not only do they think it's encrypted, but they also think 'no one can find me'. But if you've configured your computer wrong, which probably more than 50 per cent of the people using Tor have, you can still find the person (on) the other side.

Once again, non-technical users would be better off using a good VPN service that provides system-wide traffic encryption and an effective kill switch to block all traffic if the VPN connection drops.

11. Using Tor can make you a target

As we saw above with the bomb threat hoax, Eldo Kim was targeted because he was on the Tor network when the bomb threat was sent. Other security experts also warn about Tor users being targeted merely for using Tor.

In addition, most really repressive places actually look for Tor and target those people. VPNs are used to watch Netflix and Hulu, but Tor has only one use case: to evade the authorities. There is no cover. (This assumes it is being used for evasion, even in a country that is incapable of breaking Tor's anonymity.)

In many ways Tor can be riskier than a VPN:

  1. VPNs are (typically) not actively malicious
  2. VPNs provide good cover that Tor simply cannot: "I was using it to watch Hulu videos" sounds much better than "I was just trying to buy illegal drugs online."

As we've pointed out here before, VPNs are more widely used than Tor – and for various (legitimate) reasons, such as streaming Netflix with a VPN.

So maybe you still need (or want?) to use Tor. How can you do so with more safety?

How to hide your IP address when using Tor

Given that Tor is compromised and bad actors can see the real IP address of Tor users, it would be wise to take extra precautions. This includes hiding your real IP address before accessing the Tor network.

To hide your IP address when accessing Tor, simply connect to a VPN server (through a VPN client on your computer) and then access Tor as normal (such as through the Tor browser). This will add a layer of encryption between your computer and the Tor network, with the VPN server's IP address replacing your real IP address.

Note: There are different ways to combine VPNs and Tor. I am only recommending the following setup: You → VPN → Tor → Internet (also called "Tor over VPN" or "Onion over VPN").

With this setup, even if a malicious actor was running a Tor server and logging all connecting IP addresses, your real IP address would remain hidden behind the VPN server (assuming you are using a good VPN with no leaks).
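As an illustration of what this layering buys you, here is a minimal pre-flight check, not a substitute for a proper kill switch (the ipify address-echo service and the placeholder home IP are assumptions for the example). It simply refuses to proceed if the VPN tunnel is not actually carrying your traffic, so that the Tor guard node only ever sees the VPN server's address:

    # Sketch: verify the VPN tunnel is up before starting the Tor Browser,
    # so the Tor network never sees your real IP address.
    import sys
    import urllib.request

    HOME_IP = "203.0.113.7"   # placeholder: your real IP, as seen without the VPN

    with urllib.request.urlopen("https://api.ipify.org", timeout=30) as resp:
        public_ip = resp.read().decode().strip()

    if public_ip == HOME_IP:
        sys.exit("VPN tunnel is NOT active - your real IP is exposed. Aborting.")
    print(f"Public IP is {public_ip} (the VPN server). Safe to launch the Tor Browser.")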

Here are the benefits of routing your traffic through a secure VPN before the Tor network:

  1. Your real IP address remains hidden from the Tor network (Tor cannot see who you are)
  2. Your internet provider (ISP) or network admin will not be able to see you are using Tor (because your traffic is being encrypted through a VPN server).
  3. You won't stand out as much from other users because VPNs are more popular than Tor.
  4. You are distributing trust between Tor and a VPN. The VPN could see your IP address and Tor could see your traffic (sites you visit), but neither would have both your IP address and browsing activities.

For anyone distrustful of VPNs, there are a handful of verified no-logs VPN services whose "no logs" claims have been borne out in practice.

You can sign up for a VPN with a secure anonymous email account (not connected to your identity). For the truly paranoid, you can also pay with Bitcoin or any other anonymous payment method. Most VPNs do not require any name for registration, only a valid email address for account credentials. Using a VPN in a safe offshore jurisdiction (outside the 14 Eyes) may also be good, depending on your threat model.

For those seeking the highest levels of anonymity, you can chain multiple VPNs through Linux virtual machines (using Virtualbox, which is FOSS). You could also use VPN1 on your router, VPN2 on your computer, and then access the regular internet (or the Tor network) through two layers of encryption via two separate VPN services. This allows you to distribute trust across different VPN services and ensure neither VPN could have both your incoming IP address and traffic. This is discussed more in my guide on multi-hop VPN services.

Note: The claim that "VPN is fully, 100%, a single point/entity that you must trust" is false. This claim comes from this Tor promoter who, coincidentally, works for the US government's Naval Research Lab.

When you chain VPNs, you can distribute trust across different VPN services and different jurisdictions around the world, all paid for anonymously and not linked to your identity. With Tor alone, you put all your trust in The Onion Router...

Tor vulnerabilities and VPNs

There are other attacks that the Tor Project admits will de-anonymize Tor users (archived):

As mentioned above, it is possible for an observer who can view both you and either the destination website or your Tor exit node to correlate timings of your traffic as it enters the Tor network and also as it exits. Tor does not defend against such a threat model.

Once again, a VPN can help to mitigate the risk of de-anonymization by hiding your source IP address before accessing the guard node in the Tor circuit.
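To see what "correlate timings" means in practice, here is a toy sketch on purely synthetic data (the burst pattern, bin size, and delay are made-up parameters for illustration). An observer recording nothing but packet counts per time bin at both ends can match the two sides of a flow without decrypting anything:

    # Toy traffic-confirmation sketch: compare packet-timing histograms seen at
    # the client side and at the exit side. Entirely synthetic data.
    import numpy as np

    rng = np.random.default_rng(0)

    def histogram(times, bin_ms=100, window_s=60):
        """Packets per time bin - all a passive observer needs to log."""
        bins = np.arange(0, window_s * 1000 + bin_ms, bin_ms)
        counts, _ = np.histogram(times * 1000, bins=bins)
        return counts

    # A bursty client flow, the same flow leaving an exit ~0.5 s later, and an
    # unrelated flow from some other user.
    bursts = rng.uniform(0, 55, 20)
    entry_times = np.sort(np.concatenate([s + rng.exponential(0.2, 30) for s in bursts]))
    exit_times = entry_times + 0.5 + rng.normal(0, 0.05, entry_times.size)
    other_times = np.sort(rng.uniform(0, 60, entry_times.size))

    entry = histogram(entry_times)
    aligned_exit = np.roll(histogram(exit_times), -5)   # undo the 0.5 s delay (5 bins)
    print("same flow:     ", np.corrcoef(entry, aligned_exit)[0, 1])        # high
    print("unrelated flow:", np.corrcoef(entry, histogram(other_times))[0, 1])  # near zero

A real adversary would search over possible delays and use more robust statistics, but the principle is the same: a low-latency design like Tor leaks timing, which is exactly the threat model the Tor Project says it does not defend against.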

Can exit nodes eavesdrop on communications? From the Tor Project:

Yes, the guy running the exit node can read the bytes that come in and out there. Tor anonymizes the origin of your traffic, and it makes sure to encrypt everything inside the Tor network, but it does not magically encrypt all traffic throughout the Internet.

However, a VPN cannot do anything about a bad Tor exit node eavesdropping on your traffic, although it will help hide who you are (though your traffic can still give you away).

Conclusion

No privacy tool is above criticism.

Just like with Tor, I have also pointed out numerous problems with VPNs, including VPNs that were caught lying about logs, VPN scams, and dangerous free VPN services. All privacy tools come with pros and cons. Selecting the best tool for the job all boils down to your threat model and unique needs.

Unfortunately, for many in the privacy community, Tor is now considered to be an infallible tool for blanket anonymity, and to question this dogma means you are "spreading FUD". This is pathetic.

In closing, for regular users seeking more security and online anonymity, I'd simply avoid Tor altogether. A VPN will offer system-wide encryption, much faster speeds, and user-friendly clients for various devices and operating systems. This will also prevent your ISP from seeing what you're up to online.

Additionally, VPNs are more mainstream and there are many legitimate (and legal!) reasons for using them. Compared to Tor, you definitely won't stand out as much with a VPN.

For those who still want to access the Tor network, doing so through a reliable VPN service will add an extra layer of protection while hiding your real IP address.

Further Reading:

Tor and its Discontents: Problems with Tor Usage as Panacea

Users Get Routed: Traffic Correlation on Tor by Realistic Adversaries

Tor network exit nodes found to be sniffing passing traffic

On the Effectiveness of Traffic Analysis Against Anonymity Networks Using Flow Records

Judge confirms what many suspected: Feds hired CMU to break Tor


Original Submission

posted by janrinok on Saturday July 19, @08:33PM   Printer-friendly
from the I-cut-myself-today dept.

The Guardian has a long and very interesting article about pain and its psychology and physiology, with gripping anecdotes such as the soldier who picks his torn-off arm up from the ground and walks to receive medical attention, or the woman who worked and walked around for 10 hours with a burst cyst and "a belly full of blood."

Why can some people withstand severe pain while others cry over a little knock to the knee?

Some say it was John Sattler's own fault. The lead-up to the 1970 rugby league grand final had been tense; the team he led, the South Sydney Rabbitohs, had lost the 1969 final. Here was an opportunity for redemption. The Rabbitohs were not about to let glory slip through their fingers again.

Soon after the starting whistle, Sattler went in for a tackle. As he untangled – in a move not uncommon in the sport at the time – he gave the Manly Sea Eagles' John Bucknall a clip on the ear.

Seconds later – just three minutes into the game – the towering second rower returned the favour with force: Bucknall's mighty right arm bore down on Sattler, breaking his jaw in three places and tearing his skin; he would later need eight stitches. When his teammate Bob McCarthy turned to check on him, he saw his captain spurting blood, his jaw hanging low. Forty years later Sattler would recall that moment. One thought raged in his shattered head: "I have never felt pain like this in my life."

But he played on. Tackling heaving muscular players as they advanced. Being tackled in turn, around the head, as he pushed forward. All the while he could feel his jaw in pieces.

At half-time the Rabbitohs were leading. In the locker room, Sattler warned his teammates, "Don't play me out of this grand final."

McCarthy told him, "Mate, you've got to go off."

He refused. "I'm staying."

Sattler played the whole game. The remaining 77 minutes. At the end, he gave a speech and ran a lap of honour. The Rabbitohs had won. The back page of the next day's Sunday Mirror screamed "BROKEN JAW HERO".

[...]

How can a person bitten by a shark calmly paddle their surfboard to safety, then later liken the sensation of the predator clamping down on their limb to the feeling of someone giving their arm "a shake"? How is it that a woman can have a cyst on her ovary burst, her abdomen steadily fill with blood, but continue working at her desk for six hours? Or that a soldier can have his legs blown off then direct his own emergency treatment? [16:06 and quite moving.]

Each one of us feels pain. We all stub our toes, burn our fingers, knock our knees. And worse. The problem with living in just one mind and body is that we can never know whether our six out of 10 on the pain scale is the same as the patient in the chair next to us.

[...] But what is happening in the body and mind of a person who does not seem to feel the pain they "should" be feeling? Do we all have the capacity to be one of these heroic freaks?

And how did John Sattler play those 77 minutes?

Questions like these rattled around the mind of Lorimer Moseley when he showed up at Sydney's Royal North Shore hospital years ago as an undergraduate physiotherapy student. He wanted to interrogate a quip made by a neurology professor as he left the lecture theatre one day, that the worst injuries are often the least painful. So Moseley sat in the emergency room and watched people come in, recording their injuries and asking them how much they hurt.

"And this guy came in with a hammer stuck in his neck – the curly bit had got in the back and was coming out the front and blood was pouring all down," Moseley recalls. "But he was relaxed. He just walked in holding the hammer, relaxed. Totally fine."

Then the man turned around, hit his knee on a low table and began jumping up and down at the pain of the small knock.

"And I think, 'Whoa, what is happening there?'"

The curious student ruled out drugs, alcohol, shock. He realised that the reason the man did not feel pain from his hammer injury was due to the very point of pain itself.

"Pain is a feeling that motivates us to protect ourselves," says Moseley, now the chair in physiotherapy and a professor of clinical neurosciences at the University of South Australia.

"One of the beautiful things about pain is that it will motivate us to protect the body part that's in danger, really anatomically specific – it can narrow it right down to a tiny little spot."

[...] Prof Michael Nicholas is used to stories like these. "You can see it in probably every hospital ward. If you stay around long enough you'll hear comments like 'this person has more pain than they should have' or 'you might be surprised that they're not in pain'," he says. "What that highlights to me is the general tendency for all of us to think there should be a close relationship between a stimulus like an injury or a noxious event and the degree of pain the person feels.

"In fact, that's generally wrong. But it doesn't stop us believing it."

The reason we get it wrong, Nicholas says, "is that we have a sort of mind-body problem".

Eastern medicine and philosophy has long recognised the interconnectedness of body and mind, and so too did the west in early civilisations. In ancient Greece the Algea, the gods of physical pain, were also gods associated with psychic pain – with grief and distress. But in the 1600s the French philosopher René Descartes set western thinking on a different course, asserting that the mind and body were separate entities.

"When people come to see me, they're often worried they're being told it's all in their head," Nicholas says.

"Of course pain is in your head. It's in your brain. You know, it's the brain that is where you get that experience ... It's never all physical."

This is true of people who tolerate acute pain. It's never all physical. And it has little to do with heroism or freakishness.

[...] And so the experience of acute pain is caught in the realm of mystery and mythology; where we can understand much of what is happening in a body and part of what is happening in a brain but never actually know what another person feels.

The legend of John Sattler goes that after that fateful right hook from Bucknall, the bloodied captain turned to his teammate Matthew Cleary. That no one knew, perhaps not even himself, the damage that had been done to him became his mythological power.

"Hold me up," he said. "So they don't know I'm hurt."


Original Submission

posted by janrinok on Saturday July 19, @03:50PM   Printer-friendly

FuguIta has been mentioned here recently.

The creator has released a "FuguIta desktop environment demo version" featuring:

  • Desktop environment: xfce-4.20.0
  • Web browser: firefox-137.0
  • Mailer: thunderbird-128.9.0
  • Office: libreoffice-25.2.1.2v0
  • Media player: vlc-3.0.21p2
  • Audio player: audacious-4.4.2
  • Fonts: noto-cjk-20240730, noto-emoji-20240730, noto-fonts-24.9.1v0

From the creator:

I made a demo version of FuguIta with a desktop environment. This demo version demonstrates that FuguIta can be used with a desktop environment as easily as a regular live system.

This demo version uses the following features of FuguIta and OpenBSD.

  • Automatic file saving at shutdown using the /etc/rc.shutdown file
  • Automatic startup using the noasks file
  • Automatic login using the xenodm-config file
  • Additional partition mounting using the /etc/fuguita/fstab.tail file
  • Initialization only at first startup using /etc/rc.firsttime

There is also an example of how to set up the Fluxbox window manager.


Original Submission

posted by janrinok on Saturday July 19, @11:08AM   Printer-friendly
from the fresh-air-is-killing-us! dept.

Arthur T Knackerbracket has processed the following story:

Satellite data suggests cloud darkening is responsible for much of the warming since 2001, and the good news is that it is a temporary effect due to a drop in sulphate pollution

Clouds have been getting darker and reflecting less sunlight as a result of falling sulphate air pollution, and this may be responsible for a lot of recent warming beyond that caused by greenhouse gases.

“Two-thirds of the global warming since 2001 is SO2 reduction rather than CO2 increases,” says Peter Cox at the University of Exeter in the UK.

Some of the sunshine that reaches Earth is reflected and some is absorbed and later radiated as heat. Rising carbon dioxide levels trap more of that radiant heat – a greenhouse effect that causes global warming. But the planet’s albedo – how reflective it is – also has a big influence on its temperature.

Since 2001, satellite instruments called CERES have been directly measuring how much sunlight is reflected versus how much is absorbed. These measurements show a fall in how much sunlight is being reflected, meaning the planet is getting darker – its albedo is falling – and this results in additional warming.
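For a rough sense of scale (this back-of-the-envelope calculation is our illustration, not something from the study), a zero-dimensional energy-balance model shows how sensitive surface temperature is to even a small albedo change. It ignores the greenhouse effect and all feedbacks, so only the relative size of the effect is meaningful, not the absolute temperatures:

    # Sketch: zero-dimensional radiative balance, absorbed sunlight = emitted heat.
    SOLAR_CONSTANT = 1361.0   # W/m^2 at Earth's distance from the Sun
    SIGMA = 5.67e-8           # Stefan-Boltzmann constant, W/m^2/K^4

    def equilibrium_temp(albedo):
        absorbed = (1 - albedo) * SOLAR_CONSTANT / 4   # averaged over the whole sphere
        return (absorbed / SIGMA) ** 0.25              # temperature that re-radiates it

    t_before = equilibrium_temp(0.30)   # roughly Earth's present-day albedo
    t_after = equilibrium_temp(0.29)    # a one-percentage-point darkening
    print(f"Warming from a 0.01 albedo drop: {t_after - t_before:.2f} K")   # about 0.9 K

Even in this crude model, darkening the planet by a single percentage point is worth the better part of a degree, which is why the CERES albedo trend and its attribution matter so much.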

There are many reasons for the falling albedo, from less snow and sea ice to less cloud cover. But an analysis of CERES data from 2001 to 2019 by Cox and Margaux Marchant, also at Exeter, suggests the biggest factor is that clouds are becoming darker.

It is known that sulphate pollution from industry and ships can increase the density of droplets in clouds, making them brighter or more reflective. This is the basis of one proposed form of geoengineering, known as marine cloud brightening. But these emissions have been successfully reduced in recent years, partly by moving away from high-sulphur fuels such as coal.

So Marchant and Cox looked at whether the decline in cloud brightness corresponded with areas with falling levels of SO2 pollution, and found that it did. The pair presented their preliminary results at the Exeter Climate Forum earlier this month.

The results are encouraging because the rapid warming in recent years has led some researchers to suggest that Earth’s climate sensitivity – how much it warms in response to a given increase in atmospheric CO2 – is on the high side of estimates. As it turns out, extra warming due to falling pollution will be short-lived, whereas if the cloud darkening was a feedback caused by rising CO2, it would mean ever more warming due to this as CO2 levels keep rising.

“If this darkening is a change in cloud properties due to the recent decrease in SO2 emissions, rather than a change in cloud feedbacks that indicate a higher-than-anticipated climate sensitivity, then this is great news,” says Laura Wilcox at the University of Reading in the UK, who wasn’t involved in the study.

There are some limitations with the datasets Marchant and Cox used, says Wilcox. For instance, the data on SO2 pollution has been updated since the team did their analysis.

And two recent studies have suggested the darkening is mainly due to a reduction in cloud cover, rather than darker clouds, she says. “The drivers of the recent darkening trends are a hotly debated topic at the moment.”

Overall, though, Wilcox says her own work also supports the conclusion that the recent acceleration in global warming has been primarily driven by the decrease in air pollution, and that it is likely to be a temporary effect.


Original Submission

posted by hubie on Saturday July 19, @06:20AM   Printer-friendly
from the good!-your-hate-has-made-you-powerful dept.

Brothers-in-law use construction knowledge to compete against Comcast in Michigan:

Samuel Herman and Alexander Baciu never liked using Comcast's cable broadband. Now, the residents of Saline, Michigan, operate a fiber Internet service provider that competes against Comcast in their neighborhoods and has ambitions to expand.

[...] "Many times we would have to call Comcast and let them know our bandwidth was slowing down... then they would say, 'OK, we'll refresh the system.' So then it would work again for a week to two weeks, and then again we'd have the same issues," he said.

Herman, now 25, got married in 2021 and started building his own house, and he tried to find another ISP to serve the property. He was familiar with local Internet service providers because he worked in construction for his father's company, which contracts with ISPs to build their networks.

But no fiber ISP was looking to compete directly against Comcast where he lived, though Metronet and 123NET offer fiber elsewhere in the city, Herman said. He ended up paying Comcast $120 a month for gigabit download service with slower upload speeds. Baciu, who lives about a mile away from Herman, was also stuck with Comcast and was paying about the same amount for gigabit download speeds.

Herman said he was the chief operating officer of his father's construction company and that he shifted the business "from doing just directional drilling to be a turnkey contractor for ISPs." Baciu, Herman's brother-in-law (having married Herman's oldest sister), was the chief construction officer. Fueled by their knowledge of the business and their dislike of Comcast, they founded a fiber ISP called Prime-One.

Now, Herman is paying $80 a month to his own company for symmetrical gigabit service. Prime-One also offers 500Mbps for $75, 2Gbps for $95, and 5Gbps for $110. The first 30 days are free, and all plans have unlimited data and no contracts.

[...] Comcast seems to have noticed, Herman said. "They've been calling our clients nonstop to try to come back to their service, offer them discounted rates for a five-year contract and so on," he said.

A Comcast spokesperson told Ars that "we have upgraded our network in this area and offer multi-gig speeds there, and across Michigan, as part of our national upgrade that has been rolling out."

Meanwhile, Comcast's controversial data caps are being phased out. With Comcast increasingly concerned about customer losses, it recently overhauled its offerings with four plans that come with unlimited data. The Comcast data caps aren't quite dead yet because customers with caps have to switch to a new plan to get unlimited data.

Comcast told us that customers in Saline "have access to our latest plans with simple and predictable all-in pricing that includes unlimited data, Wi-Fi equipment, a line of Xfinity Mobile, and the option for a one or five-year price guarantee."


Original Submission