
Idiosyncratic use of punctuation - which of these annoys you the most?

  • Declarations and assignments that end with }; (C, C++, Javascript, etc.)
  • (Parenthesis (pile-ups (at (the (end (of (Lisp (code))))))))
  • Syntactically-significant whitespace (Python, Ruby, Haskell...)
  • Perl sigils: @array, $array[index], %hash, $hash{key}
  • Unnecessary sigils, like $variable in PHP
  • macro!() in Rust
  • Do you have any idea how much I spent on this Space Cadet keyboard, you insensitive clod?!
  • Something even worse...

Comments: 39 | Votes: 86

posted by mrpg on Sunday August 04, @08:50PM

Arthur T Knackerbracket has processed the following story:

The agency has been advancing optical communications, which use infrared light signals instead of the more conventional radio waves to transmit data. As part of these efforts, it recently conducted a series of flight tests that involved installing a laser terminal on the belly of a Pilatus PC-12 aircraft. This single-engine plane then proceeded to beam 4K video while soaring over Lake Erie to a ground station in Cleveland, Ohio.

From there, the video signal went on an epic journey, passing through NASA's White Sands facility in New Mexico before being fired off into space 22,000 miles away using infrared lasers towards an experimental satellite called the Laser Communications Relay Demonstration (LCRD). The LCRD then relayed the data to a special terminal aboard the ISS called ILLUMA-T, which beamed it back to Earth.

Despite this incredibly long distance, NASA says the laser link achieved transmission rates of over 900 Mbps. To put that into perspective, the average household internet in the US churns out 245 Mbps as of June 2024.
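
For a rough sense of scale, here is a minimal sketch comparing transfer times at the two quoted rates. The 25 GB payload is an illustrative assumption (roughly an hour of high-bitrate 4K video), not a figure from NASA or the article.

    # Compare transfer times at the quoted laser-link and household rates.
    # The 25 GB payload size is an illustrative assumption, not a NASA figure.
    PAYLOAD_GB = 25
    payload_bits = PAYLOAD_GB * 8 * 10**9  # decimal gigabytes to bits

    for label, mbps in [("laser relay link", 900), ("average US household, June 2024", 245)]:
        seconds = payload_bits / (mbps * 10**6)
        print(f"{label}: about {seconds / 60:.1f} minutes")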


Original Submission

posted by hubie on Sunday August 04, @04:11PM

Arthur T Knackerbracket has processed the following story:

For many years, insecticide-treated bed nets and indoor spraying have been crucial and highly effective methods for controlling mosquitoes that spread malaria, a serious global health threat. These strategies have also, incidentally, helped to reduce populations of other unwanted household pests such as bed bugs, cockroaches, and flies.

Now, a new North Carolina State University study reviewing the academic literature on indoor pest control shows that as the household insects developed resistance to the insecticides targeting mosquitoes, the return of these bed bugs, cockroaches and flies into homes has led to community distrust and often abandonment of these treatments – and to rising rates of malaria.

In short, the bed nets and insecticide treatments that were so effective in preventing mosquito bites – and therefore malaria – are increasingly viewed as the causes of household pest resurgence.

“These insecticide-treated bed nets were not intended to kill household pests like bed bugs, but they were really good at it,” said Chris Hayes, an NC State Ph.D. student and co-corresponding author of a paper describing the work. “It’s what people really liked, but the insecticides are not working as effectively on household pests anymore.”

“Non-target effects are usually harmful, but in this case they were beneficial,” said Coby Schal, Blanton J. Whitmire Distinguished Professor of Entomology at NC State and co-corresponding author of the paper.

“The value to people wasn’t necessarily in reducing malaria, but was in killing other pests,” Hayes added. “There’s probably a link between the use of these nets and widespread insecticide resistance in these house pests, at least in Africa.”

[...] The researchers say that all hope is not lost, though.

“There are, ideally, two routes,” Schal said. “One would be a two-pronged approach with both mosquito treatment and a separate urban pest management treatment that targets pests. The other would be the discovery of new malaria-control tools that also target these household pests at the same time. For example, the bottom portion of a bed net could be a different chemistry that targets cockroaches and bed bugs.

“If you offer something in bed nets that suppresses pests, you might reduce the vilification of bed nets.”

Reference: “Review on the impacts of indoor vector control on domiciliary pests: good intentions challenged by harsh realities” by Christopher C. Hayes and Coby Schal, 1 July 2024, Proceedings of the Royal Society B. DOI: 10.1098/rspb.2024.0609


Original Submission

posted by hubie on Sunday August 04, @11:26AM

Arthur T Knackerbracket has processed the following story:

Consider the drone: Although it is critical to national defense and prosperity, nearly all its components are made in China.

A country’s economic security—its ability to generate both national security and economic prosperity—is grounded in it having significant technological capabilities that outpace those of its adversaries and complement those of its allies. Though this is a principle well known throughout history, the move over the last few decades toward globalization and offshoring of technologically advanced industrial capacity has made ensuring a nation state's security and economic prosperity increasingly problematic. A broad span of technologies ranging from automation and secure communications to energy storage and vaccine design are the basis for wider economic prosperity—and high priorities for governments seeking to maintain national security. However, the necessary capabilities do not spring up overnight. They rely upon long decades of development, years of accumulated knowledge, and robust supply chains.

For the US and, especially, its allies in NATO, a particular problem has emerged: a “missing middle” in technology investment. Insufficient capital is allocated toward the maturation of breakthroughs in critical technologies to ensure that they can be deployed at scale. Investment is allocated either toward the rapid deployment of existing technologies or to scientific ideas that are decades away from delivering practical capability or significant economic impact (for example, quantum computers). But investment in scaling manufacturing technologies, learning while doing, and maturing of emerging technologies to contribute to a next-generation industrial base, is too often absent. Without this middle-ground commitment, the United States and its partners lack the production know-how that will be crucial for tomorrow’s batteries, the next generation of advanced computing, alternative solar photovoltaic cells, and active pharmaceutical ingredients.

While this once mattered only for economic prosperity, it is now a concern for national security too—especially given that China has built strong supply chains and other domestic capabilities that confer both economic security and significant geopolitical leverage.

Consider drone technology. Military doctrine has shifted toward battlefield technology that relies upon armies of small, relatively cheap products enabled by sophisticated software—from drones above the battlefield to autonomous boats to CubeSats in space.

Drones have played a central role in the war in Ukraine. First-person viewer (FPV) drones—those controlled by a pilot on the ground via a video stream—are often strapped with explosives to act as precision kamikaze munitions and have been essential to Ukraine’s frontline defenses. While many foundational technologies for FPV drones were pioneered in the West, China now dominates the manufacturing of drone components and systems, which ultimately enables the country to have a significant influence on the outcome of the war.

[...] China’s manufacturing dominance has resulted in a domestic workforce with the experience to achieve process innovations and product improvements that have no equal in the West. And it has come with the sophisticated supply chains that support a wide range of today’s technological capabilities and serve as the foundations for the next generation. None of that was inevitable. For example, most drone electronics are integrated on printed circuit boards (PCBs), a technology that was developed in the UK and US. However, first-mover advantage was not converted into long-term economic or national security outcomes, and both countries have lost the PCB supply chain to China.

[...] China’s dominance in LiPo batteries for drones reflects its overall dominance in Li-ion manufacturing. China controls approximately 75% of global lithium-ion capacity—the anode, cathode, electrolyte, and separator subcomponents as well as the assembly into a single unit. It dominates the manufacture of each of these subcomponents, producing over 85% of anodes and over 70% of cathodes, electrolytes, and separators. China also controls the extraction and refinement of minerals needed to make these subcomponents.

[...] While the absence of the high-tech industrial capacity needed for economic security is easy to label, it is not simple to address. Doing so requires several interrelated elements, among them designing and incentivizing appropriate capital investments, creating and matching demand for a talented technology workforce, building robust industrial infrastructure, ensuring visibility into supply chains, and providing favorable financial and regulatory environments for on- and friend-shoring of production. This is a project that cannot be done by the public or the private sector alone. Nor is the US likely to accomplish it absent carefully crafted shared partnerships with allies and partners across both the Atlantic and the Pacific.

The opportunity to support today’s drones may have passed, but we do have the chance to build a strong industrial base to support tomorrow’s most critical technologies—not simply the eye-catching finished assemblies of autonomous vehicles, satellites, or robots but also their essential components. This will require attention to our manufacturing capabilities, our supply chains, and the materials that are the essential inputs. Alongside a shift in emphasis to our own domestic industrial base must come a willingness to plan and partner more effectively with allies and partners.

If we do so, we will transform decades of US and allied support for foundational science and technology into tomorrow’s industrial base vital for economic prosperity and national security. But to truly take advantage of this opportunity, we need to value and support our shared, long-term economic security. And this means rewarding patient investment in projects that take a decade or more, incentivizing high-capital industrial activity, and maintaining a determined focus on education and workforce development—all within a flexible regulatory framework.



Original Submission

posted by hubie on Sunday August 04, @06:38AM
from the pitching-to-contact-or-to-the-injured-list? dept.

I've written previously about how Statcast data is changing professional baseball, but the application of the data has caused at least one very adverse effect: being a pitcher in today's game is bad for your health.

Two of the ways to be an effective pitcher are to generate a lot of swings and misses, and to induce a lot of poor contact. Poor contact means balls that are hit with low exit velocities, or at very high or low launch angles, and these disproportionately result in outs. Statcast data shows that pitchers can achieve this by throwing at high velocities and with a lot of vertical or lateral movement on their pitches. The pitch movement is achieved by spinning the ball at a high rotation rate, and the Magnus effect creates a pressure gradient force across the baseball that deflects it away from its original trajectory. Fastballs tend to have backspin, which imparts an upward acceleration. However, curveballs spin forward and have a downward acceleration, and it's also possible to generate lateral movement. The direction and amount of movement on a pitch is also sometimes referred to as its shape.
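
As a rough, back-of-the-envelope illustration of how spin-induced lift turns into movement, here is a minimal sketch using the standard lift-force estimate for a spinning ball. Every number in it is an assumption chosen for illustration (a roughly 95 mph fastball, an assumed lift coefficient of 0.2, standard air density); none of these values come from Statcast or the article.

    import math

    # Order-of-magnitude estimate of Magnus deflection on a fastball.
    # All constants are illustrative assumptions, not Statcast data.
    rho = 1.2          # air density, kg/m^3
    v = 42.5           # pitch speed, m/s (~95 mph)
    m = 0.145          # baseball mass, kg
    r = 0.037          # baseball radius, m
    C_L = 0.2          # assumed lift coefficient for a typical spin rate
    distance = 18.4    # mound-to-plate distance, m (60.5 ft)

    A = math.pi * r**2                  # cross-sectional area
    F = 0.5 * rho * v**2 * A * C_L      # Magnus (lift) force
    a = F / m                           # acceleration due to that force
    t = distance / v                    # approximate flight time
    deflection = 0.5 * a * t**2         # movement relative to a spin-free path

    print(f"~{a:.1f} m/s^2 of Magnus acceleration, ~{100 * deflection:.0f} cm of movement")

Even with these rough inputs, the result comes out to tens of centimeters of movement over the flight of the pitch, which is the right order of magnitude for the "shape" Statcast measures.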

The desire for higher velocity and spin rates has led to the rise of "pitching labs" that develop training programs that are very effective at increasing arm strength, improving pitching mechanics, and raising the spin rate of pitches. This comes at a price, however, which is more stress on a pitcher's arm. Major League Baseball (MLB) teams have tried to account for this by allowing pitchers to throw fewer pitches per game and giving them more rest between outings. The added rest helps pitchers consistently throw with high velocity and spin rates, at least for a while. But all of this added stress seems to have a cumulative effect on a pitcher's elbow. The weakest point is often the ulnar collateral ligament (UCL), and a partially or completely torn UCL has become an increasingly common pitching injury.

Prior to the increased focus on pitch velocity and shape, high pitch counts were generally considered the biggest factor in UCL injuries. However, the data show an upward trend in fastball velocity in recent years corresponding with a large increase in elbow injuries. As this YouTube video from WIRED shows, throwing a fastball at the hardest velocities seen in MLB places an incredible amount of strain on a pitcher's elbow to the point that it exceeds what the UCL can withstand. Small tears form in the UCL from the forces needed to throw a pitch that hard, and the long-term effect of continuing to pitch under these conditions is often a ruptured ligament.

Several decades ago, a torn UCL was generally a career-ending injury. In 1974, Dodgers' pitcher Tommy John was the first baseball player to undergo a UCL reconstruction, which involves grafting a tendon in place of the UCL, taking the tendon from elsewhere in the body or from a donor. The procedure has become known as "Tommy John surgery" and has a high success rate, though with a long recovery time. However, continuing to pitch with high velocity and spin rates has led to the injury recurring in some pitchers a few years later and requiring a second surgery. There is also evidence that high spin rates place a high level of stress on the elbow and are correlated with arm injuries. MLB also imposes a pitch clock, limiting the amount of time a pitcher can rest between pitches. Although the pitch clock improves the pace of games, it has also been cited as a potential injury risk.

The obvious question is why pitchers would be willing to throw pitches at high velocities and spin rates knowing that the result is likely Tommy John surgery. The answer is that there are only so many spots on a major league roster available, and if one pitcher isn't willing to assume that risk, someone else will. The best starting pitchers get massive contracts that pay tens of millions of dollars per year, so there's a lot of money potentially available for those willing to accept the high risk of injuries. Even at lower levels, pitchers know that if they want to be successful, they need to be able to throw the ball hard. There has even been a large increase in youth pitchers having UCL injuries and undergoing Tommy John surgery. Some MLB pitchers like Josh Hader and Garrett Crochet have tried to impose their own limits on how teams can use them, a move that has been somewhat controversial.

Fortunately, the same data that allows us to link pitch velocity and spin rate with effectiveness may also offer a solution to reduce injuries. Tracking pitch velocity and spin rate can allow us to determine how frequently pitchers are throwing pitches that contribute most to UCL injuries. One proposal is to track the number of high-risk pitches thrown by each pitcher and to impose a cap on a pitcher's innings in a season, progressively lowering that cap for pitchers who throw more high-risk pitches. Part of a pitcher's value to a team is their availability. If a pitcher is unavailable because they've reached their innings cap, they're less valuable to a team, providing an incentive to reduce the number of pitches thrown at high velocity and perhaps high spin rates. The proposed rule in the linked article focuses on fastballs, but a similar strategy could be applied to other high-risk pitches.
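
To make the idea concrete, here is a minimal sketch of one way such a progressive cap could be computed. The base cap, the 20% threshold, and the penalty schedule are hypothetical placeholders for illustration, not the numbers from the linked proposal.

    # Hypothetical progressive innings cap: the larger the share of high-risk
    # pitches, the lower the cap. All numbers are illustrative placeholders.
    def innings_cap(high_risk_pitches: int, total_pitches: int,
                    base_cap: float = 180.0) -> float:
        if total_pitches == 0:
            return base_cap
        high_risk_share = high_risk_pitches / total_pitches
        # Shave 10 innings off the cap for every 10% of pitches above a 20% share,
        # with a floor so the cap never drops below 100 innings.
        excess = max(0.0, high_risk_share - 0.20)
        return max(100.0, base_cap - 100.0 * excess)

    print(innings_cap(high_risk_pitches=450, total_pitches=1500))  # 30% share -> 170.0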


Original Submission

posted by hubie on Sunday August 04, @01:55AM

Arthur T Knackerbracket has processed the following story:

Do you have your VMware ESXi hypervisor joined to Active Directory? Well, the latest news from Microsoft serves as a reminder that you might not want to do that given the recently patched vulnerability that has security experts deeply concerned.

CVE-2024-37085 only carries a 6.8 CVSS rating, but has been used as a post-compromise technique by many of the world's most high-profile ransomware groups and their affiliates, including Black Basta, Akira, Medusa, and Octo Tempest/Scattered Spider.

The vulnerability allows attackers who have the necessary privileges to create AD groups – privileges that don't necessarily require being an AD admin – to gain full control of an ESXi hypervisor.

This is bad for obvious reasons. Having unfettered access to all running VMs and critical hosted servers offers attackers the ability to steal data, move laterally across the victim's network, or just cause chaos by ending processes and encrypting the file system.

The "how" of the exploit is what caused such a stir in cyber circles. There are three ways of exploiting CVE-2024-37085, but the underlying logic flaw in ESXi enabling them is what's attracted so much attention.

Essentially, if an attacker was able to add an AD group called "ESX Admins," any user added to it would by default be considered an admin.

That's it. That's the exploit.
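
For defenders, one obvious first check is whether such a group already exists in the directory and who is in it. Here is a minimal detection sketch using the ldap3 Python library; the domain, hostname, and credentials are hypothetical placeholders, and only the "ESX Admins" group name comes from the vulnerability description.

    from ldap3 import ALL, Connection, NTLM, Server

    # Hypothetical connection details; replace with your own environment.
    server = Server("ldaps://dc01.example.com", get_info=ALL)
    conn = Connection(server, user="EXAMPLE\\auditor", password="...",
                      authentication=NTLM, auto_bind=True)

    # Look for the default group name that CVE-2024-37085 abuses.
    conn.search("DC=example,DC=com",
                "(&(objectCategory=group)(cn=ESX Admins))",
                attributes=["member", "whenCreated"])

    if not conn.entries:
        print("No 'ESX Admins' group found.")
    for entry in conn.entries:
        print(entry.entry_dn, entry.whenCreated)
        for member in entry.member:
            print("  member:", member)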

[...] Broadcom said in a security advisory that it already issued a patch for CVE-2024-37085 on June 25, but only updated Cloud Foundation as recently as July 23, which is perhaps why Microsoft's report only just went live.

Jake Williams, VP of research and development at Hunter Strategy and IANS faculty member, was critical of Broadcom's approach to security, especially with regard to the severity it assigned the vulnerability.

[...] "I can only conclude Broadcom is not serious about security. I don't know how you conclude anything else. Oh also, there are no patches planned for ESXi 7.0."

Many commentators have questioned why an organization would join their ESXi hosts to AD in the first place, despite it being a relatively common practice.

"Why are ESX servers joined with an active directory in the first place? Because it is convenient to manage admin access to servers using a centralized platform in large corporations," Dr Martin J Kraemer, security awareness advocate at KnowBe4, told The Register

"This is very common but also creates challenges. In many environments, the AD itself might run on a VM. Cold boot can be a nightmare. A chicken and egg problem. How can you start ESX without AD while AD runs on ESX? Admins must think about this. A well-known challenge.

[...] "Over the last year, we have seen ransomware actors targeting ESXi hypervisors to facilitate mass encryption impact in few clicks, demonstrating that ransomware operators are constantly innovating their attack techniques to increase impact on the organizations they target," it said.

Microsoft also said that ESXi hypervisors often fly further under the radar in security operations centers (SOCs) because security solutions often don't have the necessary visibility into ESXi, potentially allowing attackers to go undetected for longer periods of time.

Because of the destruction a successful ESXi attack could cause, attacks have risen sharply. In the past three years, the targeting of ESXi hypervisors has doubled.

Microsoft recommends that all ESXi users install the available patches and scrub up their credential hygiene to prevent future attacks, as well as use a robust vulnerability scanner if they don't already.


Original Submission

posted by hubie on Saturday August 03, @09:09PM
from the Politics-of-Politics dept.

From ScienceBlog: A comprehensive analysis of 24 state-of-the-art Large Language Models (LLMs) has uncovered a significant left-of-center bias in their responses to politically charged questions. The study, published in PLOS ONE, sheds light on the potential political leanings embedded within AI systems that are increasingly shaping our digital landscape.

The underlying paper at PLOS One: https://journals.plos.org/plosone/article?id=10.1371/journal.pone.0306621

The researcher used a variety of tests of political alignment to assess the bias of some Large Language Models (LLMs) and found that they exhibited a left-of-center bias. To discover whether that bias can be affected by changing the training data, versions of LLMs were trained on selected sources, producing biases to order.

Here's a question for the community: Is the 'centerpoint' of political bias, as judged by these tests, arbitrary and reflective of the gamut of bias that is accepted as normal at this time? Is that centerpoint an absolute that can be used as a reference, or is it simply an artifact of how the political universe is currently understood? It seems to me that the phase space it exists in is limited by the kinds of political organizations which are present in the world today, and that there might be valid solutions which have not yet been explored.


Original Submission

posted by hubie on Saturday August 03, @04:22PM

Arthur T Knackerbracket has processed the following story:

IBM’s latest Cost of a Data Breach report found that severe staffing shortages are linked to higher data breach costs, while AI is being used to significantly reduce the average cost of a breach.

[...] The company’s latest report found that the global average cost of a data breach from March 2023 to February 2024 was $4.88m, an increase of 10pc compared to the previous year. IBM attributed the cost spike to lost business as a result of a breach, along with post-breach customer and third-party response costs.

The latest Cost of a Data Breach report also shows that the impacts of data breaches are becoming more severe for businesses, as 70pc of breached organisations reported that a breach caused significant or very significant disruptions. The after-effects are also rising, as recovery takes more than 100 days for most of the breached organisations that were able to fully recover.

Nearly half of all breaches involved customers’ personally identifiable information, which can include tax identification numbers, emails, phone numbers and home addresses. Breaches involving stolen or compromised credentials took the longest to identify and contain of any attack vector, taking an average of 292 days.

Kevin Skapinetz, IBM Security VP of strategy and product design, said businesses are caught in a “continuous cycle of breaches, containment and fallout response”.

“This cycle now often includes investments in strengthening security defences and passing breach expenses on to consumers – making security the new cost of doing business,” Skapinetz said.

The IBM report suggests that severe staffing shortages are linked to higher data breach costs – more than half of the 604 organisations studied had severe or high-level staffing shortages last year.

Businesses with high levels of staffing issues had an average data breach cost of €5.28m, compared to €3.66m for businesses with lower levels. This trend may be reduced in the near future, as more organisations said they are planning to increase security budgets compared to last year.

IBM’s 2023 report suggested that AI and automation had the biggest impact on the speed of breach identification and containment, showing the role this technology was beginning to play in the cybersecurity sector.

[...] Many experts have spoken about the impact AI will have on the cybersecurity sector, for both defenders and attackers. BT threat intelligence specialist Catherine Williams described AI as a “double-edged sword” for the cybersecurity sector.


Original Submission

posted by hubie on Saturday August 03, @11:37AM
from the wind-me-up dept.

Arthur T Knackerbracket has processed the following story:

An international team of scientists, including two researchers who now work in the Center for Advanced Sensor Technology (CAST) at UMBC, has shown that twisted carbon nanotubes can store three times more energy per unit mass than advanced lithium-ion batteries. The finding may advance carbon nanotubes as a promising solution for storing energy in devices that need to be lightweight, compact, and safe, such as medical implants and sensors. The research was published recently in the journal Nature Nanotechnology.

[...] The researchers studied single-walled carbon nanotubes, which are like straws made from pure carbon sheets only 1-atom thick. Carbon nanotubes are lightweight, relatively easy to manufacture, and about 100 times stronger than steel. Their amazing properties have led scientists to explore their potential use in a wide range of futuristic-sounding technology, including space elevators.

To investigate carbon nanotubes' potential for storing energy, the UMBC researchers and their colleagues manufactured carbon nanotube "ropes" from bundles of commercially available nanotubes. After pulling and twisting the tubes into a single thread, the researchers then coated them with different substances intended to increase the ropes' strength and flexibility.

The team tested how much energy the ropes could store by twisting them up and measuring the energy that was released as the ropes unwound. They found that the best-performing ropes could store 15,000 times more energy per unit mass than steel springs, and about three times more energy than lithium-ion batteries.
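
To get a feel for what those ratios imply in absolute terms, here is a minimal sketch that converts them into approximate gravimetric energy densities. The 250 Wh/kg baseline for advanced lithium-ion cells is an assumed, typical figure used for illustration, not a number from the study.

    # Convert the reported ratios into rough energy-per-mass figures.
    # The 250 Wh/kg lithium-ion baseline is an assumption, not from the paper.
    li_ion_wh_per_kg = 250
    li_ion_j_per_kg = li_ion_wh_per_kg * 3600             # ~0.9 MJ/kg

    nanotube_rope_j_per_kg = 3 * li_ion_j_per_kg          # "three times" Li-ion
    implied_steel_spring_j_per_kg = nanotube_rope_j_per_kg / 15_000

    print(f"Twisted nanotube ropes: ~{nanotube_rope_j_per_kg / 1e6:.1f} MJ/kg")
    print(f"Implied steel spring:   ~{implied_steel_spring_j_per_kg:.0f} J/kg")

The implied figure for steel springs, a couple of hundred joules per kilogram, is roughly in line with typical mechanical springs, which is a useful sanity check that the two ratios hang together.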

The stored energy remains consistent and accessible at temperatures ranging from -76 to +212 °F (-60 to +100 °C). The materials in the carbon nanotube ropes are also safer for the human body than those used in batteries.

"Humans have long stored energy in mechanical coil springs to power devices such as watches and toys," Kumar Ujjain says. "This research shows twisted carbon nanotubes have great potential for mechanical energy storage, and we are excited to share the news with the world."

Journal information: Nature Nanotechnology


Original Submission

posted by janrinok on Saturday August 03, @06:49AM
from the corporate-schadenfreude dept.

https://arstechnica.com/gadgets/2024/07/reddit-ceo-stands-by-change-that-blocks-most-non-google-search-engines/

Reddit CEO Steve Huffman is standing by Reddit's decision to block companies from scraping the site without an AI agreement.

Last week, 404 Media noticed that search engines that weren't Google were no longer listing recent Reddit posts in results. This was because Reddit updated its Robots Exclusion Protocol file (robots.txt) to block bots from scraping the site. The file reads: "Reddit believes in an open Internet, but not the misuse of public content." Since the news broke, OpenAI announced SearchGPT, which can show recent Reddit results.
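
The file at the center of this dispute is part of the ordinary Robots Exclusion Protocol. As a minimal sketch of how a well-behaved crawler is expected to consult it, the following uses Python's standard urllib.robotparser; the user-agent strings are illustrative, and nothing technically forces a crawler to honor the answer, which is exactly the point of contention.

    import urllib.robotparser

    rp = urllib.robotparser.RobotFileParser()
    rp.set_url("https://www.reddit.com/robots.txt")
    rp.read()

    # Illustrative user-agent strings; real crawlers may identify differently.
    for agent in ["Googlebot", "Bingbot", "SomeOtherCrawler"]:
        allowed = rp.can_fetch(agent, "https://www.reddit.com/r/technology/")
        print(f"{agent}: {'allowed' if allowed else 'disallowed'}")
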
[...]
In an interview with The Verge today, Huffman stood by the changes that led to Google temporarily being the only search engine able to show recent discussions from Reddit. Reddit and Google signed an AI training deal in February said to be worth $60 million a year. It's unclear how much Reddit's OpenAI deal is worth.
[...]
Per The Verge, Huffman claimed that Microsoft, Anthropic, and Perplexity haven't been negotiating. The three companies haven't commented on Huffman's interview.

"[It's been] a real pain in the ass to block these companies," Huffman told The Verge.
[...]
A Microsoft spokesperson told me last week that "Microsoft respects the robots.txt standard and we honor the directions provided by websites that do not want content on their pages to be used with our generative AI models."
[...]
Huffman also reportedly made reference to a June CNBC interview where Mustafa Suleyman, CEO of Microsoft AI, said: "I think that with respect to content that is already on the open web, the social contract of that content since the '90s has been that it is fair use. Anyone can copy it, re-create with it, reproduce with it. That has been freeware, if you like. That's been the understanding." Suleyman added that his comment didn't refer to certain types of web content, like news organizations.

"We've had Microsoft, Anthropic, and Perplexity act as though all of the content on the internet is free for them to use. That's their real position," Huffman said.

Related stories on SoylentNews:
Reddit Faces New Reality After Cashing in on its IPO - 20240328
Reddit Aims for $6.4bn Valuation Ahead of Initial Public Offering - 20240313
Reddit Sells Training Data to Unnamed AI Company Ahead of IPO - 20240223
Reddit is Removing Ability to Opt Out of Ad Personalization Based on Your Activity on the Platform - 20231004
Reddit Beats Film Industry, Won't Have to Identify Users Who Admitted Torrenting - 20230803
No Apologies as Reddit Halfheartedly Tries to Repair Ties With Moderators - 20230722
Ongoing Reddit Woes: Blackout Explained, Threatened Hacker Leak, Creative Continuing Protests - 20230620
Reddit Rollup: IPO Dreams and Developer Discontent - 20230612


Original Submission

posted by janrinok on Saturday August 03, @02:03AM
from the enjoy-your-self-immolation-crowdstrike dept.

CrowdStrike has sent a DMCA takedown notice to parody site ClownStrike, a clear abuse of United States copyright law, as the site in question is undoubtedly covered by fair use. Editor: See first link for more detail.

It is unfortunately well known that the DMCA is used by corporate cyberbullies to take down content that they disagree with but that is otherwise legal. The counternotice system is also hilariously ineffective. The DMCA requires service providers to "act expeditiously to remove or disable access to the infringing material," yet it gives those same "service providers" 14 days to restore access in the event of a counternotice! The DMCA, like much American legislation, is heavily biased towards corporations instead of the actual, living, breathing citizens of the country.

It's absolutely asinine and I would love absolutely nothing more than to have a lawsuit "win" against CrowdStrike. That would be absolutely amazing for marketing! Especially given the timing of such events...

Additionally, using the Digital Millennium Copyright Act to attempt to take down a parody site for trademark infringement is absolutely hilarious.

There are several ways that anyone is allowed to use a trademark belonging to "others." This is considered "Fair Use." Fair Use is an important aspect of trademark and copyright law. It is a right to use trademarks and copyrighted works for parody, criticism, transformative works, news reporting / journalism, education, etc. Corporate cyberbullies don't like that anyone else has rights. Again, I don't care, because the only thing that matters is the law, and what a court thinks about it.


Original Submission

posted by hubie on Friday August 02, @09:20PM

Arthur T Knackerbracket has processed the following story:

Motion at speeds beyond the speed of light is one of the most controversial issues in physics. Hypothetical particles that could move at superluminal speeds, called tachyons (from the Greek tachýs — fast, quick), are the ‘enfant terrible’ of modern physics. Until recently, they were widely regarded as creations that do not fit into the special theory of relativity.

Until now, at least three reasons for the non-existence of tachyons within quantum theory were known. The first: the ground state of the tachyon field was supposed to be unstable, which would mean that such superluminal particles would form ‘avalanches’. The second: a change in the inertial observer was supposed to lead to a change in the number of particles observed in his reference system, yet the existence of, say, seven particles cannot depend on who is looking at them. The third reason: the energy of the superluminal particles could take on negative values.
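
For context, the textbook route by which tachyons enter relativity (this is the standard picture, not something taken from the new paper) is the usual energy-momentum relation with an imaginary mass parameter:

    E^2 = p^2 c^2 + m^2 c^4, \qquad m = i\mu \ (\mu > 0)
    \;\Longrightarrow\; E = \sqrt{p^2 c^2 - \mu^2 c^4}, \qquad v = \frac{p c^2}{E} > c

The energy is real only for momenta p ≥ μc, and because the four-momentum of such a particle is spacelike, a change of inertial frame can flip the sign of E, which is the negative-energy difficulty listed as the third reason above.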

[...] It turned out that the ‘boundary conditions’ that determine the course of physical processes include not only the initial state but also the final state of the system. The results of the international team of researchers have just been published in the prestigious journal Physical Review D.

To put it simply: in order to calculate the probability of a quantum process involving tachyons, it is necessary to know not only its past initial state but also its future final state. Once this fact was incorporated into the theory, all the difficulties mentioned earlier completely disappeared and tachyon theory became mathematically consistent. “It’s a bit like internet advertising — one simple trick can solve your problems,” says Andrzej Dragan, chief inspirer of the whole research endeavor.

“The idea that the future can influence the present instead of the present determining the future is not new in physics. However, until now, this type of view has at best been an unorthodox interpretation of certain quantum phenomena, and this time we were forced to this conclusion by the theory itself. To ‘make room’ for tachyons we had to expand the state space,” concludes Dragan.

The authors also predict that the expansion of the boundary conditions has its consequences: a new kind of quantum entanglement appears in the theory, mixing past and future, which is not present in conventional particle theory. The paper also raises the question of whether tachyons described in this way are purely a ‘mathematical possibility’ or whether such particles are likely to be observed one day.

According to the authors, tachyons are not only a possibility but are, in fact, an indispensable component of the spontaneous breaking process responsible for the formation of matter. This hypothesis would mean that Higgs field excitations, before the symmetry was spontaneously broken, could travel at superluminal speeds in the vacuum.

Reference: “Covariant quantum field theory of tachyons” by Jerzy Paczos, Kacper Dębski, Szymon Cedrowski, Szymon Charzyński, Krzysztof Turzyński, Artur Ekert and Andrzej Dragan, 9 July 2024, Physical Review D. DOI: 10.1103/PhysRevD.110.015006


Original Submission

posted by janrinok on Friday August 02, @04:31PM

Arthur T Knackerbracket has processed the following story:

The AI Act is finally here and big changes are on the way. Here are the key details of the Act and the tips businesses should heed before its full arrival.

The EU’s AI Act – its landmark regulation to rein in the growing power of artificial intelligence – has officially entered into force today (1 August), heralding big changes for Big Tech.

The Act has been in development for years; it was first discussed in 2021 and altered in recent years with the sudden rise of generative AI technology. The Act has also been put under heavy scrutiny – challenges from member states towards the end of 2023 made it seem like the Act could collapse before coming to fruition.

But after delays, adjustments and multiple landslide votes, the AI Act is finally here. The changes won’t be felt immediately – it will be years until all of the rules come into effect – but this will give businesses and member states time to prepare for the Act’s full arrival.

Simply put, the AI Act is an attempt to balance managing the risks of this technology while letting the EU benefit from its potential. It has been argued that this is the most robust and detailed form of AI regulation in the world, which could influence legislation in other parts of the world.

The Act is designed to regulate AI technology through a risk-based approach – the riskier an AI application is, the more rules that apply to it. Minimum risk systems such as spam filters and recommender systems do not face any obligations under the AI Act.

Meanwhile, high-risk applications such as AI systems used for recruitment, AI-based loan assessments or autonomous robots will face much stricter requirements, including human oversight, high-quality data sets and cybersecurity. Some systems are banned entirely, such as emotion recognition systems used at the workplace.

The AI Act also introduces rules for “general-purpose AI models”, which are highly capable AI models that are designed to perform a wide variety of tasks such as generating human-like text – think ChatGPT and similar chatbots.

The AI Act won’t be felt until six months, when prohibitions will apply against unacceptable-risk AI applications. The rules for general-purpose AI models will apply one year from now, while the majority of rules of the AI Act will start applying on 2 August 2026.

Meanwhile, EU member states have until 2 August 2025 to designate “national competent authorities”, which will oversee the application of the AI Act and carry out market surveillance activities.

With AI making its way into so many use cases, it will be important for businesses of all sizes to consider the type of AI systems they are using and where they fall into the AI Act’s risk tiers. Phil Burr, head of product at Lumai, said the biggest risk businesses face is ignoring the Act.

“The good news is that the Act takes a risk-based approach and, given that the vast majority of AI will be minimal or low-risk, the requirements on businesses using AI will be relatively small,” Burr said. “It’s likely to be far less than the effort required to implement the GDPR regulations, for example.

“The biggest problem for compliance is the need to document and then perform regular assessments to ensure that the AI risks – and therefore requirements – haven’t changed. For the majority of businesses there won’t be a change in risk, but businesses at least need to remember to perform these.”

While businesses have plenty of time to prepare, the road ahead is not clear for them. Forrester principal analyst Enza Iannopollo noted that firms don’t have any pre-existing experience of complying with these types of rules, which adds “complexity to the challenge”.

“Right now, it’s crucial that organisations ensure they understand what theirs and their providers’ obligations are in order to be compliant on time,” Iannopollo said. “This is the time for organisations to map their AI projects, classify their AI systems and risk assess their use-cases.

“They also need to execute a compliance roadmap that is specific to the amount and combination of use-cases they have. Once this work is done, every company will have a compliance roadmap that is unique to them.”

To bridge the period between now and the full implementation of the Act, the European Commission has launched the ‘AI Pact’, an initiative for AI developers to voluntarily adopt key obligations of the Act ahead of its legal deadlines.

The EU has been introducing stronger penalties for breaches in its more recent legislation, with the Digital Markets Act and Digital Services Act carrying heavy fines for non-compliance.

The AI Act is no exception to this approach, as companies that breach the Act could face fines of up to 7pc of their global annual turnover for violations of banned AI applications. They will also face fines of up to 3pc for violations of other obligations and up to 1.5pc for supplying incorrect information.
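
As a quick illustration of what those percentages mean in absolute terms, here is a minimal sketch applying the three penalty tiers to a hypothetical company; the EUR 10bn turnover figure is invented purely for illustration.

    # Hypothetical maximum AI Act fines for an illustrative company with
    # EUR 10bn in global annual turnover (the turnover figure is made up).
    turnover_eur = 10_000_000_000
    tiers = {
        "banned AI applications": 0.07,
        "other obligations": 0.03,
        "supplying incorrect information": 0.015,
    }
    for violation, rate in tiers.items():
        print(f"{violation}: up to EUR {turnover_eur * rate / 1e9:.2f}bn")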

[...] “For reference, GDPR caps maximum fines to 4pc of annual turnover, whereas EU competition law caps this at 10pc,” Koskinen said. “This comparison shows a clear movement in regulatory enforcement for the AI Act, as the maximum fines inch closer to those imposed on anticompetitive behaviour.

“As businesses around the world look to Europe, the AI Act’s requirements will lead the way in responsible AI innovation and governance, while ensuring organisations are prepared for its rapidly approaching enforcement.”


Original Submission

posted by janrinok on Friday August 02, @12:54PM

Just to give you advance notice that the recurring problem with the renewal of SSL certificates is due to occur again on Monday 5 Aug.

Nobody in the new team has the necessary access to, or knowledge of, the current hardware configuration, and control remains with NCommander. The transfer of assets has been initiated but, as one of the two members of the current Board is out of the country, everything has temporarily ground to a halt. We cannot reconfigure the existing structure as legally we do not yet 'own' the database or existing hardware assets.

I have requested that NCommander assist by renewing the certificates but that depends upon his availability. He has been kind enough to help in the past. There is nothing more I can do at the moment.

I know that this is easily fixed - but until the formal exchange of the assets takes place we are on very shaky ground with regards to liabilities and responsibilities.
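
For anyone who wants to check exactly when the current certificate lapses, here is a minimal sketch using Python's standard ssl and socket modules; the hostname is simply the obvious example and can be swapped for any of the site's domains.

    import socket
    import ssl
    from datetime import datetime, timezone

    def cert_not_after(host: str, port: int = 443) -> datetime:
        """Return the expiry time (UTC) of the TLS certificate served by host:port."""
        # Note: with the default context this check will fail with a verification
        # error once the certificate has actually expired, so run it beforehand.
        ctx = ssl.create_default_context()
        with socket.create_connection((host, port), timeout=10) as sock:
            with ctx.wrap_socket(sock, server_hostname=host) as tls:
                cert = tls.getpeercert()
        return datetime.fromtimestamp(ssl.cert_time_to_seconds(cert["notAfter"]),
                                      tz=timezone.utc)

    print(cert_not_after("soylentnews.org"))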

posted by janrinok on Friday August 02, @11:44AM
from the thankfully-never-used dept.

https://coldwar-ct.com/Home_Page_S1DO.html

The Cheshire AT&T facility is an underground complex originally built in 1966. It was an underground terminal and repeater station for the hardened analog L4 carrier cable (coax) that ran from Miami to New England carrying general toll circuits and critical military communication circuits. It reportedly housed an AUTOVON 4-wire switch as part of the switching fabric of that critical global military communications network. Cheshire also connected via terrestrial microwave to the major, semi-hardened AT&T Durham station, which linked to many other sites including paths to New London (Navy sub base) and to Green Hill, RI, to meet a transatlantic cable to Europe.


Original Submission

posted by janrinok on Friday August 02, @06:53AM
from the dumpster-fire dept.

https://arstechnica.com/tech-policy/2024/07/amazon-forced-to-recall-400k-products-that-could-kill-electrocute-people/

Amazon failed to adequately alert more than 300,000 customers to serious risks—including death and electrocution—that US Consumer Product Safety Commission (CPSC) testing found with more than 400,000 products that third parties sold on its platform.
[...]
Instead of recalling the products, which were sold between 2018 and 2021, Amazon sent messages to customers that the CPSC said "downplayed the severity" of hazards.

In these messages—"despite conclusive testing that the products were hazardous" by the CPSC—Amazon only warned customers that the products "may fail" to meet federal safety standards and only "potentially" posed risks of "burn injuries to children," "electric shock," or "exposure to potentially dangerous levels of carbon monoxide."

Typically, a distributor would be required to specifically use the word "recall" in the subject line of these kinds of messages, but Amazon dodged using that language entirely.
[...]
The CPSC has additional concerns about Amazon's "insufficient" remedies. It is particularly concerned that anyone who received the products as a gift or bought them on the secondary market likely was not informed of serious known hazards. The CPSC found that Amazon resold faulty hair dryers and carbon monoxide detectors, proving that secondary markets for these products exist.

"Amazon has made no direct attempt to reach consumers who obtained the hazardous products as gifts, hand-me-downs, donations, or on the secondary market," the CPSC said.
[...]
After the CPSC's testing, Amazon stopped allowing these products to be listed on its platform, but that and other remedies were deemed insufficient. So, over the next two months, to protect the public, Amazon must now make a plan to "provide notice of the product hazards to purchasers and the public" and "incentivize the removal of these hazardous products from consumers' homes," the CPSC ordered.
[...]
To make up for "significant deficiencies" in Amazon's initial messaging, mandatory recall notices will likely include "a description of the product (including a photograph), hazard, injuries, deaths, action being taken, and remedy," provide "relevant dates and number of units" sold, and specifically use "the word 'recall' in the heading and text," the CPSC said.

Amazon's spokesperson told Ars that "in the event of a product recall in our store, we remove impacted products promptly after receiving actionable information from recalling agencies, and we continue to seek ways to innovate on behalf of our customers."

"Our recalls alerts service also ensures our customers are notified of important product safety information fast, and the recalls process is effective and efficient," Amazon's spokesperson said.

Customers can keep up with Amazon recalls in a designated safety alert section of its website.


Original Submission