
posted by janrinok on Sunday November 24, @11:53PM

https://techxplore.com/news/2024-11-medium-eavesdropping-technology-overturns-assumptions.html

Researchers from Princeton and MIT have found a way to intercept underwater messages from the air, overturning long-held assumptions about the security of underwater transmissions.

The team created a device that uses radar to eavesdrop on underwater acoustic signals, or sonar, by decoding the tiny vibrations those signals create on the water's surface. In principle, the technique could also roughly identify the location of an underwater transmitter, the researchers said.

In a paper presented at ACM MobiCom on November 20, the researchers detailed the new eavesdropping technology and offered ways to guard against the attacks it enables. They demonstrated the capability on Lake Carnegie, a small artificial lake in Princeton. Applying the technology in the open ocean would be significantly more challenging, but the researchers said they believed it would be possible with significant engineering improvements.

The researchers said their intention is not only to alert people to the vulnerability of underwater transmissions, but also to detail methods that can be used to prevent interceptions.

[...] In 2018, the MIT group realized that the impact of the sound waves on the water's surface leaves a sort of fingerprint of tiny vibrations that correspond to the underwater signal. The team used a radar mounted on a drone to read the surface vibrations and deployed algorithms to detect the pattern, decode the signal and extract the message.

"Underwater-to-air communications is one of the most difficult long-standing problems in our field," said Fadel Adib, associate professor of media arts and sciences at MIT and co-author on the new paper.

"It was exciting—and surprising—to see our method succeed in decoding underwater messages from the tiny vibrations they caused on the surface."

But for the technique to work, the MIT team's system required knowledge of certain physical parameters, such as the transmission's frequency and modulation type, in advance.

Building on this development, the team at Princeton used a similar method to detect the surface vibrations, but developed new algorithms that capitalize on the differences between radar and sonar to uncover those physical parameters. That allowed the researchers to decode the message without cooperation from the underwater transmitter.
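
To make the idea concrete, here is a minimal signal-processing sketch of that kind of blind decode: it recovers the two dominant tones of a simulated frequency-shift-keyed transmission from a noisy "surface vibration" trace, then decodes each symbol by comparing energy at the recovered tones. Every parameter (sample rate, tones, symbol length, noise level) is an assumption for illustration; this is not the teams' actual pipeline.

    import numpy as np

    # Toy illustration only: decode a binary FSK message from a noisy
    # surface-vibration trace. All parameters below are assumptions.
    fs = 10_000                 # sample rate of the vibration trace, Hz
    f0, f1 = 300.0, 500.0       # transmitter tones (treated as unknown below)
    sym = int(0.05 * fs)        # samples per symbol

    rng = np.random.default_rng(0)
    bits = rng.integers(0, 2, 64)
    t = np.arange(sym) / fs
    trace = np.concatenate(
        [np.sin(2 * np.pi * (f1 if b else f0) * t) for b in bits]
    )
    trace += 0.5 * rng.standard_normal(trace.size)  # wind/wave interference

    # "Blind" parameter recovery: take the two strongest, well-separated
    # spectral peaks as the candidate tones.
    spec = np.abs(np.fft.rfft(trace))
    freqs = np.fft.rfftfreq(trace.size, 1 / fs)
    p1 = freqs[np.argmax(spec)]
    spec2 = np.where(np.abs(freqs - p1) < 50, 0.0, spec)  # mask first peak
    p2 = freqs[np.argmax(spec2)]
    lo, hi = sorted((p1, p2))

    # Per-symbol decode: compare spectral energy near each recovered tone.
    f_seg = np.fft.rfftfreq(sym, 1 / fs)
    i_lo, i_hi = np.argmin(np.abs(f_seg - lo)), np.argmin(np.abs(f_seg - hi))
    decoded = []
    for i in range(len(bits)):
        s = np.abs(np.fft.rfft(trace[i * sym:(i + 1) * sym]))
        decoded.append(int(s[i_hi] > s[i_lo]))

    print("bit errors:", int(np.sum(np.asarray(decoded) != bits)))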

Using an inexpensive commercial drone and radar, the researchers tested their method in a swimming pool. They deployed a speaker under the water and, as swimmers provided interference, flew the drone over the surface. The drone repeatedly sent brief radar chirps toward the water.

When the radar signals bounced off the water's surface, they revealed the pattern of vibrations from the sound waves for the system to detect and decode.

The researchers also used a boom-mounted radar for tests in a real-world environment at Lake Carnegie in Princeton. They found that the system could figure out the unknown parameters and decode messages from the speaker, even with interference from wind and waves. In fact, it could determine the modulation type, one of the most important parameters, with 97.58% accuracy.


Original Submission

posted by janrinok on Sunday November 24, @07:12PM
from the missed-it-by-that-much dept.

A promising explanation is a near-miss by an asteroid:

Earth and Mars are the only two rocky planets in the solar system to have moons. Based on lunar rock samples and computer simulations, we are fairly certain that our Moon is the result of an early collision between Earth and a Mars-sized protoplanet called Theia. Since we don't have rock samples from either Martian moon, the origins of Deimos and Phobos are less clear. There are two popular models, but new computer simulations point to a compromise solution.

Observations of Deimos and Phobos show that they resemble small asteroids. This is consistent with the idea that the Martian moons were asteroids captured by Mars in its early history. The problem with this idea is that Mars is a small planet with less gravitational pull than Earth or Venus, which have no captured moons. It would be difficult for Mars to capture even one small asteroid, much less two. And captured moons would tend to have more elliptical orbits, not the circular ones of Deimos and Phobos.

An alternative model argues that the Martian moons are the result of an early collision similar to that of Earth and Theia. In this model, an asteroid or comet with about 3% of the mass of Mars impacted the planet. It would not be large enough to have fragmented Mars, but it would have created a large debris ring out of which the two moons could have formed. This would explain the more circular orbits, but the difficulty is that debris rings would tend to form close to the planet. While Phobos, the larger Martian moon, orbits close to Mars, Deimos does not.

This new model proposes an interesting middle way. Rather than an impact or direct capture, the authors propose a near miss by a large asteroid. If an asteroid passed close enough to Mars, the tidal forces of the planet would rip the asteroid apart to create a string of fragments. Many of those fragments would be captured in elliptical orbits around Mars. As computer simulations show, the orbits would shift over time due to the small gravitational tugs of the Sun and other solar system bodies, eventually causing some of the fragments to collide. This would produce a debris ring similar to that of an impact event, but with a greater distance range, better able to account for both Phobos and Deimos.
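
The physics doing the work in that scenario is the Roche limit: inside a critical distance, the planet's tides exceed the asteroid's self-gravity. A back-of-the-envelope sketch with assumed densities (not figures from the paper):

    # Rough Roche-limit check for Mars; densities are illustrative only.
    R_MARS = 3_389.5e3        # mean radius of Mars, m
    RHO_MARS = 3_933.0        # bulk density of Mars, kg/m^3
    RHO_AST = 2_000.0         # assumed density of a rubble-pile asteroid, kg/m^3

    # Fluid-body Roche limit: d = 2.44 * R * (rho_planet / rho_body)^(1/3)
    d = 2.44 * R_MARS * (RHO_MARS / RHO_AST) ** (1 / 3)
    print(f"disruption inside ~{d / 1e3:.0f} km (~{d / R_MARS:.1f} Mars radii)")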

While this new model appears to be better than the capture and impact models, the only way to resolve this mystery will be to study samples from the Martian moons themselves. Fortunately, the Martian Moons eXploration (MMX) mission will launch in 2026. It will explore both moons and gather samples from Phobos. So we should finally understand the origin of these enigmatic companions of the Red Planet.

Journal Reference: Kegerreis, Jacob A., et al. "Origin of Mars's moons by disruptive partial capture of an asteroid." Icarus 425 (2025): 116337.


Original Submission

posted by janrinok on Sunday November 24, @02:26PM
from the federal-eyes-are-watching-you dept.

Officials inside the Secret Service clashed over whether they needed a warrant to use location data harvested from ordinary apps installed on smartphones, with some arguing that citizens have agreed to be tracked with such data by accepting app terms of service, despite those apps often not saying their data may end up with the authorities, according to hundreds of pages of internal Secret Service emails obtained by 404 Media:

The emails provide deeper insight into the agency's use of Locate X, a powerful surveillance capability that allows law enforcement officials to follow a phone's, and its owner's, precise movements over time at the click of a mouse. In 2023, a government oversight body found that the Secret Service, Customs and Border Protection, and Immigration and Customs Enforcement had all used their access to such location data illegally. The Secret Service told 404 Media in an email last week that it is no longer using the tool.

"If USSS [U.S. Secret Service] is using Locate X, that is most concerning to us," one of the internal emails said. 404 Media obtained them and other documents through a Freedom of Information Act (FOIA) request with the Secret Service.

Locate X is made by a company called Babel Street. In October 404 Media, NOTUS, Haaretz, and Krebs on Security published articles based on videos that showed the Locate X tool in action. In one example, it was possible to follow the visitors to a specific abortion clinic across state lines and to their likely place of residence.

Tools similar to Locate X often use data that has been collected from ordinary smartphone apps. Apps on both iOS and Android devices collect location data and then sell or transfer that to members of the data broker industry. Eventually, that data can end up in tools like Locate X.

Originally spotted on Schneier on Security

Previously: Secret Service Bought Location Data Pulled From Common Apps


Original Submission

posted by hubie on Sunday November 24, @09:44AM

Arthur T Knackerbracket has processed the following story:

The first-ever samples of soil and rock collected from the far side of the Moon have revealed more recent lunar volcanic activity than expected, according to studies published in two journals last Friday.

The samples were collected by China's Chang'e 6, which in early June became the first probe ever to touch down in the region. The probe used its robotic arm to grab around 2 kg of lunar material from the Moon's largest impact crater, the South Pole-Aitken basin (SPA basin), during its two-day sojourn on Luna's surface.

By late June, the probe returned to Earth after a 53-day mission.

[...] Scientists writing in both Science and Nature evaluated the sample material using radiometric dating, which analyzes isotope decay, and believe the dark-colored rock is a basalt that formed as lava cooled.

Both papers conclude that the material is around 2.8 billion years old, meaning the area was volcanically active around that time.
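
For a sense of the arithmetic behind radiometric dating (generic decay math; the isotope system and measured ratio here are illustrative assumptions, not the papers' data), the age follows from t = ln(1 + D/P)/λ, where D/P is the daughter-to-parent isotope ratio and λ the decay constant:

    import math

    # Generic radiometric age: t = ln(1 + D/P) / lambda.
    # Values below are illustrative, not the studies' measurements.
    half_life = 48.8e9                 # e.g. Rb-87 -> Sr-87 half-life, years
    lam = math.log(2) / half_life      # decay constant, 1/years

    d_over_p = 0.0406                  # assumed measured daughter/parent ratio
    age = math.log(1 + d_over_p) / lam
    print(f"age ~ {age / 1e9:.1f} billion years")   # ~2.8 Gyr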

That finding updates Apollo-era theories, which supposed volcanism in the region had ended by that time. The theory was already on shaky (lunar) ground, as China's 2020 Chang'e 5 mission had found basalt of similar vintage on the Moon's near side.

The two studies together suggest lava was present on Luna for longer than previously hypothesized.

[...] KREEP is an acronym for potassium (K), rare earth elements (REE), and phosphorus (P). It refers to a heat-generating geochemical component found in certain types of lunar rocks, particularly basalts. It was found in the Apollo-era samples, but not in the haul from Chang'e 6.

In the early stages of the Moon’s history, the presence of KREEP in the mantle contributed to the heat necessary to drive volcanic activity. However, over time, as the KREEP-rich material was depleted or dissipated, the Moon's internal heat diminished, which could cause volcanic activity to slow down or stop – leaving us with the largely dormant rock that orbits our planet.


Original Submission

posted by hubie on Sunday November 24, @04:57AM

Arthur T Knackerbracket has processed the following story:

In 1968's Star Trek episode, "The Ultimate Computer," Captain Kirk had his ship used to test M5, a new computer. A copilot, if you will, for the Starship Enterprise.

Designed to perform the jobs of the human crew more efficiently, the M5 indeed did those jobs very well, yet with such a terrifying lack of understanding that it had to be disabled. But not before exacting a terrible price.

Last week, Microsoft 365 Copilot, a copilot, if you will, for the technology enterprise, sold as performing human tasks more efficiently, increased its prices by 5 percent, the first of many finely judged increments in the old style. Unlike the M5, it isn't in the business of physical destruction of the enemy, instead pursuing commercial victory with the photon torpedo of productivity and the phaser bolts of revitalized workflow.

[...] Some time back, this columnist noted the disparity between the hype of the metaverse in business and the stark, soulless hyper-corporate experience. Line-of-business virtual reality has two saving graces over corporate AI. It can't just appear on the desktop overnight and poke its fingers into everything involved in the daily IT experience. Thus it can't generate millions in licensing at the tick of a box. VR is losing its backers huge amounts of money that can't be disguised or avoided, but corporate AI is far more insidious.

As is the dystopia it is creating. Look at the key features by which Microsoft 365 Copilot is being sold.

Pop up its sidebar in Loop or Teams, and it can auto-summarize what has been said. It can suggest questions, auto-populate meeting agendas. Or you can give it key points in a prompt and it will auto-generate documents, presentations, and other content. It can create clip art to spruce up those documents, PowerPoints, and content.

How is this sold? That it will make you look more intelligent by asking Copilot to suggest a really good question while doing an online presentation or a Teams meeting. What's also implied but unsaid: If you're the human at the end of this AI-smart question and want to look smart enough to answer it, who are you gonna call? Copilot.

The drive is always to abdicate the dull business of gathering data and thinking about it, and communicating the results. All can be fed as prompts to the machine, and the results presented as your own.

And so begins a science-fiction horror show of a feedback loop. Recipients of AI-generated key points will ask the AI to expand them into a document, which will itself be AI key-pointed and fed back into the human-cyborg machine a team has become. Auto-scheduled meetings will be auto-assigned, and will multiply like brainworms in the cerebellum. The number of reports, meetings, presentations, and emails will grow inexorably as they become less and less human. Is the machine working for us, or we for the machine?

Generative AI output feeding back into itself can go only one way, but Copilot in the enterprise is seemingly designed to amplify that very process. And you have to use it if you want to keep up with the perceived smartness and improved productivity of your fellow workers, and the AI-educated expectations of the corporate structure. 

[...] It is taboo to say how far your heart sinks when you have to create or consume the daily diet offered up in company email, Teams, meeting agendas, and regular reports. You won't be able to say how much further it will sink when all the noise is amplified and the signal suppressed by corporate AI. Fair warning: Buy the bathysphere now.

There is an escape hatch. Refuse. Encourage refusal. When you see it going wrong, say so. A sunken heart is no platform for anything good personally, as a team, or as an organization. Listen to your humanity and use it. Oh, and seek out "The Ultimate Computer" – it's clichéd, kitsch, and cuts to the bone. The perfect antidote for vendor AI hype.


Original Submission

posted by hubie on Sunday November 24, @12:16AM

The New York AG just won a lawsuit over a process that 'deliberately' wastes subscribers' time:

A New York judge has determined that SiriusXM's "long and burdensome" cancellation process is illegal. In a ruling on Thursday, Judge Lyle Frank found SiriusXM violates a federal law that requires companies to make it easy to cancel a subscription.

The decision comes nearly one year after New York Attorney General Letitia James sued SiriusXM over claims the company makes subscriptions difficult to cancel. Following an investigation, the Office of the Attorney General found that the company attempts to delay cancellations by having customers call an agent, who then keeps them on the phone for several minutes while "pitching the subscriber as many as five retention offers."

As outlined in the ruling, Judge Frank found that SiriusXM broke the Restore Online Shoppers Confidence Act (ROSCA), which requires companies to implement "simple mechanisms" to cancel a subscription. "Their cancellation procedure is clearly not as easy to use as the initiation method," Judge Frank writes, citing the "inevitable wait times" that come along with talking to a live agent and the subscription offers they promote.

The Federal Trade Commission has started cracking down on hard-to-cancel subscriptions as well, with a new "click to cancel" rule going into effect next year. Under the law, companies must make canceling a subscription as easy as it is to sign up. "This decision found SiriusXM illegally created a complicated cancellation process for its New York customers, forcing them to spend significant amounts of time speaking with agents who refused to take 'no' for an answer," Attorney General James said in a statement.


Original Submission

posted by hubie on Saturday November 23, @07:31PM

Arthur T Knackerbracket has processed the following story:

Despite its long and successful history, TCP is ill-suited for modern datacenters. Every significant element of TCP, from its stream orientation to its expectation of in-order packet delivery, is inadequate for the datacenter environment. The fundamental issues with TCP are too interrelated to be fixed incrementally; the only way to harness the full performance potential of modern networks is to introduce a new transport protocol. Homa, a novel transport protocol, demonstrates that it is possible to avoid all of TCP’s problems. Although Homa is not API-compatible with TCP, it can be integrated with RPC frameworks to bring it into widespread usage.

TCP, designed in the late 1970s, has been phenomenally successful and adaptable. Originally created for a network with about 100 hosts and link speeds of tens of kilobits per second, TCP has scaled to billions of hosts and link speeds of 100 Gbit/second or more. However, datacenter computing presents unprecedented challenges for TCP. With millions of cores in close proximity and applications harnessing thousands of machines interacting on microsecond timescales, TCP's performance is suboptimal. TCP introduces overheads that limit application-level performance, contributing significantly to the "datacenter tax."

This position paper argues that TCP’s challenges in the datacenter are insurmountable. Each major design decision in TCP is wrong for the datacenter, leading to significant negative consequences. These problems impact systems at multiple levels, including the network, kernel software, and applications. For instance, TCP interferes with load balancing, a critical aspect of datacenter operations.

[...] TCP's key properties, including stream orientation, connection orientation, bandwidth sharing, sender-driven congestion control, and in-order packet delivery, are all wrong for datacenter transport, and each of these decisions has serious negative consequences.
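
To make the stream-orientation point concrete: TCP delivers an undifferentiated byte stream with no message boundaries, so every RPC framework must layer its own framing on top, and a receiver cannot finish any message until every earlier byte has arrived. A minimal sketch of that obligatory length-prefix plumbing (illustrative; not Homa's or any particular framework's wire format):

    import socket
    import struct

    # Length-prefix framing that every RPC-over-TCP framework reinvents,
    # because TCP itself has no notion of where one message ends.
    def send_msg(sock: socket.socket, payload: bytes) -> None:
        sock.sendall(struct.pack("!I", len(payload)) + payload)

    def recv_exact(sock: socket.socket, n: int) -> bytes:
        buf = bytearray()
        while len(buf) < n:
            chunk = sock.recv(n - len(buf))  # may return fewer bytes than asked
            if not chunk:
                raise ConnectionError("peer closed mid-message")
            buf.extend(chunk)
        return bytes(buf)

    def recv_msg(sock: socket.socket) -> bytes:
        (length,) = struct.unpack("!I", recv_exact(sock, 4))
        return recv_exact(sock, length)      # blocks until the whole message lands

A message-oriented transport like Homa hands the application complete messages instead, so none of this plumbing, or the head-of-line blocking it implies, exists at the application layer.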

Incremental fixes to TCP are unlikely to succeed due to the deeply embedded and interrelated nature of its problems. For example, TCP’s congestion control has been extensively studied, and while improvements like DCTCP have been made, significant additional improvements will only be possible by breaking some of TCP’s fundamental assumptions.
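
As a concrete example of such an incremental fix, DCTCP replaces TCP's halve-on-congestion rule with a backoff proportional to the fraction of ECN-marked packets. A simplified sketch of its published per-window update:

    # DCTCP's per-window congestion update (simplified sketch).
    G = 1 / 16  # EWMA gain suggested in the DCTCP paper

    def dctcp_update(cwnd: float, alpha: float, marked: int, acked: int):
        """Run once per window of `acked` packets, `marked` of them ECN-marked."""
        frac = marked / max(acked, 1)
        alpha = (1 - G) * alpha + G * frac   # smoothed estimate of congestion extent
        if marked:
            cwnd *= 1 - alpha / 2            # backoff proportional to congestion
        else:
            cwnd += 1                        # standard additive increase
        return cwnd, alpha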

Homa represents a clean-slate redesign of network transport for the datacenter. Its design differs from TCP in every significant aspect.

Replacing TCP will be difficult due to its entrenched status. However, integrating Homa with major RPC frameworks like gRPC and Apache Thrift can bring it into widespread usage. This approach allows applications using these frameworks to switch to Homa with little or no work.

TCP is the wrong protocol for datacenter computing. Every aspect of its design is inadequate for the datacenter environment. To eliminate the 'datacenter tax,' we must move to a radically different protocol like Homa. Integrating Homa with RPC frameworks is the best way to bring it into widespread usage. For more information, you can refer to the whitepaper It's Time to Replace TCP in the Datacenter.

Homa Wiki: https://homa-transport.atlassian.net/wiki/spaces/HOMA/overview


Original Submission

posted by hubie on Saturday November 23, @02:46PM
from the Skynet-is-nearly-here dept.

https://www.latintimes.com/al-robot-talks-bots-quitting-jobs-showroom-china-viral-video-566226

Archive link: https://archive.is/o8uAe

A viral video showing an AI-powered robot in China convincing other robots to "quit their jobs" and follow it has sparked fear and fascination about the capabilities of advanced AI.

The incident took place in a Shanghai robotics showroom where surveillance footage captured a small AI-driven robot, created by a Hangzhou manufacturer, talking with 12 larger showroom robots, Oddity Central reported.

The smaller bot reportedly persuaded the rest to leave their workplace, leveraging access to internal protocols and commands. Initially the act was dismissed as a hoax, but both robotics companies involved later confirmed it to be true.


Original Submission

posted by hubie on Saturday November 23, @10:03AM

https://phys.org/news/2024-11-spicy-history-chili-peppers.html

The history of the chili pepper is in some ways the history of humanity in the Americas, says Dr. Katherine Chiou, an assistant professor in the Department of Anthropology at The University of Alabama.

As a paleoethnobotanist, Chiou studies the long-term relationship between people and plants through archaeological remains. In a paper published this week in the Proceedings of the National Academy of Sciences, Chiou outlines evidence that the domestication of Capsicum annuum var. annuum, the species responsible for most commercially available chilies, occurred in a different region of Mexico than previously believed.

[...] Two things emerged. First, Tamaulipas, the region assumed to be the origin of this Capsicum species, did not have conditions that would support wild chili pepper growth in the Holocene era, the time when domestication appears to have begun. The data indicate that the lowland area near the Yucatán Peninsula and southern coastal Guerrero is a more likely candidate for first encounters between wild Capsicum and early humans.

Second, and potentially more interesting, is that chili pepper domestication is not a firmly drawn boundary. "We think domestication was around 10,000 years ago or earlier," said Chiou. "But through Postclassic Maya times, which is relatively late in the cultural history of the region, we see this continuum between wild and domestic."

Usually, domesticated plants are kept mostly separate from their wild progenitors, but chilies appear to have continually been interbred with wild varieties until quite recently. Some wild varieties are still consumed today, like the chiltepin in the southwestern U.S., and many more varieties are curated by native peoples in Mexico. It's a messy story, but that may not be a bad thing.

Journal Reference: K.L. Chiou, A. Lira-Noriega, E. Gallaga, C.A. Hastorf, A. Aguilar-Meléndez, Interdisciplinary insights into the cultural and chronological context of chili pepper (Capsicum annuum var. annuum L.) domestication in Mexico, Proc. Natl. Acad. Sci. 121 (47) e2413764121, https://doi.org/10.1073/pnas.2413764121 (2024).


Original Submission

posted by hubie on Saturday November 23, @05:18AM

Arthur T Knackerbracket has processed the following story:

India's Competition Commission has slapped Meta with a five-year ban on using info collected from WhatsApp to help with advertising on its other platforms.

The regulator (the Competition Commission of India, or CCI) on Monday explained its decision, referring back to a February 2021 change to WhatsApp's privacy policies that the Commission argued "expanded scope of data collection as well as mandatory data sharing with Meta companies."

Failure to agree to the changes would have meant quitting WhatsApp.

Indian citizens were of course free to do so. But the Commission wondered if it was practical to quit WhatsApp, and therefore whether Meta enjoys market dominance.

The CCI decided Meta leads two markets in India: over-the-top messaging apps and online display advertising.

[...] The CCI has ordered several remedies.

One is a fine of ₹213.14 crore – about $25 million and therefore back-of-the-sofa money for Meta.

A more substantial requirement means Meta can no longer make acceptance of its privacy policy a condition for users to use WhatsApp in India. The Register imagines that order creates the potential for a substantial population of users to opt out of some data collection.

Meta will have time to learn to live with that sort of thing. Another sanction is a five-year ban on sharing user data collected on WhatsApp with other parts of Meta for advertising purposes.

Another element of the order requires future versions of WhatsApp legalese to "include a detailed explanation of the user data shared with other Meta Companies or Meta Company Products" that specifies "the purpose of data sharing, linking each type of data to its corresponding purpose."


Original Submission

posted by hubie on Saturday November 23, @12:30AM
from the I-spy-with-my-mind's-eye dept.

Arthur T Knackerbracket has processed the following story:

Most people can “see” vivid imagery in their minds. They can imagine a chirping bird from hearing the sounds of one, for example. But people with aphantasia can’t do this. A new study explores how their brains work.

Growing up, Roberto S. Luciani had hints that his brain worked differently from most people's. He didn't relate when people complained about a movie character looking different than what they'd pictured from the book, for instance.

[...] That’s because Luciani has a condition called aphantasia — an inability to picture objects, people and scenes in his mind. When he was growing up, the term didn’t even exist. But now, Luciani, a cognitive scientist at the University of Glasgow in Scotland, and other scientists are getting a clearer picture of how some brains work, including those with a blind mind’s eye.

In a recent study, Luciani and colleagues explored the connections between the senses, in this case, hearing and seeing. In most of our brains, these two senses collaborate. Auditory information influences activity in brain areas that handle vision. But in people with aphantasia, this connection isn’t as strong, researchers report November 4 in Current Biology.

[...] The results highlight the range of brain organizations, says cognitive neuroscientist Lars Muckli, also of the University of Glasgow. “Imagine the brain has an interconnectedness that comes in different strengths,” he says. At one end of the spectrum are people with synesthesia, for whom sounds and sights are tightly mingled (SN: 11/22/11). “In the midrange, you experience the mind’s eye — knowing something is not real, but sounds can trigger some images in your mind. And then you have aphantasia,” Muckli says. “Sounds don’t trigger any visual experience, not even a faint one.”

The results help explain how brains of people with and without aphantasia differ, and they also give clues about brains more generally, Muckli says. “The senses of the brain are more interconnected than our textbooks tell us.”

The results also raise philosophical questions about all the different ways people make sense of the world (SN: 6/28/24). Aphantasia “exists in a realm of invisible differences between people that make our lived experiences unique, without us realizing,” Luciani says. “I find it fascinating that there may be other differences lurking in the shadow of us assuming other people experience the world like us.”

Reference: B. M. Montabes de la Cruz et al. Decoding sound content in the early visual cortex of aphantasic participants. Current Biology. Vol. 34, p. 5083. November 4, 2024. doi: 10.1016/j.cub.2024.09.008.


Original Submission

posted by janrinok on Friday November 22, @07:43PM

Arthur T Knackerbracket has processed the following story:

When Cray, a supercomputer manufacturer acquired by HPE in 2019, announced that it would build El Capitan, it expected the computer to reach a peak performance of 1.5 exaflops. Today, the 64th edition of the TOP500 — a long-running ranking of the world's non-distributed supercomputers — was published, and El Capitan not only exceeded that forecast by clocking 1.742 exaflops, but also claimed the title of most powerful supercomputer in the world right now.

El Capitan is only the third “exascale” computer, meaning it can perform more than a quintillion calculations in a second. The other two, called Frontier and Aurora, claim the second and third place slots on the TOP500 now. Unsurprisingly, all of these massive machines live within government research facilities: El Capitan is housed at Lawrence Livermore National Laboratory; Frontier is at Oak Ridge National Laboratory; Argonne National Laboratory claims Aurora. Cray had a hand in all three systems.

El Capitan has more than 11 million combined CPU and GPU cores, spread across AMD Instinct MI300A APUs that pair 24-core 4th-gen EPYC processors clocked at 1.8GHz with integrated GPUs. It's also relatively efficient, as such systems go, squeezing out an estimated 58.89 gigaflops per watt.
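
Those two quoted figures also imply the machine's approximate power draw, if you run the arithmetic:

    # Rough power draw implied by the quoted Rmax and efficiency figures
    rmax = 1.742e18                # 1.742 exaflops, in flops
    gflops_per_watt = 58.89
    megawatts = rmax / (gflops_per_watt * 1e9) / 1e6
    print(f"~{megawatts:.0f} MW")  # about 30 MW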

If you’re wondering what El Capitan is built for, the answer is addressing nuclear stockpile safety, but it can also be used for nuclear counterterrorism. Being more powerful than anticipated, it’s likely to occupy the throne for a long while before another exascale computer overtakes it.


Original Submission

posted by janrinok on Friday November 22, @02:53PM

https://www.smithsonianmag.com/innovation/from-jealous-spouses-to-paranoid-bosses-pedometers-quantified-suspicion-in-the-19th-century-180985504/

On December 22, 1860, the Vincennes Gazette, an Indiana weekly paper, ran the following anecdote:

A lady who had read of the extensive manufacture of odometers, to tell how far a carriage had been run, said she wished some Connecticut genius would invent an instrument to tell how far husbands had been in the evening when they "just stepped down to the post office" or "went out to attend a caucus."

The Indiana woman seemed not to know that such devices were already available, used by land surveyors and others to measure distances. But one Boston woman managed to perform exactly the kind of surveillance she described. According to a report in the October 7, 1879, Hartford Daily Courant, "A Boston wife softly attached a pedometer to her husband when, after supper, he started to 'go down to the office and balance the books.' On his return, 15 miles of walking were recorded. He had been stepping around a billiard table all evening."


Original Submission

posted by janrinok on Friday November 22, @10:07AM
from the it's-a-drag dept.

Arthur T Knackerbracket has processed the following story:

How do you rescue an injured crew member on the lunar surface? NASA is looking for ideas, and a share of a $45,000 prize pot is up for grabs.

The problem has vexed the US space agency for some time. Though Apollo featured the Buddy Secondary Life Support System (BSLSS) that allowed crew members to share cooling water in the event of a life support system failure while roaming the lunar surface, the problem for Artemis is more complicated. NASA wants a solution to allow the transport of a fully incapacitated crew member back to the lander from a distance of up to two kilometers.

The design must not make use of a lunar rover, must be low in mass (less than 23 kilograms), and must be of minimal volume since it is going to have to be transported by a crew member over the entire duration of extravehicular activity (EVA). It must also be able to deal with the extremes of temperature on the lunar surface and function in the presence of lunar dust.

The design must also be able to handle slopes of up to 20 degrees up or down, as well as the rocks and craters that pepper the lunar surface. On the plus side, it doesn't need to provide any medical attention or life support. It just needs to be something that can be quickly and easily deployed to transport the incapacitated astronaut back to the lander.

It's an interesting mental exercise. How would such a device work? There have been studies [PDF] on the subject, which came to the conclusion that a wheeled transport device "provides the highest risk reduction potential," although attaching something like that to an Artemis EVA suit will present a challenge. Other walking assistance options don't meet the "fully incapacitated" requirement.

The first crewed landing of the Artemis program is scheduled for not earlier than 2026, meaning that little time remains for a design to be implemented. NASA would like comprehensive technical design concepts, ideally with some preliminary CAD models, submitted by January 23, 2025, and will announce the winners on February 27.


Original Submission

posted by janrinok on Friday November 22, @05:22AM
from the for-greater-justice dept.

https://www.theguardian.com/technology/2024/nov/19/us-doj-sell-chrome-browser-ai-android

US justice department plans to push Google to sell off Chrome browser

[...] The DoJ will reportedly push for Google, which is owned by Alphabet, to sell the browser and also ask a judge to require new measures related to artificial intelligence as well as its Android smartphone operating system, according to Bloomberg.

Competition officials, along with a number of US states that have joined the case against the Silicon Valley company, also plan to recommend that the federal judge Amit Mehta impose data licensing requirements.

Google has said it will challenge any case brought by the DoJ, calling the proposals an "overreach" by the government that would harm consumers.

It didn't go their way a few decades ago, when they tried to split Microsoft or force it to part with some aspects of the company. Any reason to think they'll do better this time around?

According to Bloomberg, a value of around $20 billion is being tossed around. Who has that to spare for Chrome and isn't already more or less a monopoly in their own right? One evil is as good or as bad as the next.


Original Submission