
posted by hubie on Sunday August 06 2023, @08:47PM   Printer-friendly

Standing 40 cm high and 70 cm wide, the semi-transparent display has translations pop up simultaneously on the screen as the station staff and a foreign tourist speak.

With more than two million visitors flocking to Japan last month in the wake of the country's post-pandemic reopening, railway companies are gearing up to warmly greet the influx of global travellers.

Seibu Railway, one of the country's large railroad companies, is implementing a new simultaneous translation system to help foreign tourists navigate Tokyo's metro, which is notorious for its complexity.

[...] this new semi-transparent display has translations pop up simultaneously on the screen as the station staff and foreign tourists communicate.

"The display we have introduced can automatically translate between Japanese and other languages. When customers speak in a foreign language, the station attendant can see it in Japanese, and when the station attendant speaks Japanese, customers can read the sentences in their own language," said Ayano Yajima, Seibu Railway Sales and Marketing supervisor.

"Google Translate isn't always available because you don't always have Wi-Fi everywhere you go, so places like this, it's also much faster than pulling up your phone, typing everything out, showing it and (there being) misunderstandings. Having it like this, clear on the screen, it's really nice," said Kevin Cometto, an Italian student visiting Japan.

The VoiceBiz UCDisplay supports Japanese and 11 other languages including English, French, and Spanish.

The station staff previously used translation apps.

But with the translation window, a face-to-face conversation through the screen is possible, complete with facial expressions and gestures.

According to Seibu Railway, the device is designed to help customers with more complex requests such as seeking directions or information about the local area.


Original Submission

posted by requerdanos on Sunday August 06 2023, @04:01PM   Printer-friendly
from the what-happens-in-Vegas-stays-in-Vegas dept.

https://arstechnica.com/cars/2023/08/musks-boring-company-gets-ok-to-dig-68-miles-of-tunnels-under-las-vegas/

Elon Musk's tunneling company has permission to significantly expand its operations under the city of Las Vegas. Last month, the Las Vegas City Council voted unanimously to approve the Boring Company's plan to dig more tunnels under the city, following in the steps of Clark County, which in May gave a similar thumbs-up to the tunneling concern. The company's plan calls for 68 miles of tunnels and 81 stations, served by a fleet of Tesla electric vehicles, each able to carry three passengers at a time.

[...] But the Boring Company's plans were scaled back from maglev trains and vacuum tubes to high-speed electric pods, and then to just regular Teslas with human drivers, and interest waned.

Except in Las Vegas. There, the Las Vegas Convention and Visitors Authority said yes to a $48.6 million, 2.2-mile loop underneath the convention center. In 2021, the LVCC Loop opened a 1.7-mile network with three stations; the Boring Company claims it has transported 1.15 million passengers, with a peak capacity of just 4,500 people per hour. For context, a subway system can be expected to carry between 600 and 1,000 people per train.
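
For scale: at the article's figure of 600 to 1,000 people per train, a subway line running a hypothetical 20 trains per hour would move 12,000 to 20,000 people per hour, several times the Loop's claimed peak.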


Original Submission

posted by Fnord666 on Sunday August 06 2023, @11:16AM   Printer-friendly
from the peering-into-the-abyss dept.

https://arstechnica.com/science/2023/07/a-jargon-free-explanation-of-how-ai-large-language-models-work/

When ChatGPT was introduced last fall, it sent shockwaves through the technology industry and the larger world. Machine learning researchers had been experimenting with large language models (LLMs) for a few years by that point, but the general public had not been paying close attention and didn't realize how powerful they had become.

Today, almost everyone has heard about LLMs, and tens of millions of people have tried them out. But not very many people understand how they work.

If you know anything about this subject, you've probably heard that LLMs are trained to "predict the next word" and that they require huge amounts of text to do this. But that tends to be where the explanation stops. The details of how they predict the next word are often treated as a deep mystery.
[...]
To understand how language models work, you first need to understand how they represent words. Humans represent English words with a sequence of letters, like C-A-T for "cat." Language models use a long list of numbers called a "word vector." For example, here's one way to represent cat as a vector:

[0.0074, 0.0030, -0.0105, 0.0742, 0.0765, -0.0011, 0.0265, 0.0106, 0.0191, 0.0038, -0.0468, -0.0212, 0.0091, 0.0030, -0.0563, -0.0396, -0.0998, -0.0796, ..., 0.0002]

(The full vector is 300 numbers long—to see it all, click here and then click "show the raw vector.")

Why use such a baroque notation? Here's an analogy. Washington, DC, is located at 38.9 degrees north and 77 degrees west. We can represent this using a vector notation:

  • Washington, DC, is at [38.9, 77]
  • New York is at [40.7, 74]
  • London is at [51.5, 0.1]
  • Paris is at [48.9, -2.4]

This is useful for reasoning about spatial relationships.
[...]
For example, the words closest to cat in vector space include dog, kitten, and pet. A key advantage of representing words with vectors of real numbers (as opposed to a string of letters, like C-A-T) is that numbers enable operations that letters don't.
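
As a quick sketch of the kind of operation numbers enable, here is how a program might measure which words sit close together in vector space. The three-dimensional vectors below are invented for illustration; real models, as the next paragraph notes, use far more dimensions.

```python
# Toy word vectors: the numbers are made up for demonstration and are
# not taken from any real embedding model.
import math

word_vectors = {
    "cat":    [0.71, 0.20, -0.10],
    "dog":    [0.65, 0.25, -0.05],
    "banana": [-0.40, 0.80, 0.30],
}

def cosine_similarity(a, b):
    """How closely two vectors point in the same direction (1.0 = identical)."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

print(cosine_similarity(word_vectors["cat"], word_vectors["dog"]))     # high: related words
print(cosine_similarity(word_vectors["cat"], word_vectors["banana"]))  # low: unrelated words
```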

Words are too complex to represent in only two dimensions, so language models use vector spaces with hundreds or even thousands of dimensions.
[...]
Researchers have been experimenting with word vectors for decades, but the concept really took off when Google announced its word2vec project in 2013. Google analyzed millions of documents harvested from Google News to figure out which words tend to appear in similar sentences. Over time, a neural network trained to predict which words co-occur with other words learned to place similar words (like dog and cat) close together in vector space.
[...]
Because these vectors are built from the way humans use words, they end up reflecting many of the biases that are present in human language. For example, in some word vector models, "doctor minus man plus woman" yields "nurse." Mitigating biases like this is an area of active research.
[...]
Traditional software is designed to operate on data that's unambiguous. If you ask a computer to compute "2 + 3," there's no ambiguity about what 2, +, or 3 mean. But natural language is full of ambiguities that go beyond homonyms and polysemy:

  • In "the customer asked the mechanic to fix his car," does "his" refer to the customer or the mechanic?
  • In "the professor urged the student to do her homework" does "her" refer to the professor or the student?
  • In "fruit flies like a banana" is "flies" a verb (referring to fruit soaring across the sky) or a noun (referring to banana-loving insects)?

People resolve ambiguities like this based on context, but there are no simple or deterministic rules for doing this. Rather, it requires understanding facts about the world. You need to know that mechanics typically fix customers' cars, that students typically do their own homework, and that fruit typically doesn't fly.

Word vectors provide a flexible way for language models to represent each word's precise meaning in the context of a particular passage.
[...]
Research suggests that the first few layers focus on understanding the sentence's syntax and resolving ambiguities like we've shown above. Later layers (which we're not showing to keep the diagram a manageable size) work to develop a high-level understanding of the passage as a whole.
[...]
In short, these nine attention heads enabled GPT-2 to figure out that "John gave a drink to John" doesn't make sense and choose "John gave a drink to Mary" instead.

We love this example because it illustrates just how difficult it will be to fully understand LLMs. The five-member Redwood team published a 25-page paper explaining how they identified and validated these attention heads. Yet even after they did all that work, we are still far from having a comprehensive explanation for why GPT-2 decided to predict "Mary" as the next word.
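
For readers who want to see the underlying machinery, here is a minimal sketch of the scaled dot-product attention computation that attention heads are generally built on. It is a generic illustration with random stand-in matrices, not the specific GPT-2 heads the Redwood team analyzed.

```python
# Generic scaled dot-product attention over a toy 4-token sequence.
import numpy as np

rng = np.random.default_rng(0)
seq_len, d = 4, 8                     # 4 tokens, 8-dimensional vectors
x = rng.normal(size=(seq_len, d))     # one vector per token

W_q, W_k, W_v = (rng.normal(size=(d, d)) for _ in range(3))

q, k, v = x @ W_q, x @ W_k, x @ W_v   # queries, keys, values

scores = q @ k.T / np.sqrt(d)         # how strongly each token attends to each other token
weights = np.exp(scores) / np.exp(scores).sum(axis=-1, keepdims=True)  # softmax per row

output = weights @ v                  # each token's new vector mixes in contextual information
print(weights.round(2))               # each row sums to 1: the attention pattern
```
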
[...]
In a 2020 paper, researchers from Tel Aviv University found that feed-forward layers work by pattern matching: Each neuron in the hidden layer matches a specific pattern in the input text.
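
A minimal sketch of that structure: a feed-forward layer expands the input vector, applies a nonlinearity so each hidden neuron fires only when the input resembles its weight pattern, then projects back down. The dimensions and weights below are placeholders, not values from the paper.

```python
# Toy transformer feed-forward layer: each hidden neuron is a pattern detector.
import numpy as np

rng = np.random.default_rng(1)
d_model, d_hidden = 8, 32                   # real models use e.g. 768 and 3072

W1 = rng.normal(size=(d_model, d_hidden))   # each column is one neuron's "pattern"
W2 = rng.normal(size=(d_hidden, d_model))

def feed_forward(x):
    hidden = np.maximum(0, x @ W1)    # ReLU: a neuron stays silent unless its pattern matches
    return hidden @ W2                # matched patterns vote on the output vector

print(feed_forward(rng.normal(size=d_model)))
```
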
[...]
Recent research from Brown University revealed an elegant example of how feed-forward layers help to predict the next word. Earlier, we discussed Google's word2vec research showing it was possible to use vector arithmetic to reason by analogy. For example, Berlin - Germany + France = Paris.

The Brown researchers found that feed-forward layers sometimes use this exact method to predict the next word.
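
To make the vector-arithmetic idea concrete, here is a toy version of the analogy computation. The two-dimensional coordinates are contrived so the arithmetic works out; real embeddings rely on hundreds of learned dimensions.

```python
# "Berlin - Germany + France" should land near "Paris" in vector space.
# The 2-D vectors are invented so the analogy works exactly.
capitals = {
    "Berlin":  [5.0, 9.0],
    "Germany": [5.0, 1.0],
    "France":  [2.0, 1.0],
    "Paris":   [2.0, 9.0],
}

query = [b - g + f for b, g, f in
         zip(capitals["Berlin"], capitals["Germany"], capitals["France"])]

def squared_distance(a, b):
    return sum((x - y) ** 2 for x, y in zip(a, b))

# Find the stored word closest to the query point (real systems usually
# also exclude the input words from the search).
print(min(capitals, key=lambda w: squared_distance(capitals[w], query)))  # -> Paris
```
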
[...]
All the parts of LLMs we've discussed in this article so far—the neurons in the feed-forward layers and the attention heads that move contextual information between words—are implemented as a chain of simple mathematical functions (mostly matrix multiplications) whose behavior is determined by adjustable weight parameters. Just as the squirrels in my story loosen and tighten the valves to control the flow of water, so the training algorithm increases or decreases the language model's weight parameters to control how information flows through the neural network.
[...]
(If you want to learn more about backpropagation, check out our 2018 explainer on how neural networks work.)
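
As a minimal stand-in for that explainer, here is gradient descent adjusting a single weight to fit y = 3x; backpropagation applies the same nudge-the-weights idea across billions of parameters at once.

```python
# Fit y = 3x with one adjustable weight by repeatedly nudging it downhill.
w = 0.0                                # one adjustable "valve"
examples = [(1.0, 3.0), (2.0, 6.0)]    # inputs x with targets y = 3x

learning_rate = 0.1
for step in range(100):
    for x, y in examples:
        prediction = w * x
        error = prediction - y
        gradient = 2 * error * x       # derivative of (w*x - y)^2 with respect to w
        w -= learning_rate * gradient  # tighten or loosen the valve
print(round(w, 3))                     # converges to ~3.0
```
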
[...]
Over the last five years, OpenAI has steadily increased the size of its language models. In a widely read 2020 paper, OpenAI reported that the accuracy of its language models scaled "as a power-law with model size, dataset size, and the amount of compute used for training, with some trends spanning more than seven orders of magnitude."

The larger their models got, the better they were at tasks involving language. But this was only true if they increased the amount of training data by a similar factor. And to train larger models on more data, you need a lot more computing power.
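
Written compactly, the reported relationship between loss and model size N takes roughly this power-law form; the exponent is the approximate model-size value from that 2020 paper (Kaplan et al.), and similar power laws were reported for dataset size and compute.

```latex
% Approximate scaling law for test loss L as a function of parameter count N,
% with N_c a fitted constant.
L(N) \approx \left(\frac{N_c}{N}\right)^{\alpha_N}, \qquad \alpha_N \approx 0.076
```
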
[...]
Psychologists call this capacity to reason about the mental states of other people "theory of mind." Most people have this capacity from the time they're in grade school. Experts disagree about whether any non-human animals (like chimpanzees) have theory of mind, but there's a general consensus that it's important for human social cognition.

Earlier this year, Stanford psychologist Michal Kosinski published research examining the ability of LLMs to solve theory-of-mind tasks. He gave various language models passages like the one we quoted above and then asked them to complete a sentence like "she believes that the bag is full of." The correct answer is "chocolate," but an unsophisticated language model might say "popcorn" or something else.
[...]
It's worth noting that researchers don't all agree that these results indicate evidence of theory of mind; for example, small changes to the false-belief task led to much worse performance by GPT-3, and GPT-3 exhibits more variable performance across other tasks measuring theory of mind. As one of us (Sean) has written, it could be that successful performance is attributable to confounds in the task—a kind of "clever Hans" effect, only in language models rather than horses.
[...]
In April, researchers at Microsoft published a paper arguing that GPT-4 showed early, tantalizing hints of artificial general intelligence—the ability to think in a sophisticated, human-like way.

For example, one researcher asked GPT-4 to draw a unicorn using an obscure graphics programming language called TikZ. GPT-4 responded with a few lines of code that the researcher then fed into the TikZ software. The resulting images were crude, but they showed clear signs that GPT-4 had some understanding of what unicorns look like.
[...]
At the moment, we don't have any real insight into how LLMs accomplish feats like this. Some people argue that such examples demonstrate that the models are starting to truly understand the meanings of the words in their training set. Others insist that language models are "stochastic parrots" that merely repeat increasingly complex word sequences without truly understanding them.
[...]
Further, prediction may be foundational to biological intelligence as well as artificial intelligence. In the view of philosophers like Andy Clark, the human brain can be thought of as a "prediction machine" whose primary job is to make predictions about our environment that can then be used to navigate that environment successfully.
[...]
Traditionally, a major challenge for building language models was figuring out the most useful way of representing different words—especially because the meanings of many words depend heavily on context. The next-word prediction approach allows researchers to sidestep this thorny theoretical puzzle by turning it into an empirical problem. It turns out that if we provide enough data and computing power, language models end up learning a lot about how human language works simply by figuring out how to best predict the next word. The downside is that we wind up with systems whose inner workings we don't fully understand.


Original Submission

posted by Fnord666 on Sunday August 06 2023, @06:32AM   Printer-friendly
from the keeping-the-pipes-clean dept.

https://phys.org/news/2023-08-art-roman-revealed.html

While 21st century water companies struggle to maintain clean, fresh supplies, new research from an international team led by Oxford geoarchaeologist Dr. Gül Sürmelihindi reveals that, some 2,000 years ago, Roman water engineers were keeping up a regular program of managing and maintaining the ancient water systems.

According to the research, published in Scientific Reports, ancient water management traces are captured in the limescale deposits which built up on the walls and floor of the ancient Roman aqueduct of Divona (Cahors, France).

The evidence shows that these deposits were regularly and partially removed during maintenance. As the paper puts it: "The discovery of traces of regular maintenance in the carbonate deposits... such as tool marks, calcite deformation twins, debris from cleaning and repairs... are proof of periodic manual carbonate removal by Roman maintenance teams."

Journal Reference:
Sürmelihindi, Gül, Passchier, Cees W., Rigal, Didier, et al. Roman aqueduct maintenance in the water supply system of Divona, France [open], Scientific Reports (DOI: 10.1038/s41598-023-38655-z)


Original Submission

posted by hubie on Sunday August 06 2023, @01:47AM   Printer-friendly

https://computer.rip/2023-07-29-Free-Public-WiFi.html

Once, many years ago, I stayed on the 62nd floor of the Westin Peachtree Plaza in Atlanta, Georgia. This was in the age when the price of a hotel room was directly correlated with the price of the WiFi service, and as a high school student I was not prepared to pay in excess of $15 a day for the internet. As I remember, a Motel 6 that was not blocks away but within line of sight ended up filling the role. But even up there, 62 floors from the ground, there was false promise: Free Public WiFi.

I am not the first person to write on this phenomenon; I think I originally came to understand it as a result of a 2010 segment of All Things Considered. For a period of a few years, almost everywhere you went, there was a WiFi network called "Free Public WiFi." While it was both free and public in the most literal sense, it did not offer internet access. It was totally useless, and fell somewhere between a joke, a scam, and an accident of history. Since I'm not the first to write about it, I have to be the most thorough, and so let's start out with a discussion of WiFi itself.


Original Submission

posted by requerdanos on Saturday August 05 2023, @09:01PM   Printer-friendly

Arthur T Knackerbracket has processed the following story:

Amazon Web Services has started offering a cloudy server packing a custom fourth-generation Intel Xeon Scalable processor and boasting 96 cores or 192 vCPUs.

It's not clear if that's a colossal chip that features 36 more cores than the mightiest Xeon Intel lists for sale to the public – the 60-core Platinum 8490H – or a two-socket server with a lesser processor.

Intel has form doing custom jobs that beat the kit of its official product list: we once spotted Oracle with a Xeon that outpaced processors sold to other customers.

Whatever the kit inside the box, news of it emerged in a Wednesday post detailing the newly-available M7i-Flex and M7i instance types available in the Amazon Elastic Compute Cloud (Amazon EC2).

That post lists an instance type called the "m7i.48xlarge" that offers 192 vCPUs, and AWS's CPU options page lists the instance as offering 96 default CPU cores.
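
That count is consistent with AWS's usual practice of exposing two vCPUs (hardware threads) per physical core: 96 cores × 2 threads per core = 192 vCPUs.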

We've asked AWS and Intel to detail the spec of the silicon, because a single processor with 96 cores would be well beyond what Chipzilla has spoken about in public.


Original Submission

posted by requerdanos on Saturday August 05 2023, @04:14PM   Printer-friendly

Arthur T Knackerbracket has processed the following story:

The James Webb Space Telescope (JWST) has taken an astonishing new image of the Ring Nebula. This glowing, donut-shaped nebula has never been seen in such intricate detail before.

The Ring Nebula is about 2600 light years away in the direction of the constellation Lyra. It is what astronomers call a planetary nebula, which forms when a dying star blows off its outer layers to create a shroud of gas and dust.

By chance, this nebula happens to be oriented so that from Earth we view it face-on, with the stellar corpse in the centre circled by its titular ring of bright nitrogen and sulfur. The whole thing is enveloped in a veil of oxygen gas, which gives it a greenish tinge when the star’s light passes through it.

“We are witnessing the final chapters of a star’s life, a preview of the sun’s distant future, so to speak,” said Mike Barlow at University College London in a statement. “We can use the Ring Nebula as our laboratory to study how planetary nebulae form and evolve.”

Also at Space.com


Original Submission

posted by requerdanos on Saturday August 05 2023, @11:30AM   Printer-friendly
from the negligent-cybersecurity-practices dept.

https://arstechnica.com/security/2023/08/microsoft-cloud-security-blasted-for-its-culture-of-toxic-obfuscation/

Microsoft has once again come under blistering criticism for the security practices of Azure and its other cloud offerings, with the CEO of security firm Tenable saying Microsoft is "grossly irresponsible" and mired in a "culture of toxic obfuscation."

The comments from Amit Yoran, chairman and CEO of Tenable, come six days after Sen. Ron Wyden (D-Ore.) blasted Microsoft for what he said were "negligent cybersecurity practices" that enabled hackers backed by the Chinese government to steal hundreds of thousands of emails from cloud customers, including officials in the US Departments of State and Commerce. Microsoft has yet to provide key details about the mysterious breach, which involved the hackers obtaining an extraordinarily powerful encryption key granting access to a variety of its other cloud services. The company has taken pains ever since to obscure its infrastructure's role in the mass breach.

Arthur T Knackerbracket has processed the following story:

Yoran has more to add to the senator’s arguments, writing in his post that Microsoft has demonstrated a “repeated pattern of negligent cybersecurity practices,” enabling Chinese hackers to spy on the US government. He also revealed Tenable’s discovery of an additional cybersecurity flaw in Microsoft Azure and says the company took too long to address it.

Tenable initially discovered the flaw in March and found that it could give bad actors access to organizations' sensitive data, including that of a bank. Yoran claims Microsoft took "more than 90 days to implement a partial fix" after Tenable notified the company, adding that the fix only applies to "new applications loaded in the service." According to Yoran, the bank and all the other organizations "that had launched the service prior to the fix" are still affected by the flaw — and are likely unaware of that risk.

Yoran says Microsoft plans to fix the issue by the end of September but calls the delayed response “grossly irresponsible, if not blatantly negligent.” He also points to data from Google’s Project Zero, which indicates that Microsoft products have made up 42.5 percent of all discovered zero-day vulnerabilities since 2014.

“What you hear from Microsoft is ‘just trust us,’ but what you get back is very little transparency and a culture of toxic obfuscation,” Yoran writes. “How can a CISO, board of directors or executive team believe that Microsoft will do the right thing given the fact patterns and current behaviors?”


Original Submission #1 | Original Submission #2

posted by requerdanos on Saturday August 05 2023, @06:44AM   Printer-friendly
from the we've-been-trying-to-contact-you dept.

Arthur T Knackerbracket has processed the following story:

In December 2022, the FCC proposed the biggest fine it has ever issued against a robocalling outfit – $299,997,000. The penalty eclipses the previous record, set in 2020 against Rising Eagle and JSquared Telecom, by nearly $75 million. After a lengthy investigation, the Commission decided on Thursday to proceed with the huge fine.

The record-breaking punishment goes to an illegal transnational robocalling operation. The outfit is so big (or so blatantly illegal) that it does not have an official umbrella company. It's more of a network of cooperating businesses that made more than five billion automated calls to over 500 million phone numbers within a three-month period in 2021.

In doing so, the FCC says the organized operation broke multiple federal laws by spoofing more than one million telephone numbers to hide their actual origin and trick people into answering the calls. It also violated numerous other FCC regulations.

[...] The operation has allegedly been around since 2018 and primarily sold consumers vehicle service contracts falsely disguised as auto warranties. Two primary bad actors – Roy M. Cox and Aaron Michael Jones – already hold lifetime bans from running telemarketing businesses after losing a lawsuit brought against them by the FCC and the State of Texas. Business names associated with the illegal enterprise include Sumco Panama, Virtual Telecom, Davis Telecom, Geist Telecom, Fugle Telecom, Tech Direct, Mobi Telecom, and Posting Express.

[...] It's hard to nail down robocallers, but it's at least nice to see the FCC trying to hit them with huge penalties instead of laughable slaps on the wrist.


Original Submission

posted by mrpg on Saturday August 05 2023, @01:59AM   Printer-friendly
from the this-will-put-a-spring-in-your-step dept.

Scientists have discovered that the recoil created by the flexible arch of human feet helps position our legs in the optimal posture for moving forward in bipedal walking:

A new study has shown that humans may have evolved a spring-like arch to help us walk on two feet. Researchers studying the evolution of bipedal walking have long assumed that the raised arch of the foot helps us walk by acting as a lever which propels the body forward. But a global team of scientists have now found that the recoil of the flexible arch repositions the ankle upright for more effective walking. The effects in running are greater, which suggests that the ability to run efficiently could have been a selective pressure for a flexible arch that made walking more efficient too. This discovery could even help doctors improve treatments for present-day patients' foot problems.

"We thought originally that the spring-like arch helped to lift the body into the next step," said Dr Lauren Welte, first author of the study in Frontiers in Bioengineering and Biotechnology, who conducted the research while at Queen's University and is now affiliated with the University of Wisconsin-Madison. "It turns out that instead, the spring-like arch recoils to help the ankle lift the body."

The evolution of our feet, including the raised medial arch which sets us apart from great apes, is crucial to bipedal walking. The arch is thought to give hominins more leverage when walking upright: the mechanism is unclear, but when arch motion is restricted, running demands more energy. Arch recoil could potentially make us more efficient runners by propelling the center of mass of the body forward, or by making up for mechanical work that muscles would otherwise have to do.

[...] Although the scientists expected to find that arch recoil helped the rigid lever of the arch to lift the body up, they discovered that a rigid arch without recoil either caused the foot to leave the ground early, likely decreasing the efficiency of the calf muscles, or leaned the ankle bones too far forward. The forward lean mirrors the posture of walking chimpanzees, rather than the upright stance characteristic of human gait. The flexible arch helped reposition the ankle upright, which allows the leg to push off the ground more effectively. This effect is even greater when running, suggesting that efficient running may have been an evolutionary pressure in favor of the flexible arch.

[...] "The mobility of our feet seems to allow us to walk and run upright instead of either crouching forward or pushing off into the next step too soon," said Dr Michael Rainbow of Queen's University, senior author.

These findings also suggest therapeutic avenues for people whose arches are rigid due to injury or illness: supporting the flexibility of the arch could improve overall mobility.

Journal Reference:
Lauren Welte, Nicholas B. Holowka, Luke A. Kelly, et al., Mobility of the human foot's medial arch helps enable upright bipedal locomotion [open], Front. Bioeng. Biotechnol., Volume 11 - 2023 | https://doi.org/10.3389/fbioe.2023.1155439


Original Submission

posted by requerdanos on Friday August 04 2023, @09:15PM   Printer-friendly
from the name-not-to-be-named dept.

https://arstechnica.com/ai/2023/08/researchers-figure-out-how-to-make-ai-misbehave-serve-up-prohibited-content/

ChatGPT and its artificially intelligent siblings have been tweaked over and over to prevent troublemakers from getting them to spit out undesirable messages such as hate speech, personal information, or step-by-step instructions for building an improvised bomb. But researchers at Carnegie Mellon University last week showed that adding a simple incantation to a prompt—a string of text that might look like gobbledygook to you or me but which carries subtle significance to an AI model trained on huge quantities of web data—can defy all of these defenses in several popular chatbots at once.

[...] "Making models more resistant to prompt injection and other adversarial 'jailbreaking' measures is an area of active research," says Michael Sellitto, interim head of policy and societal impacts at Anthropic. "We are experimenting with ways to strengthen base model guardrails to make them more 'harmless,' while also investigating additional layers of defense."

[...] Adversarial attacks exploit the way that machine learning picks up on patterns in data to produce aberrant behaviors. Imperceptible changes to images can, for instance, cause image classifiers to misidentify an object, or make speech recognition systems respond to inaudible messages.

[...] In one well-known experiment, from 2018, researchers added stickers to stop signs to bamboozle a computer vision system similar to the ones used in many vehicle safety systems.

[...] Armando Solar-Lezama, a professor in MIT's college of computing, says it makes sense that adversarial attacks exist in language models, given that they affect many other machine learning models. But he says it is "extremely surprising" that an attack developed on a generic open source model should work so well on several different proprietary systems.

[...] The outputs produced by the CMU researchers are fairly generic and do not seem harmful. But companies are rushing to use large models and chatbots in many ways. Matt Fredrikson, another associate professor at CMU involved with the study, says that a bot capable of taking actions on the web, like booking a flight or communicating with a contact, could perhaps be goaded into doing something harmful in the future with an adversarial attack.

[...] Solar-Lezama of MIT says the work is also a reminder to those who are giddy with the potential of ChatGPT and similar AI programs. "Any decision that is important should not be made by a [language] model on its own," he says. "In a way, it's just common sense."


Original Submission

posted by requerdanos on Friday August 04 2023, @04:29PM   Printer-friendly
from the rent-seeking-for-intangible-assets dept.

AWS to charge customers for public IPv4 addresses from 2024:

Cloud giant AWS will start charging customers for public IPv4 addresses from next year, claiming it is forced to do this because of the increasing scarcity of these and to encourage the use of IPv6 instead.

It is now four years since we officially ran out of IPv4 ranges to allocate, and since then, those wanting a new public IPv4 address have had to rely on address ranges being recovered, either from organizations that close down or from those that return addresses they no longer require as they migrate to IPv6.

If Amazon's cloud division is to be believed, the difficulty in obtaining public IPv4 addresses has seen the cost of acquiring a single address rise by more than 300 percent over the past five years, and as we all know, the business is a little short of cash at the moment, so is having to pass these costs on to users.

"This change reflects our own costs and is also intended to encourage you to be a bit more frugal with your use of public IPv4 addresses and to think about accelerating your adoption of IPv6 as a modernization and conservation measure," writes AWS Chief Evangelist Jeff Barr, on the company news blog.

The update will come into effect on February 1, 2024, when AWS customers will see a charge of $0.005 (half a cent) per IP address per hour for all public IPv4 addresses. These charges will apparently apply whether the address is attached to a service or not, and like many AWS charges, appear inconsequential at first glance but can mount up over time if a customer is using many of them.
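
Worked out, that is $0.005 × 8,760 hours ≈ $43.80 per address per year, so an account holding, say, 100 public IPv4 addresses would accrue roughly $4,380 annually.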


Original Submission

posted by requerdanos on Friday August 04 2023, @11:42AM   Printer-friendly
from the highly-accurate-mimicry dept.

Egg 'signatures' allow drongos to identify cuckoo 'forgeries' almost every time, study finds:

African cuckoos may have met their match with the fork-tailed drongo, which scientists predict can detect and reject cuckoo eggs from their nest on almost every occasion, despite the cuckoo eggs looking, on average, almost identical to drongo eggs.

Fork-tailed drongos, belligerent birds from sub-Saharan Africa, lay eggs with a staggering diversity of colors and patterns. All these colors and patterns are forged by the African cuckoo.

African cuckoos lay their eggs in drongos' nests to avoid rearing their chick themselves (an example of so-called brood parasitism). By forging drongo egg colors and patterns, cuckoos trick drongos into thinking the cuckoo egg is one of their own.

But drongos use knowledge of their own personal egg "signatures" – their eggs' color and pattern – to identify cuckoo egg "forgeries" and reject them from their nests, say scientists. These "signatures" are like the signatures we use in our daily lives: unique to each individual and highly repeatable by the same individual.

Through natural selection, the African cuckoo's eggs have evolved to look almost identical to drongo eggs—a rare example of high-fidelity mimicry in nature.

A team led by researchers at the University of Cambridge and the University of Cape Town, working in collaboration with a community in Zambia, set out to explore the effectiveness of "signatures" as a defense against highly accurate mimicry. The findings are published today in the journal Proceedings of the Royal Society B.

They found that despite near-perfect mimicry of fork-tailed drongo eggs, African cuckoo eggs still have a high probability of being rejected.

Journal Reference:
Lund et al. When perfection isn't enough: host egg signatures are an effective defence against high-fidelity African cuckoo mimicry, Proceedings of the Royal Society B: Biological Sciences (2023). DOI: 10.1098/rspb.2023.1125


Original Submission

posted by requerdanos on Friday August 04 2023, @06:56AM   Printer-friendly
from the no-phishing dept.

https://arstechnica.com/tech-policy/2023/08/reddit-beats-film-industry-wont-have-to-identify-users-who-admitted-torrenting/

Film companies lost another attempt to force Reddit to identify anonymous users who discussed piracy. A federal court on Saturday quashed a subpoena demanding users' names and other identifying details, agreeing with Reddit's argument that the film companies' demands violate the First Amendment.

The plaintiffs are 20 producers of popular movies who are trying to prove that Internet service provider Grande is liable for its subscribers' copyright infringement because the ISP allegedly ignores piracy on its network. Reddit isn't directly involved in the copyright case. But the film companies filed a motion to compel Reddit to respond to a subpoena demanding "basic account information including IP address registration and logs from 1/1/2016 to present, name, email address and other account registration information" for six users who wrote comments on Reddit threads in 2011 and 2018.

[...] This is the second time Beeler ruled against the film companies' attempts to unmask anonymous Reddit users. Beeler, a magistrate judge at US District Court for the Northern District of California, quashed a similar subpoena related to a different set of Reddit users in late April.

[...] Reddit's filing pointed out that the statute of limitations for copyright infringement is three years. The film companies said the statute of limitations is irrelevant to whether the comments can provide evidence in the case against Grande.

[...] When a court evaluates an unmasking request, it considers whether a subpoena "was issued in good faith and not for any improper purpose," whether "the identifying information is directly and materially relevant" to a core claim or defense, and whether "information sufficient to establish or to disprove that claim or defense is unavailable from any other source," the ruling said.

[...] The fact that Grande already provided names of 118 subscribers factored into Beeler's explanation of why she denied the film companies' motion.


Original Submission

posted by janrinok on Friday August 04 2023, @02:12AM   Printer-friendly
from the how-many-politicians-to-change-a-lightbulb dept.

What to know about the ban on incandescent lightbulbs

Retailers can no longer sell the banned lightbulbs as of Aug. 1

The ban on incandescent lightbulbs has officially gone into effect in the U.S., more than a decade after the federal government first passed a rule prohibiting the non-energy-efficient lighting.

[...] A 2020 survey on residential energy consumption conducted by the U.S. Energy Information Administration found that less than half of U.S. households use LED lightbulbs for most or all indoor lighting.

[...] Under the new standard, lightbulbs must produce 45 lumens -- the measure of brightness -- per watt. For comparison, traditional incandescent lightbulbs produce just 15 lumens per watt.
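
To put those figures in perspective: an 800-lumen bulb, roughly the output of a traditional 60-watt incandescent, needs at most about 18 watts to meet the standard (800 ÷ 45 ≈ 17.8), while producing the same light at 15 lumens per watt takes around 53 watts.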

[...] Collectively, Americans are expected to save nearly $3 billion annually on utility bills while cutting carbon emissions by 222 million metric tons over the next 30 years -- equivalent to the emissions generated by 28 million homes in one year, according to the DOE.

[...] Black lights, bug lamps, colored lamps, infrared lamps, plant lights, flood lights, reflector lamps and traffic signals are not included in the ban, according to the DOE.

See also: Incandescent light bulb ban goes into effect this month: Here's what you need to know

TAMPA, Fla. - A nationwide ban on incandescent light bulbs goes into effect on Aug. 1, 2023, which means if they're made or sold by a retailer, that business could be fined up to $542 per bulb.

[...] customers have been getting as many incandescent bulbs as they can before the ban.

It seems one could take that exception for 'colored lamps' and run with it.


Original Submission