

Site News

Join our Folding@Home team:
Main F@H site
Our team page


Funding Goal
For 6-month period:
2022-07-01 to 2022-12-31
(All amounts are estimated)
Base Goal:
$3500.00

Currently:
$438.92

12.5%

Covers transactions:
2022-07-02 10:17:28 ..
2022-10-05 12:33:58 UTC
(SPIDs: [1838..1866])
Last Update:
2022-10-05 14:04:11 UTC --fnord666

Support us: Subscribe Here
and buy SoylentNews Swag


We always have a place for talented people; visit the Get Involved section on the wiki to see how you can make SoylentNews better.

Idiosyncratic use of punctuation - which of these annoys you the most?

  • Declarations and assignments that end with }; (C, C++, Javascript, etc.)
  • (Parenthesis (pile-ups (at (the (end (of (Lisp (code))))))))
  • Syntactically-significant whitespace (Python, Ruby, Haskell...)
  • Perl sigils: @array, $array[index], %hash, $hash{key}
  • Unnecessary sigils, like $variable in PHP
  • macro!() in Rust
  • Do you have any idea how much I spent on this Space Cadet keyboard, you insensitive clod?!
  • Something even worse...

[ Results | Polls ]
Comments: 63 | Votes: 115

posted by hubie on Monday August 07 2023, @08:35PM   Printer-friendly
from the Relatives dept.

https://phys.org/news/2023-08-china-human-lineage.html

A team of paleontologists at the Chinese Academy of Sciences, working with colleagues from Xi'an Jiaotong University, the University of York, the University of Chinese Academy of Sciences and the National Research Center on Human Evolution, has found evidence of a previously unknown human lineage. In their study, reported in the Journal of Human Evolution, the group analyzed the fossilized jawbone, partial skull and some leg bones of a hominin dated to 300,000 years ago.

The fossils were excavated at a site in Hualongdong, in what is now a part of East China. They were subsequently subjected to both a morphological and a geometric assessment, with the initial focus on the jawbone, which exhibited unique features—a triangular lower edge and a unique bend.


Original Submission

posted by requerdanos on Monday August 07 2023, @08:30PM   Printer-friendly
from the soylentnews-is-people dept.

Meeting Announcement: The next meeting of the SoylentNews governance committee will be this coming Friday, August 11th, 2023 at 20:30 UTC (1:30pm PDT, 4:30pm EDT) in #governance on SoylentNews IRC. Logs of the meeting will be available afterwards for review, and minutes will be published when available.

The agenda for the upcoming meeting will come out within the next few days, 24 hours or more before the meeting. The agenda is expected to cover, at a minimum, actions arising from the previous meeting, such as exploring the formation of a new entity, and janrinok's report on management structure.

Minutes and agenda, and other governance committee information can be found on the SoylentNews Wiki at: https://wiki.staging.soylentnews.org/wiki/Governance

Call for experts: The committee is calling for experts with relevant knowledge of entity formation to attend the meeting. Their advice may be helpful to the committee and the greater community going forward and would be greatly appreciated.

As always, the community is welcome to observe and participate and is hereby invited to come and do both. SoylentNews is People!

posted by requerdanos on Monday August 07 2023, @03:52PM   Printer-friendly
from the microchips-into-his-brain dept.

AI-powered brain implants restore touch and movement to paralysed man:

In a world first, a quadriplegic man in the United States has regained touch and movement after surgeons successfully implanted microchips into his brain.

AI is then used to read, interpret and translate his thoughts into action.

Keith Thomas, 45, broke his neck in an accident and became paralysed from his chest down.

[...] A team of medical professionals first spent months mapping Thomas' brain using MRIs to help pinpoint the areas responsible for both arm movement and the sensation of touch in his hand.

He then underwent a 15-hour open-brain surgery.

[...] Dr Ashesh Mehta, the surgeon who performed Thomas' brain surgery, said the wiring in Thomas' brain was "broken".

[...] "What we did was a bypass, so we bypassed the block. So, we're basically using a computer to read Keith's thoughts and then translate that into a machine that then stimulates his hand so that he can move it," explained Mehta.

The procedure - dubbed a "double neural bypass" - works in the other direction as well: Thomas can now "feel" through tiny electrodes rather than through the neurons that once carried sensation from his fingertips.

The tiny sensors at his fingertips and palm send touch and pressure information back to the sensory area of his brain implant to restore sensation through a computer instead of through the normal pathway through the spinal cord.

"It's almost like fooling the nervous system to make it work," said Mehta.


Original Submission

posted by requerdanos on Monday August 07 2023, @11:07AM   Printer-friendly
from the on-the-blockchain dept.

Arthur T Knackerbracket has processed the following story:

Ilya Lichtenstein and Heather Morgan, the couple who were arrested last year for the massive 2016 Bitfinex hack involving billions of dollars of cryptocurrency, have pleaded guilty in court. Lichtenstein has admitted that he used multiple advanced hacking tools and techniques to gain entry into the cryptocurrency exchange's network. He then authorized 2,000 transactions to move 119,754 bitcoins to wallets he controlled. To cover his tracks, he said he deleted access credentials, logs and other digital breadcrumbs that could give him away. Morgan, his wife, helped him move and launder the stolen funds. 

If you'll recall, the Justice Department seized 95,000 of the stolen bitcoins at the time of their arrest. Back then, that digital coin hoard was worth a whopping $3.6 billion and was the largest financial seizure in the agency's history. Authorities were able to trace more of the stolen funds after that to recover an additional $475 million worth of cryptocurrency.

According to the DOJ, Lichtenstein and Morgan used false identities to set up online accounts on darknet markets and cryptocurrency exchanges. They then withdrew the funds and distributed the bitcoins from there by converting them into other forms of cryptocurrency and keeping them in crypto mixing services. By doing so, they obfuscated the coins' sources and made them harder to trace.


Original Submission

posted by hubie on Monday August 07 2023, @06:21AM   Printer-friendly

Brave Software, maker of the Brave web browser, has tuned its search engine to run on a homegrown index of images and videos in an effort to end its dependency on "Big Tech" rivals.

On Thursday, the biz said image and video results from Brave Search – available on the web at search.brave.com and via its browser – will be served from Brave's own index.

[...] Brave now aims to ride the wave of discontent with Big Tech by highlighting its commitment to privacy and independence – Small Tech.

(As some have pointed out, Brave has some skin in the AI content-generation game alongside OpenAI et al: it offers an API that takes search queries and outputs answers formatted for use with, say, machine-learning models.)

"Brave Search is 100 percent private and anonymous, which sets a high bar for image/video search to meet," the developer said in a blog post provided earlier to The Register.

"Whether it’s a matter of personal safety or personal preference, users should be able to discover content without their search engine reporting and profiling those results to a Big Tech company."

[...] Brave argues that having its own index frees the company from content decisions made by others. As an example, the browser biz points to an incident two years ago when Bing briefly stopped serving search results for the Tiananmen Square "tank man," an inquiry that remains unwelcome in China. Brave Search also couldn't find "tank man" at the time because the service sourced its image results from Microsoft Bing.

No longer. However, Brave says it is committed to making it easy to conduct searches using other search engines for queries that Brave Search cannot answer. For Brave Search on the web, that means those making inquiries have the option to send their keywords to other search services – via links shown below the top 10 results – if Brave's index proves disappointing.

"Brave is on a mission to build a user-first Web," the company said in its blog post. "That mission starts with the Brave browser and Brave Search. With the release of image and video search, we’re continuing to innovate within the search industry, providing viable and preferable products for users who want choice and transparency in their search for information online."


Original Submission

posted by janrinok on Monday August 07 2023, @01:33AM   Printer-friendly
from the who-owns-the-vehicle dept.

Arthur T Knackerbracket has processed the following story:

Many of the most attractive premium features in Tesla vehicles are things that all of the cars are physically capable of but are locked down at the software level. Since there are always security researchers and hackers trying to pick those proverbial locks, it was inevitable that someone would figure it out. And as of this week, that has finally happened.

According to a new report from TechCrunch, the jailbreak was discovered by three Ph.D. students at Germany's Technische Universität Berlin. They plan on presenting their findings at next week's Black Hat cybersecurity conference in Las Vegas.

"We are not the evil outsider, but we're actually the insider, we own the car," researcher and TU Berlin Ph.D. candidate Christian Werling told TechCrunch. "And we don't want to pay these $300 for the rear heated seats."

Specifically, he and his colleagues used a technique called voltage glitching or a voltage fault injection attack to disrupt the AMD processor that powers the car's Tesla Infotainment System and get it to do their bidding. "If we do it at the right moment, we can trick the CPU into doing something else," Werling added. "It has a hiccup, skips an instruction, and accepts our manipulated code. That's basically what we do in a nutshell."

According to the report, this new exploit could also enable hackers to activate the $15,000 self-driving feature in regions where it's locked out, though the researchers haven't tried that themselves yet. But since the vulnerability — while affecting software capabilities — is hardware-based, Tesla can't patch it, with the researchers telling TechCrunch that a fix would require replacing the affected hardware. Tesla did not respond to TechCrunch's request for comment on the exploit.


Original Submission

posted by hubie on Sunday August 06 2023, @08:47PM   Printer-friendly

Standing 40 cm high and 70 cm wide, the semi-transparent display has translations pop up simultaneously on the screen as the station staff and a foreign tourist speak:

With more than two million visitors flocking to Japan last month in the wake of the country's post-pandemic reopening, railway companies are gearing up to warmly greet the influx of global travellers.

Seibu Railway, one of the country's large railroad companies, is implementing a new simultaneous translation system to help foreign tourists navigate Tokyo's metro, which is notorious for its complexity.

[...] this new semi-transparent display has translations pop up simultaneously on the screen as the station staff and foreign tourists communicate.

"The display we have introduced can automatically translate between Japanese and other languages. When customers speak in a foreign language, the station attendant can see it in Japanese, and when the station attendant speaks Japanese, customers can read the sentences in their own language," said Ayano Yajima, Seibu Railway Sales and Marketing supervisor.

"Google Translate isn't always available because you don't always have Wi-Fi everywhere you go, so places like this, it's also much faster than pulling up your phone, typing everything out, showing it and (there being) misunderstandings. Having it like this, clear on the screen, it's really nice," said Kevin Cometto, an Italian student visiting Japan.

The VoiceBiz UCDisplay supports Japanese and 11 other languages including English, French, and Spanish.

The station staff previously used translation apps.

But with the translation window, a face-to-face conversation through the screen is possible, complete with facial expressions and gestures.

According to Seibu Railway, the device is designed to help customers with more complex requests such as seeking directions or information about the local area.


Original Submission

posted by requerdanos on Sunday August 06 2023, @04:01PM   Printer-friendly
from the what-happens-in-Vegas-stays-in-Vegas dept.

https://arstechnica.com/cars/2023/08/musks-boring-company-gets-ok-to-dig-68-miles-of-tunnels-under-las-vegas/

Elon Musk's tunneling company has permission to significantly expand its operations under the city of Las Vegas. Last month, the Las Vegas City Council voted unanimously to approve the Boring Company's plan to dig more tunnels under the city, following in the steps of Clark County, which in May gave a similar thumbs-up to the tunneling concern. The company's plan calls for 68 miles of tunnels and 81 stations, served by a fleet of Tesla electric vehicles, each able to carry three passengers at a time.

[...] But the Boring Company's plans were scaled back from maglev trains and vacuum tubes to high-speed electric pods, and then to just regular Teslas with human drivers, and interest waned.

Except in Las Vegas. There, the Las Vegas Convention and Visitors Authority said yes to a $48.6 million, 2.2-mile loop underneath the convention center. In 2021, the LVCC Loop opened a 1.7-mile network with three stations; the Boring Company claims it has transported 1.15 million passengers, with a peak capacity of just 4,500 people per hour. For context, a subway system can be expected to carry between 600 and 1,000 people per train.
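
The scale of that comparison is easier to see with a little arithmetic. Here is a quick back-of-the-envelope check in Python, using only the figures quoted above (three passengers per car, a claimed peak of 4,500 people per hour):

    # Implied vehicle throughput at the LVCC Loop's claimed peak,
    # using the per-car occupancy from the expansion plan above.
    peak_people_per_hour = 4500
    passengers_per_car = 3

    car_trips_per_hour = peak_people_per_hour / passengers_per_car
    print(car_trips_per_hour)           # 1500.0 car trips per hour
    print(3600 / car_trips_per_hour)    # = one car every 2.4 seconds

    # The same passenger flow at 600-1,000 people per train:
    print(peak_people_per_hour / 800)   # ~5.6 mid-sized trains per hour

In other words, hitting the claimed peak means dispatching a car roughly every two and a half seconds, a flow a subway line could match with a handful of trains.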


Original Submission

posted by Fnord666 on Sunday August 06 2023, @11:16AM   Printer-friendly
from the peering-into-the-abyss dept.

https://arstechnica.com/science/2023/07/a-jargon-free-explanation-of-how-ai-large-language-models-work/

When ChatGPT was introduced last fall, it sent shockwaves through the technology industry and the larger world. Machine learning researchers had been experimenting with large language models (LLMs) for a few years by that point, but the general public had not been paying close attention and didn't realize how powerful they had become.

Today, almost everyone has heard about LLMs, and tens of millions of people have tried them out. But not very many people understand how they work.

If you know anything about this subject, you've probably heard that LLMs are trained to "predict the next word" and that they require huge amounts of text to do this. But that tends to be where the explanation stops. The details of how they predict the next word are often treated as a deep mystery.
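
The objective itself is simple enough to demonstrate at toy scale. Here is a minimal sketch of "predict the next word" as a bigram counter in Python; the corpus is invented for illustration, and real LLMs replace the counting with a neural network trained on vastly more text:

    # Count which word follows which in a tiny corpus, then predict
    # the most frequent follower. The training objective of an LLM is
    # the same in spirit: make the observed next word more likely.
    from collections import Counter, defaultdict

    corpus = "the cat sat on the mat and the cat slept".split()

    follower_counts = defaultdict(Counter)
    for word, next_word in zip(corpus, corpus[1:]):
        follower_counts[word][next_word] += 1

    def predict_next(word):
        """Most common word observed after `word`, or None."""
        followers = follower_counts.get(word)
        return followers.most_common(1)[0][0] if followers else None

    print(predict_next("the"))  # -> 'cat' (seen twice, vs. 'mat' once)

What makes LLMs interesting is everything this sketch leaves out: they generalize to sequences they have never seen, which is where word vectors and attention come in.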
[...]
To understand how language models work, you first need to understand how they represent words. Humans represent English words with a sequence of letters, like C-A-T for "cat." Language models use a long list of numbers called a "word vector." For example, here's one way to represent cat as a vector:

[0.0074, 0.0030, -0.0105, 0.0742, 0.0765, -0.0011, 0.0265, 0.0106, 0.0191, 0.0038, -0.0468, -0.0212, 0.0091, 0.0030, -0.0563, -0.0396, -0.0998, -0.0796, ..., 0.0002]

(The full vector is 300 numbers long—to see it all, click here and then click "show the raw vector.")

Why use such a baroque notation? Here's an analogy. Washington, DC, is located at 38.9 degrees north and 77 degrees west. We can represent this using a vector notation:

  • Washington, DC, is at [38.9, 77]
  • New York is at [40.7, 74]
  • London is at [51.5, 0.1]
  • Paris is at [48.9, -2.4]

This is useful for reasoning about spatial relationships.
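
In this representation, "spatial reasoning" is just arithmetic on the vectors. A minimal sketch using the coordinates above (these are raw latitude/longitude pairs, so the distances come out in degrees rather than kilometers; the point is the relative ordering, not geographic precision):

    # Straight-line distance between city vectors. Cities that are
    # near each other in the world are near each other in this space.
    import math

    cities = {
        "Washington, DC": (38.9, 77.0),
        "New York": (40.7, 74.0),
        "London": (51.5, 0.1),
        "Paris": (48.9, -2.4),
    }

    def distance(a, b):
        return math.dist(cities[a], cities[b])

    print(distance("Washington, DC", "New York"))  # ~3.5
    print(distance("Washington, DC", "London"))    # ~77.9
    print(distance("London", "Paris"))             # ~3.6

Word vectors work the same way, just with 300 coordinates instead of two.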
[...]
For example, the words closest to cat in vector space include dog, kitten, and pet. A key advantage of representing words with vectors of real numbers (as opposed to a string of letters, like C-A-T) is that numbers enable operations that letters don't.

Words are too complex to represent in only two dimensions, so language models use vector spaces with hundreds or even thousands of dimensions.
[...]
Researchers have been experimenting with word vectors for decades, but the concept really took off when Google announced its word2vec project in 2013. Google analyzed millions of documents harvested from Google News to figure out which words tend to appear in similar sentences. Over time, a neural network trained to predict which words co-occur with other words learned to place similar words (like dog and cat) close together in vector space.
[...]
Because these vectors are built from the way humans use words, they end up reflecting many of the biases that are present in human language. For example, in some word vector models, "doctor minus man plus woman" yields "nurse." Mitigating biases like this is an area of active research.
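
The analogy arithmetic behind examples like that is vector addition followed by a nearest-neighbor search. Here is a minimal sketch with hand-built three-dimensional toy vectors; the numbers are invented so the analogy works by construction, whereas word2vec learns the equivalent structure from data:

    # Toy vectors: the three coordinates loosely encode (gender
    # association, medical-ness, seniority). Dimensions in real
    # learned models are not individually interpretable like this.
    import numpy as np

    vectors = {
        "man":    np.array([ 1.0, 0.0, 0.0]),
        "woman":  np.array([-1.0, 0.0, 0.0]),
        "doctor": np.array([ 1.0, 1.0, 0.8]),
        "nurse":  np.array([-1.0, 1.0, 0.6]),
        "banana": np.array([ 0.0, -1.0, 0.0]),
    }

    def nearest(target, exclude=()):
        """Word whose vector is closest to `target` (Euclidean)."""
        words = [w for w in vectors if w not in exclude]
        return min(words, key=lambda w: np.linalg.norm(vectors[w] - target))

    # doctor - man + woman lands nearest to nurse: the bias in the
    # training text has become geometry in the vector space.
    result = vectors["doctor"] - vectors["man"] + vectors["woman"]
    print(nearest(result, exclude={"doctor", "man", "woman"}))  # nurse

Excluding the input words from the search mirrors how word2vec analogy queries are usually evaluated.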
[...]
Traditional software is designed to operate on data that's unambiguous. If you ask a computer to compute "2 + 3," there's no ambiguity about what 2, +, or 3 mean. But natural language is full of ambiguities that go beyond homonyms and polysemy:

  • In "the customer asked the mechanic to fix his car," does "his" refer to the customer or the mechanic?
  • In "the professor urged the student to do her homework" does "her" refer to the professor or the student?
  • In "fruit flies like a banana" is "flies" a verb (referring to fruit soaring across the sky) or a noun (referring to banana-loving insects)?

People resolve ambiguities like this based on context, but there are no simple or deterministic rules for doing this. Rather, it requires understanding facts about the world. You need to know that mechanics typically fix customers' cars, that students typically do their own homework, and that fruit typically doesn't fly.

Word vectors provide a flexible way for language models to represent each word's precise meaning in the context of a particular passage.
[...]
Research suggests that the first few layers focus on understanding the sentence's syntax and resolving ambiguities like we've shown above. Later layers (which we're not showing to keep the diagram a manageable size) work to develop a high-level understanding of the passage as a whole.
[...]
In short, these nine attention heads enabled GPT-2 to figure out that "John gave a drink to John" doesn't make sense and choose "John gave a drink to Mary" instead.

We love this example because it illustrates just how difficult it will be to fully understand LLMs. The five-member Redwood team published a 25-page paper explaining how they identified and validated these attention heads. Yet even after they did all that work, we are still far from having a comprehensive explanation for why GPT-2 decided to predict "Mary" as the next word.
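
Mechanically, an attention head is a few matrix multiplications and a softmax. The sketch below is a schematic single head in numpy with random stand-in weights and invented sizes; it shows the mechanism (each position querying earlier positions and mixing in their information), not GPT-2's actual learned heads:

    # Scaled dot-product attention for one head, with a causal mask
    # so each word can only attend to itself and earlier words.
    import numpy as np

    rng = np.random.default_rng(0)
    seq_len, d_model, d_head = 5, 16, 8        # invented sizes

    x = rng.normal(size=(seq_len, d_model))    # one vector per word
    W_q = rng.normal(size=(d_model, d_head))   # random stand-ins for
    W_k = rng.normal(size=(d_model, d_head))   # the weights a real
    W_v = rng.normal(size=(d_model, d_head))   # model learns

    Q, K, V = x @ W_q, x @ W_k, x @ W_v
    scores = Q @ K.T / np.sqrt(d_head)         # position-to-position
    scores += np.triu(np.full((seq_len, seq_len), -np.inf), k=1)
    weights = np.exp(scores) / np.exp(scores).sum(axis=1, keepdims=True)
    output = weights @ V                       # contextualized vectors

    print(weights.round(2))  # rows sum to 1; future positions get 0

In a real model, dozens of such heads run in parallel at every layer, and it took the Redwood team a 25-page paper to pin down what just nine of them were doing.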
[...]
In a 2020 paper, researchers from Tel Aviv University found that feed-forward layers work by pattern matching: Each neuron in the hidden layer matches a specific pattern in the input text.
[...]
Recent research from Brown University revealed an elegant example of how feed-forward layers help to predict the next word. Earlier, we discussed Google's word2vec research showing it was possible to use vector arithmetic to reason by analogy. For example, Berlin - Germany + France = Paris.

The Brown researchers found that feed-forward layers sometimes use this exact method to predict the next word.
[...]
All the parts of LLMs we've discussed in this article so far—the neurons in the feed-forward layers and the attention heads that move contextual information between words—are implemented as a chain of simple mathematical functions (mostly matrix multiplications) whose behavior is determined by adjustable weight parameters. Just as the squirrels in my story loosen and tighten the valves to control the flow of water, so the training algorithm increases or decreases the language model's weight parameters to control how information flows through the neural network.
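
Concretely, a feed-forward block is two weight matrices with a nonlinearity in between, and every entry in those matrices is one of the adjustable parameters the training algorithm tunes. A minimal sketch with invented sizes and random stand-in weights:

    # A transformer-style feed-forward block: expand, apply ReLU,
    # contract. Each hidden neuron fires when its column of W1 matches
    # a pattern in the input -- the pattern matching described above.
    import numpy as np

    rng = np.random.default_rng(1)
    d_model, d_hidden = 16, 64                 # invented sizes

    W1, b1 = rng.normal(size=(d_model, d_hidden)), np.zeros(d_hidden)
    W2, b2 = rng.normal(size=(d_hidden, d_model)), np.zeros(d_model)

    def feed_forward(x):
        """x: (seq_len, d_model) vectors in, same shape out."""
        hidden = np.maximum(0.0, x @ W1 + b1)  # ReLU nonlinearity
        return hidden @ W2 + b2

    x = rng.normal(size=(5, d_model))
    print(feed_forward(x).shape)               # (5, 16)

Training never changes this code path; it only nudges the numbers inside W1, b1, W2, and b2.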
[...]
(If you want to learn more about backpropagation, check out our 2018 explainer on how neural networks work.)
[...]
Over the last five years, OpenAI has steadily increased the size of its language models. In a widely read 2020 paper, OpenAI reported that the accuracy of its language models scaled "as a power-law with model size, dataset size, and the amount of compute used for training, with some trends spanning more than seven orders of magnitude."

The larger their models got, the better they were at tasks involving language. But this was only true if they increased the amount of training data by a similar factor. And to train larger models on more data, you need a lot more computing power.
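
Schematically, the model-size trend reported in that paper has the power-law form below (written in LaTeX notation; N is the parameter count, N_c a fitted constant, and the paper's fitted exponent for model size was roughly 0.076):

    L(N) \approx \left( \frac{N_c}{N} \right)^{\alpha_N}, \qquad \alpha_N \approx 0.076

An exponent that small is why each improvement in loss requires a multiplicative, not additive, increase in model size.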
[...]
Psychologists call this capacity to reason about the mental states of other people "theory of mind." Most people have this capacity from the time they're in grade school. Experts disagree about whether any non-human animals (like chimpanzees) have theory of mind, but there's a general consensus that it's important for human social cognition.

Earlier this year, Stanford psychologist Michal Kosinski published research examining the ability of LLMs to solve theory-of-mind tasks. He gave various language models passages like the one we quoted above and then asked them to complete a sentence like "she believes that the bag is full of." The correct answer is "chocolate," but an unsophisticated language model might say "popcorn" or something else.
[...]
It's worth noting that researchers don't all agree that these results indicate evidence of theory of mind; for example, small changes to the false-belief task led to much worse performance by GPT-3, and GPT-3 exhibits more variable performance across other tasks measuring theory of mind. As one of us (Sean) has written, it could be that successful performance is attributable to confounds in the task—a kind of "clever Hans" effect, only in language models rather than horses.
[...]
In April, researchers at Microsoft published a paper arguing that GPT-4 showed early, tantalizing hints of artificial general intelligence—the ability to think in a sophisticated, human-like way.

For example, one researcher asked GPT-4 to draw a unicorn using an obscure graphics programming language called TikZ. GPT-4 responded with a few lines of code that the researcher then fed into the TikZ software. The resulting images were crude, but they showed clear signs that GPT-4 had some understanding of what unicorns look like.
[...]
At the moment, we don't have any real insight into how LLMs accomplish feats like this. Some people argue that such examples demonstrate that the models are starting to truly understand the meanings of the words in their training set. Others insist that language models are "stochastic parrots" that merely repeat increasingly complex word sequences without truly understanding them.
[...]
Further, prediction may be foundational to biological intelligence as well as artificial intelligence. In the view of philosophers like Andy Clark, the human brain can be thought of as a "prediction machine" whose primary job is to make predictions about our environment that can then be used to navigate that environment successfully.
[...]
Traditionally, a major challenge for building language models was figuring out the most useful way of representing different words—especially because the meanings of many words depend heavily on context. The next-word prediction approach allows researchers to sidestep this thorny theoretical puzzle by turning it into an empirical problem. It turns out that if we provide enough data and computing power, language models end up learning a lot about how human language works simply by figuring out how to best predict the next word. The downside is that we wind up with systems whose inner workings we don't fully understand.


Original Submission

posted by Fnord666 on Sunday August 06 2023, @06:32AM   Printer-friendly
from the keeping-the-pipes-clean dept.

https://phys.org/news/2023-08-art-roman-revealed.html

While 21st century water companies struggle to maintain clean, fresh supplies, new research from an international team led by Oxford geoarchaeologist Dr. Gül Sürmelihindi reveals that, some 2,000 years ago, Roman water engineers were keeping up a regular program of managing and maintaining the ancient water systems.

According to the research, published in Scientific Reports, ancient water management traces are captured in the limescale deposits which built up on the walls and floor of the ancient Roman aqueduct of Divona (Cahors, France).

The evidence shows that these deposits were regularly and partially removed during maintenance: "The discovery of traces of regular maintenance in the carbonate deposits... such as tool marks, calcite deformation twins, debris from cleaning and repairs... are proof of periodic manual carbonate removal by Roman maintenance teams."

Journal Reference:
Sürmelihindi, Gül, Passchier, Cees W., Rigal, Didier, et al. Roman aqueduct maintenance in the water supply system of Divona, France [open], Scientific Reports (DOI: 10.1038/s41598-023-38655-z)


Original Submission

posted by hubie on Sunday August 06 2023, @01:47AM   Printer-friendly

https://computer.rip/2023-07-29-Free-Public-WiFi.html

Once, many years ago, I stayed on the 62nd floor of the Westin Peachtree Plaza in Atlanta, Georgia. This was in the age when the price of a hotel room was directly correlated with the price of the WiFi service, and as a high school student I was not prepared to pay in excess of $15 a day for the internet. As I remember, a Motel 6 that was not blocks away but within line of sight ended up filling the role. But even up there, 62 floors from the ground, there was false promise: Free Public WiFi.

I am not the first person to write on this phenomenon, I think I originally came to understand it as a result of a 2010 segment of All Things Considered. For a period of a few years, almost everywhere you went, there was a WiFi network called "Free Public WiFi." While it was both free and public in the most literal sense, it did not offer internet access. It was totally useless, and fell somewhere between a joke, a scam, and an accident of history. Since I'm not the first to write about it, I have to be the most thorough, and so let's start out with a discussion of WiFi itself.


Original Submission

posted by requerdanos on Saturday August 05 2023, @09:01PM   Printer-friendly

Arthur T Knackerbracket has processed the following story:

Amazon Web Services has started offering a cloudy server packing a custom fourth-generation Intel Xeon Scalable processor and boasting 96 cores or 192 vCPUs.

It's not clear if that's a colossal chip that features 36 more cores than the mightiest Xeon Intel lists for sale to the public – the 60-core Platinum 8490H – or a two-socket server with a lesser processor.

Intel has form doing custom jobs that beat the kit of its official product list: we once spotted Oracle with a Xeon that outpaced processors sold to other customers.

Whatever the kit inside the box, news of it emerged in a Wednesday post detailing the newly-available M7i-Flex and M7i instance types available in the Amazon Elastic Compute Cloud (Amazon EC2).

That post lists an instance type called the "m7i.48xlarge" that offers 192 vCPUs, and AWS's CPU options page lists the instance as offering 96 default CPU cores.
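
Anyone with an AWS account can check the advertised shape of the instance themselves via the EC2 DescribeInstanceTypes API. A sketch using boto3, assuming configured credentials and a region where the m7i family has rolled out:

    # Ask EC2 for the vCPU topology of m7i.48xlarge.
    import boto3

    ec2 = boto3.client("ec2", region_name="us-east-1")
    resp = ec2.describe_instance_types(InstanceTypes=["m7i.48xlarge"])

    info = resp["InstanceTypes"][0]["VCpuInfo"]
    print("cores:", info["DefaultCores"])                  # expected: 96
    print("threads/core:", info["DefaultThreadsPerCore"])  # expected: 2
    print("vCPUs:", info["DefaultVCpus"])                  # expected: 192

Note that 96 cores at two threads each yields the listed 192 vCPUs either way; the API won't say whether those cores sit on one package or two, which is exactly the open question.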

We've asked AWS and Intel to detail the spec of the silicon, because a single processor with 96 cores would be well beyond what Chipzilla has spoken about in public.


Original Submission

posted by requerdanos on Saturday August 05 2023, @04:14PM   Printer-friendly

Arthur T Knackerbracket has processed the following story:

The James Webb Space Telescope (JWST) has taken an astonishing new image of the Ring Nebula. This glowing, donut-shaped nebula has never been seen in such intricate detail before.

The Ring Nebula is about 2600 light years away in the direction of the constellation Lyra. It is what astronomers call a planetary nebula, which forms when a dying star blows off its outer layers to create a shroud of gas and dust.

By chance, this nebula happens to be oriented so that from Earth we view it face-on, with the stellar corpse in the centre circled by its titular ring of bright nitrogen and sulfur. The whole thing is enveloped in a veil of oxygen gas, which gives it a greenish tinge when the star’s light passes through it.

“We are witnessing the final chapters of a star’s life, a preview of the sun’s distant future, so to speak,” said Mike Barlow at University College London in a statement. “We can use the Ring Nebula as our laboratory to study how planetary nebulae form and evolve.”

Also at Space.com


Original Submission

posted by requerdanos on Saturday August 05 2023, @11:30AM   Printer-friendly
from the negligent-cybersecurity-practices dept.

https://arstechnica.com/security/2023/08/microsoft-cloud-security-blasted-for-its-culture-of-toxic-obfuscation/

Microsoft has once again come under blistering criticism for the security practices of Azure and its other cloud offerings, with the CEO of security firm Tenable saying Microsoft is "grossly irresponsible" and mired in a "culture of toxic obfuscation."

The comments from Amit Yoran, chairman and CEO of Tenable, come six days after Sen. Ron Wyden (D-Ore.) blasted Microsoft for what he said were "negligent cybersecurity practices" that enabled hackers backed by the Chinese government to steal hundreds of thousands of emails from cloud customers, including officials in the US Departments of State and Commerce. Microsoft has yet to provide key details about the mysterious breach, which involved the hackers obtaining an extraordinarily powerful encryption key granting access to a variety of its other cloud services. The company has taken pains ever since to obscure its infrastructure's role in the mass breach.

Arthur T Knackerbracket has processed the following story:

Yoran has more to add to the senator’s arguments, writing in his post that Microsoft has demonstrated a “repeated pattern of negligent cybersecurity practices,” enabling Chinese hackers to spy on the US government. He also revealed Tenable’s discovery of an additional cybersecurity flaw in Microsoft Azure and says the company took too long to address it.

Tenable initially discovered the flaw in March and found that it could give bad actors access to the sensitive data of organizations using the service, including at least one bank. Yoran claims Microsoft took “more than 90 days to implement a partial fix” after Tenable notified the company, adding that the fix only applies to “new applications loaded in the service.” According to Yoran, the bank and all the other organizations “that had launched the service prior to the fix” are still affected by the flaw — and are likely unaware of that risk.

Yoran says Microsoft plans to fix the issue by the end of September but calls the delayed response “grossly irresponsible, if not blatantly negligent.” He also points to data from Google’s Project Zero, which indicates that Microsoft products have made up 42.5 percent of all discovered zero-day vulnerabilities since 2014.

“What you hear from Microsoft is ‘just trust us,’ but what you get back is very little transparency and a culture of toxic obfuscation,” Yoran writes. “How can a CISO, board of directors or executive team believe that Microsoft will do the right thing given the fact patterns and current behaviors?”


Original Submission #1 | Original Submission #2

posted by requerdanos on Saturday August 05 2023, @06:44AM   Printer-friendly
from the we've-been-trying-to-contact-you dept.

Arthur T Knackerbracket has processed the following story:

In December 2022, the FCC proposed the biggest fine it has ever issued against a robocalling outfit – $299,997,000. The penalty eclipses the previous record, set by Rising Eagle and JSquared Telecom in 2020, by nearly $75 million. After a lengthy investigation, the Commission decided on Thursday to proceed with the huge fine.

The record-breaking punishment goes to an illegal transnational robocalling operation. The outfit is so big (or so blatantly illegal) that it does not have an official umbrella company. It's more of a network of cooperating businesses that made more than five billion automated calls to over 500 million phone numbers within a three-month period in 2021.

In doing so, the FCC says the organized operation broke multiple federal laws by spoofing more than one million telephone numbers to hide their actual origin and trick people into answering the calls. It also violated numerous other FCC regulations.

[...] The operation has allegedly been around since 2018 and primarily sold consumers vehicle service contracts falsely disguised as auto warranties. Two primary bad actors – Roy M. Cox and Aaron Michael Jones – already hold lifetime bans from running telemarketing businesses after losing a lawsuit brought against them by the FCC and the State of Texas. Business names associated with the illegal enterprise include Sumco Panama, Virtual Telecom, Davis Telecom, Geist Telecom, Fugle Telecom, Tech Direct, Mobi Telecom, and Posting Express.

[...] It's hard to nail down robocallers, but it's at least nice to see the FCC trying to hit them with huge penalties instead of laughable slaps on the wrist.


Original Submission