

posted by janrinok on Thursday August 08, @09:13PM

Arthur T Knackerbracket has processed the following story:

Users of Cryptonator – an online digital wallet and cryptocurrency exchange – received an unpleasant surprise last weekend after the service was shuttered in a combined operation run by the FBI, the US Internal Revenue Service (IRS), and German police.

Cryptonator allowed people to store crypto-coins remotely, transfer funds, and exchange one cryptocurrency for another. It could be used for innocent purposes, but given that the service was happy to turn a blind eye to whoever was using it, it is no surprise it was allegedly used by questionable netizens, hence its seizure by the Feds.

According to testimony [PDF] from IRS special agent Justin Allen, Bitcoin wallet addresses controlled by Cryptonator have been used to send and receive $71 million to and from entities sanctioned by US authorities. Another $54 million passing through the service involved "hacked or stolen funds," $25 million was transacted with dark-web marketplaces or fraud outfits, and over $8 million was linked to ransomware campaigns.

The court documents state that the exchange took a 0.9 percent cut of merchant transactions.

"The operation of Cryptonator involved an international money laundering scheme that, by virtue of its business model, catered to criminals," states a criminal indictment [PDF] filed by the Feds in a Tampa, Florida court.

"Cryptonator facilitated crimes, including but not limited to, computer hacking, ransomware, fraud, and identity theft. Since its founding Cryptonator received criminal proceeds of, among other crimes, numerous computer intrusions and hacking incidents, ransomware scams, various fraud markets, and identity theft schemes," prosecutors allege.

The accused CEO of the site, Roman Boss – who also uses the surname Pikulev – is a Russian national living in Germany. He is said to have set up the exchange in December 2013, using the email address trucksale@gmail.com.

When presented with a warrant to access that Gmail account, Google found it had a message in it from the address contact@cryptonator.com to a financial institution with the note: "I'm Roman, CEO of Cryptonator."

In a sting operation in July 2021, an FBI operative in Florida registered an account with Cryptonator. The agent spent around $4,195 in digicash at a known dark web site that, it's claimed, trades in stolen identification records. Court documents allege that Boss facilitated that purchase via Cryptonator, telling the agent all transactions were anonymous and currency would be "mixed" at his end to disguise its source.

In September another FBI agent, who had been operating undercover as an affiliate of a known ransomware group, is said to have used a Cryptonator account to offer the criminals fictitious stolen data in exchange for a cut of the proceeds. That agent received Bitcoin worth $14,500 in two transactions through Cryptonator, it's said.

Boss is charged with two offenses: Operating an unlicensed money transmitting business and conspiracy to commit money laundering. The DoJ and IRS are seeking his arrest and the forfeiture of any and all illegal funds. Neither agency has responded to The Register's request for comment at the time of writing.


Original Submission

posted by janrinok on Thursday August 08, @04:28PM
from the what-goes-up... dept.

NASA Weighs SpaceX Rescue for Stranded Boeing Starliner Crew

NASA weighs SpaceX rescue for stranded Boeing Starliner crew:

A final decision on whether to persist with Boeing's troubled Starliner -- which experienced worrying propulsion issues as it flew up to the orbital platform in June -- is expected later this month, officials said Wednesday in a call with reporters.

Detailed planning is already underway with Boeing's rival SpaceX, owned by Elon Musk, to potentially launch their scheduled Crew-9 mission on September 24 with just two astronauts rather than the usual four.

Boeing's Starliner Astronauts Could Return on SpaceX Capsule in Feb 2025, NASA Says

Boeing's Starliner astronauts could return on SpaceX capsule in Feb 2025, NASA says - West Hawaii Today:

NASA officials said on Wednesday the two astronauts delivered to the International Space Station in June by Boeing's Starliner could return on SpaceX's Crew Dragon in February 2025 if Starliner is still deemed unsafe to return to Earth.

The U.S. space agency has been discussing potential plans with SpaceX to leave two seats empty on an upcoming Crew Dragon launch for NASA astronauts Butch Wilmore and Suni Williams, who became the first crew to fly Boeing's Starliner capsule.

The astronauts' test mission, initially expected to last about eight days on the station, has been drawn out by issues on Starliner's propulsion system that have increasingly called into question the spacecraft's ability to safely return them to Earth as planned.

A Boeing spokesperson said if NASA decides to change Starliner's mission, the company "will take the actions necessary to configure Starliner for an uncrewed return."

Thruster failures during Starliner's initial approach to the ISS in June and several leaks of helium – used to pressurize those thrusters – have set Boeing off on a testing campaign to understand the cause and propose fixes to NASA, which has the final say. Recent results have unearthed new information, causing greater alarm about a safe return.

The latest test data have stirred disagreements and debate within NASA about whether to accept the risk of a Starliner return to Earth, or make the call to use Crew Dragon instead.

Using a SpaceX craft to return astronauts that Boeing had planned to bring back on Starliner would be a major blow to an aerospace giant that has struggled for years to compete with SpaceX and its more experienced Crew Dragon.

Starliner has been docked to the ISS for 63 of the maximum 90 days it can stay, and it is parked at the same port that Crew Dragon will have to use to deliver the upcoming astronaut crew.

Early Tuesday morning, NASA, using a SpaceX rocket and a Northrop Grumman capsule, delivered a routine shipment of food and supplies to the station, including extra clothes for Wilmore and Williams.

Starliner's high-stakes mission is a final test required before NASA can certify the spacecraft for routine astronaut flights to and from the ISS. Crew Dragon received NASA approval for astronaut flights in 2020.

Starliner development has been set back by management issues and numerous engineering problems. It has cost Boeing $1.6 billion since 2016, including $125 million from Starliner's current test mission, securities filings show.


Original Submission #1 | Original Submission #2

posted by hubie on Thursday August 08, @11:42AM

Arthur T Knackerbracket has processed the following story:

Intel will be releasing a microcode update to prevent further damage to crashing 13th- and 14th-generation desktop processors sometime this month if it can stick to its previously announced schedule. This fix should be available via BIOS updates from PC and motherboard makers and from Microsoft as a Windows update. But it will take time for those updates to roll out to users, and Intel has said that processors that are already exhibiting crashes have been permanently damaged and won't be fixed by the microcode update.

In an effort to provide peace of mind to buyers and cover anyone whose CPU is subtly damaged but not showing explicit signs of instability, Intel is extending the warranty on all affected 13th- and 14th-generation CPUs by an additional two years, Tom's Hardware reports. This raises the warranty on a new boxed Intel CPU from three years to five, and it applies to both previously purchased processors and new ones that are bought from today onward. For processors that came installed in pre-built PCs, Intel says users are covered, but that they should reach out to their PC's manufacturer for support. Those customers should only contact Intel directly if they have a problem getting a new CPU from the PC company.

[...] According to Intel, the root cause of the issue was "a microcode algorithm resulting in incorrect voltage requests to the processor," a bug that caused motherboards to supply too much power to a CPU. This resulted in damage to the silicon over time, leading to crashing and instability. The problem was also exacerbated by enthusiast motherboards that didn't stick to Intel's recommended default power and performance settings.

Intel says it is also "investigating options to easily identify affected processors" to help give users peace of mind.
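In the meantime, one generic way to see what a given machine is running is to read the microcode revision the kernel reports and compare it against the fixed revision your motherboard or PC vendor lists. A minimal sketch for Linux (this is a general-purpose check, not an Intel-provided detection tool):

import platform

def microcode_revisions(path: str = "/proc/cpuinfo") -> set[str]:
    """Collect the microcode revision reported for each logical CPU."""
    revisions = set()
    with open(path) as f:
        for line in f:
            if line.startswith("microcode"):
                revisions.add(line.split(":", 1)[1].strip())
    return revisions

if platform.system() == "Linux":
    print("Loaded microcode revision(s):", microcode_revisions() or "unknown")
else:
    print("This check reads /proc/cpuinfo, which is Linux-only.")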


Original Submission

posted by hubie on Thursday August 08, @06:54AM

https://matduggan.com/teaching-to-the-test/

A lot has been written in the last few weeks about the state of IT security in the aftermath of the CrowdStrike outage. A range of opinions has emerged, from blaming Microsoft for signing the CrowdStrike software (who in turn blame the EU for making them do it) to blaming the companies themselves for allowing all of these machines access to the Internet to receive the automatic template update. Bike-shedding among the technical community continues to be focused on the underlying technical deployment, which misses the forest for the trees.

The better question is: what was the forcing mechanism that convinced every corporation in the world that it was a good idea to install software like this on every single machine? Why is there such a cottage industry of companies that are effectively undermining operating system security with the argument that they provide more "advanced" security features, allowing (often unqualified) security and IT departments to make fundamental changes to things like TLS encryption and basic OS functionality? How did all these smart people let a random company push updates to everyone on Earth with zero control? The justification often given is "to pass the audit".

These audits and certifications, of which there are many, are a fundamentally broken practice. The intent of the frameworks was good, allowing for the standardization of good cybersecurity practices while not relying on the expertise of an actual cybersecurity expert to validate the results. We can all acknowledge there aren't enough of those people on Earth to actually audit all the places that need to be audited. The issue is the audits don't actually fix real problems, but instead create busywork for people so it looks like they are fixing problems. It lets people cosplay as security experts without needing to actually understand what the stuff is.

[...] We all know this crap doesn't work and the sooner we can stop pretending it makes a difference, the better. AT&T had every certification on the planet and still didn't take the incredibly basic step of enforcing 2FA on a database of the most sensitive data it has in the world. If following these stupid checklists and purchasing the required software ended up with more secure platforms, I'd say "well at least there is a payoff". But time after time we see the exact same thing: an audit is not an adequate replacement for someone who knows what they are doing looking at your stack and asking hard questions about your process. These audits aren't resulting in organizations doing the hard but necessary step of taking downtime to patch critical flaws or even applying basic security settings across all of their platforms.

[...] I don't know what the solution is, but I know this song and dance isn't working. The world would be better off if organizations stopped wasting so much time and money on these vendor solutions and instead stuck to much more basic solutions. Perhaps if we could just start with "have we patched all the critical CVEs in our organization" and "did we remove the shared username and password from the cloud database with millions of call records", then perhaps AFTER all the actual work is done we can have some fun and inject dangerous software into the most critical parts of our employees devices.


Original Submission

posted by hubie on Thursday August 08, @02:06AM

Arthur T Knackerbracket has processed the following story:

A federal judge declared what many have known for years: Google is a monopoly. Specifically, it has a monopoly over online search and the search text ad market.

If you’ve used Google in the past few years, you know that it sucks. It once felt like magic: a search engine that would quickly take you wherever you needed to go online. But in 2024 all that old magic is gone. Relevant search results are buried under ads and worthless AI answers scraped from Reddit.

This 286-page ruling issued by U.S. District Judge Amit Mehta on Monday helps explain why. Google has no competition in search and doesn’t need to appeal to consumers the way it would in a competitive market. The judgment is the result of a year-long showdown between the U.S. Department of Justice (DOJ) and Google. It’s the biggest antitrust lawsuit since the DOJ went after Microsoft 20 years ago.

[...] If you’ve bought a device or searched the web in the past 15 years then you’ve experienced Google’s market dominance. Google comes pre-loaded onto countless devices and browsers. It’s so ubiquitous that its name has become the common verb for “searching on the internet.”

Google pays billions of dollars every year to make that happen. Around $26 billion in 2021 alone, according to the judgment. “Usually, the amount is calculated as a percentage of the advertisements revenue that Google generates from queries run through the default search access points,” the judgment said.

[...] Basically, Google pays huge sums of cash to every other tech company, including rivals like Apple, to keep itself the default search engine. Being the default engine gives it dominance when serving ads. It uses the ad money to pay to keep itself the default search engine. And on and on and on.

[...] And what would happen if Google stopped paying to be the default engine? A catastrophic loss of revenue.

“In 2020, Google’s internal modeling projected that it would lose between 60-80% of its iOS query volume should it be replaced as the default GSE on Apple devices…which would translate into net revenue losses between $28.2 and $32.7 billion,” the judgment said.

[...] It’s not hard to change your default search engine on a mobile device or PC, exactly. It’s just a pain in the ass. And the more complicated and annoying it is to switch, the more likely a person is to just stick to the default. And the default is Google.

Internal documents quoted in the judgment make it clear that Google knows all this.

“‘Seemingly small friction points in user experiences can have a dramatically disproportionate effect on whether people drop or stick,’” Mehta wrote, quoting a document from Google’s Behavioral Economics Team. “‘You want to think about each step, as small as it might be, and see if there is a way to eliminate it, delay it, simplify it, default it…of the tiny fraction of end users who try to change the default, many will become frustrated and simply leave the default as originally set.’”

[...] So what does all this mean for the user? The fallout from this might take years to materialize. Mehta has scheduled a follow-up hearing for September 6 where he’ll talk about what should be done about Google’s monopoly power. It’s possible he’ll recommend a selloff or restructuring of some kind. He may even tell Google it has to stop paying to be the default search engine on so many devices.

As Google global affairs chief Kent Walker said, Google will appeal. It has an army of lawyers and billions of dollars, everything it needs to drag out its fight with the Justice Department for as long as possible. Billions it earned from its monopoly power.


Original Submission

posted by hubie on Wednesday August 07, @09:18PM

A Japanese company becomes the first to approach a piece of space junk in low-Earth orbit:

There are more than 2,000 mostly intact dead rockets circling the Earth, but until this year, no one had ever launched a satellite to go see what one looked like after many years of tumbling around the planet.

In February, a Japanese company named Astroscale sent a small satellite into low-Earth orbit on top of a Rocket Lab launcher. A couple of months later, Astroscale's ADRAS-J (Active Debris Removal by Astroscale-Japan) spacecraft completed its pursuit of a Japanese rocket stuck in orbit for more than 15 years.

ADRAS-J photographed the upper stage of an H-IIA rocket from a range of several hundred meters and then backed away. This was the first publicly released image of space debris captured from another spacecraft using rendezvous and proximity operations.

Since then, Astroscale has pulled off more complex maneuvers around the H-IIA upper stage, which hasn't been controlled since it deployed a Japanese climate research satellite in January 2009. Astroscale attempted a 360-degree fly-around of the H-IIA rocket last month, but the spacecraft triggered an autonomous abort one-third of the way through the maneuver after detecting an attitude anomaly.

ADRAS-J flew away from the H-IIA rocket for several weeks. After engineers determined the cause of the glitch that triggered the abort, ADRAS-J fired thrusters to approach the upper stage again this month. The ADRAS-J spacecraft is about the size of a kitchen oven, while the H-IIA rocket it's visiting is nearly the size of a city bus.

[...] These types of complex maneuvers, known as rendezvous and proximity operations (RPO), are common for crew and cargo spacecraft around the International Space Station. Other commercial satellites have demonstrated formation-flying and even docking with a spacecraft that wasn't designed to connect with another vehicle in orbit.

Military satellites from the United States, Russia, and China also have RPO capabilities, but as far as we know, these spacecraft have only maneuvered in ultra-close range around so-called "cooperative" objects designed to receive them. In 2003, the Air Force Research Laboratory launched a small satellite named XSS-10 to inspect the upper stage of a Delta II rocket in orbit, but it had a head start. XSS-10 maneuvered around the same rocket that deployed it, rather than pursuing a separate target.

[...] US Space Command said in December that the population of space debris in orbit has increased by 76 percent since 2019 to 44,600 objects. The uptick in space junk is primarily due to debris-generating events, such as anti-satellite tests or occasional explosions. The number of active satellites has also increased to more than 7,000, driven by launches of mega-constellations like SpaceX's Starlink Internet network.

The European Space Agency breaks down the different types of space debris. As of June, ESA reported more than 2,000 intact rocket bodies were orbiting Earth, along with thousands more rocket-related debris fragments. Nearly half of these are in low-Earth orbit, flying at altitudes up to 1,200 miles (2,000 kilometers), where most active satellites are located. Experts have ranked these spent rocket stages as the most dangerous type of space debris because they are large and sometimes retain propellants and electrical energy that can cause explosions well after their missions are complete.


Original Submission

posted by janrinok on Wednesday August 07, @04:34PM
from the piii...iii...i...ii...iinnng dept.

The IETF has published a discussion about how to deal with networking in high-latency situations with occasional interruptions, such as interplanetary space where packet round trip times can make the traditional 3-way and 4-way handshake protocols quite impractical.

It takes some 2.4 to 2.7 seconds to send a signal to the Moon and back. If we are talking about sending a signal to Mars and back, the comparable delays are between 10 and 45 minutes. There is also the factor of extended interruption, such as when an orbiting spacecraft is behind the body it is orbiting. Communications with other planets also face a periodic blackout when the planet aligns with the Sun: every two years or so, the Earth's view of Mars is blocked by the Sun for an interval of around two weeks.

Such protracted Round-Trip Time (RTT) intervals are well beyond what we experience in the everyday Internet, even in the most bizarre of fault scenarios! For an end-to-end reliable protocol, the sender must retain a copy of the sent data until it is acknowledged as received from the other end. We have become used to a network where the RTT intervals are of a few tens of milliseconds, so simple interactions, such as a three-way TCP handshake, or a DNS query and response can happen within the limits of human perception. When such interactions blow out to some 30 minutes or so, is an end-to-end interaction model the right architectural choice?
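For a rough sense of the scale involved, here is a back-of-the-envelope Python sketch (not from the IETF discussion; the distances are approximate round figures) of how long a TCP-style three-way handshake takes when the peer is the Moon or Mars:

C_KM_PER_S = 299_792.458  # speed of light in vacuum, km/s

# Approximate one-way distances in km (illustrative round figures).
TARGETS = {
    "Moon": 384_400,
    "Mars (closest)": 54_600_000,
    "Mars (farthest)": 401_000_000,
}

for name, km in TARGETS.items():
    rtt = 2 * km / C_KM_PER_S  # round-trip time, seconds
    # SYN -> SYN/ACK -> ACK: the initiator can send data after one RTT,
    # so the first data byte lands roughly 1.5 RTTs after the first SYN.
    print(f"{name:>16}: RTT {rtt:9.1f} s ({rtt / 60:6.1f} min); "
          f"first data after ~{1.5 * rtt / 60:6.1f} min")

At the Moon that is a tolerable few seconds; at Mars's farthest distance the handshake alone eats more than an hour before the first byte of payload arrives, and a DNS query-plus-response would cost another full RTT on top of that.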

There are a lot of assumptions which need to be verified or debunked. It is clear that a wholly new digital environment will be needed for deep space.


Original Submission

posted by janrinok on Wednesday August 07, @11:51AM

OpenAI Afraid to Release ChatGPT Detection Tool That Might Piss Off Cheaters:

ChatGPT maker OpenAI has new search and voice features on the way, but it also has a tool at its disposal that's reportedly pretty good at catching all those AI-generated fake articles you see on the internet nowadays. The company has been sitting on it for nearly two years, and all it would have to do is turn it on. All the same, the Sam Altman-led company is still contemplating whether to release it as doing so might anger OpenAI's biggest fans.

This isn't that defunct AI detection algorithm the company released in 2023, but something much more accurate. OpenAI is hesitant to release this AI-detection tool, according to a report from the Wall Street Journal on Sunday based on some anonymous sources from inside the company. The program is effectively an AI watermarking system that imprints AI-generated text with certain patterns its tool can detect. Like other AI detectors, OpenAI's system would score a document with a percentage of how likely it was created with ChatGPT.

OpenAI confirmed this tool exists in an update to a May blog post published Sunday. The program is reportedly 99.9% effective based on internal documents, according to the WSJ. This would be far better than the stated effectiveness of other AI detection software developed over the past two years. The company claimed that while the watermark holds up against local tampering, it can be circumvented by translating the text and retranslating it with something like Google Translate, or by rewording it using another AI generator. OpenAI also said those wishing to circumvent the tool could "insert a special character in between every word and then deleting that character."
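OpenAI has not published how its watermark works, but the description matches a class of schemes from the research literature in which the generator biases its token choices toward a pseudorandom "green list" keyed on the preceding token, and the detector scores the fraction of green tokens. A toy sketch of that general idea (all names and details here are illustrative, not OpenAI's actual method):

import hashlib

def is_green(prev_token: str, token: str) -> bool:
    """Deterministically assign roughly half the vocabulary to a
    'green list' keyed on the previous token."""
    digest = hashlib.sha256(f"{prev_token}|{token}".encode()).digest()
    return digest[0] % 2 == 0

def green_fraction(tokens: list[str]) -> float:
    """Detector side: ordinary text scores near 0.5; text from a generator
    that preferentially picks green tokens scores noticeably higher."""
    pairs = list(zip(tokens, tokens[1:]))
    return sum(is_green(a, b) for a, b in pairs) / max(len(pairs), 1)

# Inserting and then deleting characters between words (one of the evasion
# tricks mentioned above) changes the tokenization, re-keys every hash,
# and pulls the score back toward the ~0.5 baseline of ordinary text.
text = "the quick brown fox jumps over the lazy dog".split()
print(f"green fraction: {green_fraction(text):.2f}")

This also illustrates why translation or rewording defeats detection: the detector keys on the exact token sequence, and any rewrite replaces it wholesale.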

Internal proponents of the program say it will do a lot to help teachers figure out when their students have handed in AI-generated homework. The company reportedly sat on this program for years over concerns that close to a third of its user base wouldn't like it. In an email statement, an OpenAI spokesperson said:

"The text watermarking method we're developing is technically promising, but has important risks we're weighing while we research alternatives, including susceptibility to circumvention by bad actors and the potential to disproportionately impact groups like non-English speakers. We believe the deliberate approach we've taken is necessary given the complexities involved and its likely impact on the broader ecosystem beyond OpenAI."

The other problem for OpenAI is the concern that if it releases its tool broadly enough, somebody could decipher OpenAI's watermarking technique. There is also an issue that it might be biased against non-native English speakers, as we've seen with other AI detectors.


Original Submission

posted by janrinok on Wednesday August 07, @07:02AM
from the good-luck-with-that dept.

Make Bitcoin great again - what Donald Trump's backing of crypto could mean for the industry

On July 27, the former US president and Republican nominee for the upcoming election, Donald Trump, headlined the biggest Bitcoin conference of the year in Nashville. In his speech, Trump claimed he would make the US the "crypto capital of the planet and the Bitcoin superpower of the world" if returned to the White House after November's election.

Bitcoin Price Tanks Hours After Trump Floats Using it as US Reserve Asset

Bitcoin prices plummeted after former President Donald Trump suggested that the cryptocurrency could be used to pay off the country's $35 trillion national debt.

Bitcoin dropped 12 percent in the past 24 hours and ether plunged by 21 percent in the same period. The price of bitcoin has been falling since Friday and briefly dipped below $50,000 on Monday, the first time it had traded at that level since February.

In recent weeks Trump has been attempting to position himself firmly as pro-crypto. Speaking at a bitcoin conference in Nashville, Tennessee on July 27, the Republican presidential candidate unveiled his plans to make the U.S. the "crypto capital of the planet" if he is elected for a second term.

He also spoke about the U.S. creating a "strategic national bitcoin reserve."


Original Submission

posted by martyb on Wednesday August 07, @02:15AM


A dental robotics company claims to have used an AI-controlled robot to perform the first fully autonomous dental procedure on a human patient, heralding a possible new era for dental treatment.

Perceptive, the company behind the robot, claims its system can shave a considerable amount of time off routine procedures. The bot can replace crowns in just 15 minutes, it says, a procedure that takes a human dentist two hours spread across two office visits.

The company says it's tested the device on a patient in Colombia, but has yet to release any peer-reviewed clinical data. As Stat points out, the company will need this data to apply for Food and Drug Administration approval, something that's still around five years away, according to Perceptive CEO Chris Ciriello.

Nonetheless, the company is celebrating the test as a big win.

"We're excited to successfully complete the world's first fully automated robotic dental procedure," said Ciriello in a press release. "This medical breakthrough enhances precision and efficiency of dental procedures, and democratizes access to better dental care, for improved patient experience and clinical outcomes."

The robotic system uses a handheld 3D scanner that captures highly detailed images from beneath the gum line, allowing patients to "clearly visualize their dental conditions."

The AI then comes up with an efficient and precise procedure.

"The robotics system has been designed and rigorously tested to ensure that dentists can perform treatments safely, even in conditions where patient movement is prevalent," said dentist and Perceptive investor Edward Zuckerberg in the press release.

Perceptive has raised a considerable $30 million in funding, including from Zuckerberg — who also happens to be the father of Meta CEO Mark Zuckerberg. It's unclear whether the younger Zuckerberg is involved in the financing of the venture.

Robotic surgery has made big strides over the years, and companies are hoping to leverage AI technologies to bring the tech to the masses. But when that will happen remains an open question, as they still have plenty of regulatory hurdles to overcome.


Original Submission

posted by janrinok on Tuesday August 06, @08:38PM
from the what-could-possibly-go-wrong? dept.

Neuralink successfully implants its chip into a second patient's brain:

Neuralink's brain chip has been implanted into a second patient as part of early human trials, Elon Musk told podcast host Lex Fridman on Saturday. The company hasn't disclosed when the surgery took place or the name of the recipient, according to Reuters.

Musk said 400 of the 1,024 electrodes implanted in the second patient's brain are working. "I don't want to jinx it but it seems to have gone extremely well," he said. "There's a lot of signal, a lot of electrodes. It's working very well."

The device allows patients with spinal cord injuries to play video games, use the internet and control electronic devices using their thoughts alone. In May, the company announced that it was "accepting applications for the second participant" in trials following FDA approval.

The original Neuralink implant patient, Nolan Arbaugh, described the surgery as "super easy." In a demo, the company showed how Arbaugh was able to move a cursor around the screen of a laptop, pause an on-screen music device and play chess and Civilization VI.

Arbaugh himself participated in the marathon podcast with Musk and Fridman. He said that the device allows him to make anything happen on a computer screen just by thinking it, helping reduce his reliance on caregivers.

However, problems cropped up shortly after his surgery when some of the electrodes retracted from his brain. The issue was partly rectified later on by modifying the algorithm to make the implants more sensitive. Neuralink told the FDA that in a second procedure it would place the implant's threads deeper into the patient's brain to prevent them from moving as much as they did in Arbaugh's case.

[...] the company said it had over 1,000 volunteers for its second surgical trial. Musk said he expects Neuralink to implant its chips in up to eight more patients by the end of 2024.


Original Submission

posted by janrinok on Tuesday August 06, @03:52PM
from the time-marches-on dept.

Our resident shy submitter offers the following:

A nicely organized blog post at https://www.construction-physics.com/p/what-would-it-take-to-recreate-bell reviews the history of Bell Labs (going back into the late 1800s). It ends with a section that wonders if re-creating a research monster like Bell Labs (peak employment = 25,000 people) is possible today...or even needed. A sample from the middle:

Though it had many successes in the first 25 years of its life, the crowning achievement of Bell Labs research (and its strategy of leveraging early-stage scientific research to create new products) is undoubtedly its development of the transistor, along with its various derivatives (the MOSFET, the solar PV cell) and associated manufacturing technologies (including crystal pulling, zone melting, and diffusion furnaces). The transistor is a classic case of Bell Labs' strategy: wide research freedom, circumscribed by the requirement to produce things useful for the Bell System. The telephone network required enormous amounts of vacuum tubes [Bell Labs developed de Forest's tube into a useful amplifier] and mechanical relays to act as switches, but these were far from ideal components. ... Mervin Kelly, physicist and head of the Bell Labs vacuum tube department in the early 1930s (and later the president of Bell Labs), dreamed of replacing them with solid-state components with no moving parts. Advances in quantum mechanics, and novel materials known as semiconductors, suggested that such components might be possible.

Bell Labs had studied semiconductors since the early 1930s; Walter Brattain, who would eventually share the Nobel Prize for inventing the transistor, was hired in 1929 and had begun to study an early semiconductor device called the copper oxide rectifier. A Depression hiring freeze stymied more serious semiconductor efforts until 1936, when Mervin Kelly (now Bell Labs' director of research) was finally able to start building a more robust solid-state physics department and hired physicist William Shockley (the second of the three transistor inventors). While not giving Shockley any specific research tasks (indeed, the entire solid-state group had "unprecedented liberty to follow their own research noses as long as their work dovetailed with general company goals"), Kelly emphasized to Shockley the potential value of a solid-state component to replace tubes and mechanical relays.

The solid-state physicists continued their research over the next several years, studying the behavior of semiconductors and attempting to create a semiconductor amplifier. This research was interrupted by the war but resumed in 1945, the same year physicist John Bardeen was hired. Bardeen proved to be the catalyst the solid-state group needed, and over the next several years Bardeen, Brattain, and Shockley made progress in understanding semiconductor behavior. In December 1947, they unveiled their semiconductor amplifier: the transistor. By 1950, Western Electric was making 100 transistors a month for use in Bell System equipment. A few years later, in 1954, another Bell Labs solid-state research effort yielded the world's first silicon solar PV cell.

One of the kids in my neighborhood (1960s) became a physicist and worked at Bell Labs for years--seemed to really like it there. Anyone else have a connection to tell about?


Original Submission

posted by hubie on Tuesday August 06, @10:50AM
from the do-not-want dept.

Wired is running a story https://www.wired.com/story/cars-are-now-rolling-computers-so-how-long-will-they-get-updates-automakers-cant-say/ or https://archive.is/nAMkd about broken updates for in-car software. Starts out with a VW story:

In 2022, Jake Brown...bought a used 2017 Volkswagen Passat from a local dealership.
[...]
Brown had heard that some Volkswagens were having trouble with connected features for a reason that, because of Brown's telecommunications background, felt familiar to him: AT&T, which Volkswagen worked with to provide connectivity to the automaker's vehicles, had "sunsetted" its 3G service that year.
[...]
The 3G sunset left drivers of some Volkswagens, including a handful of models built between 2014 and 2019, unable to access Volkswagen's Car-Net service. Car-Net includes remote start, but also automated service notifications, emergency assistance, antitheft alerts, and remote automatic crash notifications, among other network-enabled features.

TFA goes on to say that several other manufacturers have the same problem--cars built with 3G connectivity have been timed out by the telcos--completely beyond the control of the car manufacturer. A query by Wired to VW wasn't all that helpful--it seems there are already lawsuits about this and no solution in sight, several years after 3G service ended. They do mention that VW expects 4G (currently shipping in their cars) to last to 2035 [but that may be wishful thinking--submitter].

TFA also includes a discussion of expected service life of phones (a few years, updates for seven years) vs cars which average over 12 years (USA), with many at 20+.

Your AC wonders why there isn't an option to ignore the built-in wireless modem and either: swap out the modem module, or connect the car (through USB or other cable) to a current phone [but this may have security problems?] Personally I'm not interested in any of these connected cars--and this is just another nail in the coffin for me.


Original Submission

posted by mrpg on Tuesday August 06, @06:03AM
from the subzero dept.

Arthur T Knackerbracket has processed the following story:


Ice is far more complex than most people realize, with science identifying over 20 different varieties formed under various combinations of pressure and temperature. The type we use to chill our drinks, known as ice I, is one of the few forms that occur naturally on Earth. Recently, researchers from Japan discovered another type: ice 0, an unusual form of ice that can initiate the formation of ice crystals in supercooled water.

The formation of ice near the surface of liquid water can start from tiny crystal precursors with a structure similar to a rare type of ice, known as ice 0. In a study recently published in Nature Communications, researchers from the Social Cooperation Research Department “Frost Protection Science,” at the Institute of Industrial Science, The University of Tokyo showed that these ice 0-like structures can cause a water droplet to freeze near its surface rather than at its core. This discovery resolves a longstanding puzzle and could help redefine our understanding of how ice forms.

Crystallization of ice, known as ice nucleation, usually happens heterogeneously, or in other words, at a solid surface. This is normally expected to happen at the surface of the water’s container, where liquid meets solid. However, this new research shows that ice crystallization can also occur just below the water’s surface, where it meets the air. Here, the ice nucleates around small precursors with the same characteristic ring-shaped structure as ice 0.

Reference: “Surface-induced water crystallisation driven by precursors formed in negative pressure regions” by Gang Sun, and Hajime Tanaka, 26 July 2024, Nature Communications.
  DOI: 10.1038/s41467-024-50188-1


Original Submission

posted by mrpg on Tuesday August 06, @01:18AM
from the cows-beware dept.

Arthur T Knackerbracket has processed the following story:

To address the climate crisis effectively, immediate action on methane emissions is essential. Methane has contributed about half the global warming we’ve experienced so far, and emissions are climbing rapidly. An international team of climate researchers writing today (July 30) in Frontiers in Science set out three imperatives to cut methane emissions and share a new tool to help us find the most cost-effective ways of doing so.

“The world has been rightly focused on carbon dioxide, which is the largest driver of climate change to date,” said Professor Drew Shindell of Duke University, lead author. “Methane seemed like something we could leave for later, but the world has warmed very rapidly over the past couple of decades, while we’ve failed to reduce our CO2 emissions. So that leaves us more desperate for ways to reduce the rate of warming rapidly, which methane can do.”

Methane is the second most potent greenhouse gas, but only about 2% of global climate finance goes towards cutting methane emissions. These emissions are also rising fast, due to a combination of emissions from fossil fuel production and increased emissions from wetlands, driven by the climate crisis. To slow the damage from climate change and make it possible to keep global warming below 2°C, we need to act immediately, following the Global Methane Pledge to reduce methane emissions by 30% from their 2020 level by 2030.

[...] Methane doesn’t accumulate in the atmosphere in the long term, so emissions reductions take effect more quickly. If we could cut all methane emissions tomorrow, in 30 years more than 90% of accumulated methane—but only around 25% of carbon dioxide—would have left the atmosphere.
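The methane side of that claim is easy to sanity-check with simple exponential decay, using the commonly cited atmospheric lifetime of roughly 12 years (an approximation, not a figure from the paper; CO2 removal happens on multiple timescales, which is why no single lifetime applies to it and only ~25% leaves in the same period):

import math

def fraction_removed(years: float, lifetime_years: float) -> float:
    """Fraction of a one-off pulse removed after `years`, assuming simple
    exponential decay with the given e-folding lifetime."""
    return 1 - math.exp(-years / lifetime_years)

# A ~12-year lifetime gives 1 - e^(-30/12) ~= 0.92, i.e. "more than 90%"
# of a methane pulse gone after 30 years, matching the article's figure.
print(f"{fraction_removed(30, 12):.0%}")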

Reference: “The methane imperative” 30 July 2024, Frontiers in Science.
  DOI: 10.3389/fsci.2024.1349770


Original Submission