The next Meta Quest headset, planned for launch this year, will be thinner, twice as powerful, and slightly more expensive than the Quest 2. That's according to a leaked internal hardware roadmap presentation obtained by The Verge that also includes plans for high-end, smartband-controlled, ad-supported AR glasses by 2027.
The "Quest 3" will also include a new "Smart Guardian" system that lets users walk around safely in "mixed reality," according to the presentation. That will come ahead of a more "accessible" headset, codenamed Ventura, which is planned for a release in 2024 at "the most attractive price point in the VR consumer market."
That Ventura description brings to mind John Carmack's October Meta Connect keynote, in which he highlighted his push for a "super cheap, super lightweight headset" targeting "$250 and 250 grams." Carmack complained that Meta is "not building that headset today, but I keep trying." Months later, Carmack announced he was leaving the company, complaining that he was "evidently not persuasive enough" to change the company for the better.
Related:
John Carmack's 'Different Path' to Artificial General Intelligence
John Carmack Steps Out of Meta's VR Mess
The Low-Cost VR Honeymoon Is Over
The First "Meta Store" is Opening in California in May
John Carmack Issues Some Words of Warning for Meta and its Metaverse Plans
Meta Removing Facebook Login Requirement for Quest Headsets by Next Year
Last week, Microsoft researchers announced an experimental framework to control robots and drones using the language abilities of ChatGPT, a popular AI language model created by OpenAI. Using natural language commands, ChatGPT can write special code that controls robot movements. A human then views the results and adjusts as necessary until the task gets completed successfully.
The research arrived in a paper titled "ChatGPT for Robotics: Design Principles and Model Abilities," authored by Sai Vemprala, Rogerio Bonatti, Arthur Bucker, and Ashish Kapoor of the Microsoft Autonomous Systems and Robotics Group.
In a demonstration video, Microsoft shows robots—apparently controlled by code written by ChatGPT while following human instructions—using a robot arm to arrange blocks into a Microsoft logo, flying a drone to inspect the contents of a shelf, or finding objects using a robot with vision capabilities.
To get ChatGPT to interface with robotics, the researchers taught ChatGPT a custom robotics API. When given instructions like "pick up the ball," ChatGPT can generate robotics control code just as it would write a poem or complete an essay. After a human inspects and edits the code for accuracy and safety, the human operator can execute the task and evaluate its performance.
In this way, ChatGPT accelerates robotic control programming, but it's not an autonomous system. "We emphasize that the use of ChatGPT for robotics is not a fully automated process," reads the paper, "but rather acts as a tool to augment human capacity."
Related:
Next Up For AI Chatbots: It's All About The APIs
Sci-Fi Becomes Real as Renowned Magazine Closes Submissions Due to AI Writers
Amid ChatGPT Outcry, Some Teachers are Inviting AI to Class
Microsoft Limits Bing A.I. Chats After the Chatbot Had Some Unsettling Conversations
Bing's AI-Based Chat Learns Denial and Gaslighting
A Watermark for Chatbots can Expose Text Written by an AI
Google is Scrambling to Catch Up to Bing, of All Things
LLM ChatGPT Might Change the World, but Not in a Good Way
Erasing Authors, Google and Bing's AI Bots Endanger Open Web
Alphabet Stock Price Drops After Google Bard Launch Blunder
An AI 'Engineer' Has Now Designed 100 Chips
ChatGPT Can't be Credited as an Author, Says World's Largest Academic Publisher
90% of Online Content Could be 'Generated by AI by 2025,' Expert Says
Seattle Public Schools Bans ChatGPT; District 'Requires Original Thought and Work From Students'
Getty Images Targets AI Firm For 'Copying' Photos
Controversy Erupts Over Non-consensual AI Mental Health Experiment
Microsoft's New AI Can Simulate Anyone's Voice With Three Seconds of Audio
AI Everything, Everywhere
Microsoft, GitHub, and OpenAI Sued for $9B in Damages Over Piracy
Adobe Stock Begins Selling AI-Generated Artwork
AI Systems Can't Patent Inventions, US Federal Circuit Court Confirms
We've all been there. You made a promise you couldn't keep. Or something came up, and you didn't follow through on what you said you'd do.
It turns out children pay attention to what we say when we don't deliver.
A new study shows that by the time they reach preschool, kids understand that some reasons for reneging are more defensible than others.
"At 3 to 5 years old, kids are on to you. They know when you're giving a bad excuse," said first author Leon Li, who did the research with developmental psychologist Michael Tomasello as part of his Ph.D. in psychology and neuroscience at Duke.
[...] No matter what the excuse (or lack thereof), the children agreed that it was generally wrong to break a promise. But they were more understanding when the puppets offered a good excuse (e.g., they had to help someone) than a lame one (e.g., they just wanted to do something fun instead).
In other words, children this age grasp that obligations to help others take priority over selfish desires, Li said.
The children's responses also revealed that a lame excuse was just as bad as none at all.
"Previous research has suggested that in some cases, young kids will just take any reason to be better than no reason at all," Li said. "But here we showed that kids do pay attention to the actual content."
[...] Li said the findings are also relevant to any adult who has uttered the classic fallback phrase, "Because I said so."
"Kids are paying attention and can tell that is a lame reason," Li said.
Journal Reference:
"Young Children Judge Defection Less Negatively When There's a Good Justification," Leon Li, Aren Tucker, Michael Tomasello. Cognitive Development, Nov. 3, 2022. DOI: 10.1016/j.cogdev.2022.101268
It's been six months since Blue Origin's New Shepard failed during launch:
It's been six months since Blue Origin's New Shepard failed during launch, yet virtually nothing is known about the anomaly or when the rocket might fly again.
Blue Origin's chief architect, Gary Lai, provided an update Tuesday on the investigation into the failed launch of the company's New Shepard rocket in September of last year. Troublingly, it's what he didn't say about the ongoing investigation that's giving us cause for concern.
I'd like to be able to tell you the reason for the September 12 launch failure and when Blue Origin's suborbital rocket will fly again, but I can't.
"We are investigating that anomaly now, the cause of it," Lai told reporters after completing his talk at the Next-Generation Suborbital Researchers Conference being held in Broomfield, Colorado, SpaceNews reports. "We will get to the bottom of it." To which he added: "I can't talk about specific timelines or plans for when we will resolve that situation other than to say that we fully intend to be back in business as soon as we are ready."
No one was injured during the failed NS-23 mission, which took off from Launch Site One in West Texas. The uncrewed New Shepard was carrying scientific instruments to suborbital space, but the rocket never reached its target. Something happened 65 seconds into the launch that caused New Shepard's abort system to engage, jettisoning the capsule away from the failing, fiery booster. The capsule performed a parachute-assisted landing, but the booster, instead of performing its usual vertical landing, was destroyed after crashing onto the surface.
The Federal Aviation Administration immediately stepped in, launching an investigation and grounding the New Shepard rocket. The FAA said it would "determine whether any system, process, or procedure related to the mishap affected public safety." Also chiming in was Don Beyer, chair of the House Committee on Science, Space, and Technology's Subcommittee on Space and Aeronautics, who in an emailed statement said: "I take our oversight role in this area very seriously." This is all fine and well, but it's now six months later and we're still waiting to learn more.
Blue Origin primarily uses the reusable New Shepard rocket to shuttle passengers to suborbital space, in which the capsule gets no higher than around 60 miles (100 kilometers). Since launching its space tourism service in July 2021, the company has sent 31 people to the edge of space, including Bezos and Lai. Blue Origin has been tight-lipped about how much it charges for these short trips to space, but some passengers claim to have shelled out as much as $30 million.
[...] Lai shared no details about the rocket's BE-3 engine and whether it had anything to do with the launch failure. A problem with this engine would be very bad, not just for New Shepard but also for Blue Origin's upcoming New Glenn rocket, the second stage of which uses a modified version of the engine known as the BE-3U. New Glenn was supposed to launch in 2020, but the current plan is for the launch vehicle to finally take flight later this year. NASA recently signed a contract with Blue Origin, in which New Glenn is slated to launch the space agency's ESCAPADE mission to Mars.
Forthcoming update of C++ will include a standard library module named std:
C++ 23, a planned upgrade to the popular programming language, is now feature-complete, with capabilities such as standard library module support. On the horizon is a subsequent release, dubbed C++ 26.
The ISO C++ Committee completed technical work on the C++ 23 specification in early February and is producing a final document for a draft approval ballot, committee chair Herb Sutter said in a February 13 blog post. The standard library module is expected to speed up compilation.
Other features slated for C++ 23 include simplifying implicit move, fixing temporaries in range-for loops, multidimensional and static operator[], and Unicode improvements. Also featured is static constexpr in constexpr functions. The full list of features can be found at cppreference.com.
Many features of C++ 23 already have been implemented in major compilers and libraries, Sutter said. A planned C++ 26 release of the language, meanwhile, is slated to emphasize concurrency and parallelism. Stackful coroutines also are slotted for C++ 26, according to a February 20 blog post by ISO C++ committee member Antony Polukhin.
https://www.schneier.com/blog/archives/2023/02/banning-tiktok.html
Congress is currently debating bills that would ban TikTok in the United States. We are here as technologists to tell you that this is a terrible idea and the side effects would be intolerable. Details matter. There are several ways Congress might ban TikTok, each with different efficacies and side effects. In the end, all the effective ones would destroy the free Internet as we know it:
There's no doubt that TikTok and ByteDance, the company that owns it, are shady. They, like most large corporations in China, operate at the pleasure of the Chinese government. They collect extreme levels of information about users. But they're not alone: Many apps you use do the same, including Facebook and Instagram, along with seemingly innocuous apps that have no need for the data. Your data is bought and sold by data brokers you've never heard of who have few scruples about where the data ends up. They have digital dossiers on most people in the United States.
If we want to address the real problem, we need to enact serious privacy laws, not security theater, to stop our data from being collected, analyzed, and sold—by anyone. Such laws would protect us in the long term, and not just from the app of the week. They would also prevent data breaches and ransomware attacks from spilling our data out into the digital underworld, including hacker message boards and chat servers, hostile state actors, and outside hacker groups. And, most importantly, they would be compatible with our bedrock values of free speech and commerce, which Congress's current strategies are not.
The essay goes on to list reasons why a TikTok ban by Congress would be ineffective, pointing out:
Right now, there's nothing to stop Americans' data from ending up overseas. We've seen plenty of instances—from Zoom to Clubhouse to others—where data about Americans collected by US companies ends up in China, not by accident but because of how those companies managed their data. And the Chinese government regularly steals data from US organizations for its own use: Equifax, Marriott Hotels, and the Office of Personnel Management are examples.
If we want to get serious about protecting national security, we have to get serious about data privacy. Today, data surveillance is the business model of the Internet. Our personal lives have turned into data; it's not possible to block it at our national borders. Our data has no nationality, no cost to copy, and, currently, little legal protection. Like water, it finds every crack and flows to every low place. TikTok won't be the last app or service from abroad that becomes popular, and it is distressingly ordinary in terms of how much it spies on us. Personal privacy is now a matter of national security. That needs to be part of any debate about banning TikTok.
As layoffs ravage the tech industry, algorithms once used to help hire could now be deciding who gets cut:
Days after mass layoffs trimmed 12,000 jobs at Google, hundreds of former employees flocked to an online chatroom to commiserate about the seemingly erratic way they had suddenly been made redundant.
They swapped theories on how management had decided who got cut. Could a "mindless algorithm carefully designed not to violate any laws" have chosen who got the ax, one person wondered in a Discord post that The Washington Post could not independently verify.
Google says there was "no algorithm involved" in its job-cut decisions. But former employees are not wrong to wonder, as a fleet of artificial intelligence tools become ingrained in office life. Human resources managers use machine learning software to analyze millions of employment-related data points, churning out recommendations of whom to interview, hire, promote or help retain.
[...] A January survey of 300 human resources leaders at U.S. companies revealed that 98 percent of them say software and algorithms will help them make layoff decisions this year. And as companies lay off large swaths of people — with cuts creeping into the five digits — it's hard for humans to execute alone.
[...] These same tools can help in layoffs. "They suddenly are just being used differently," [Harvard Business School professor Joseph] Fuller added, "because that's the place where people have ... a real ... inventory of skills."
Originally spotted on The Eponymous Pickle.
Students say they are getting 'screwed over' for sticking to the rules. Professors say students are acting like 'tyrants.' Then came ChatGPT:
When it was time for Sam Beyda, then a freshman at Columbia University, to take his Calculus I midterm, the professor told students they had 90 minutes.
But the exam would be administered online. And even though every student was expected to take it alone, in their dorms or apartments or at the library, it wouldn't be proctored. And they had 24 hours to turn it in.
"Anyone who hears that knows it's a free-for-all," Beyda told me.
[...] For decades, campus standards have been plummeting. The hallowed, ivy-draped buildings, the stately quads, the timeless Latin mottos—all that tradition and honor have been slipping away. That's an old story. Then Covid struck and all bets were off. With college kids doing college from their bedrooms and smartphones, and with the explosion of new technology, cheating became not just easy but practically unavoidable. "Cheating is rampant," a Princeton senior told me. "Since Covid there's been an increasing trend toward grade inflation, cheating, and ultimately, academic mediocrity."
Now that students are back on campus, colleges are having a hard time putting the genie back in the bottle. Remote testing combined with an array of tech tools—exam helpers like Chegg, Course Hero, Quizlet, and Coursera; messaging apps like GroupMe and WhatsApp; Dropbox folders containing course material from years past; and most recently, ChatGPT, the AI that can write essays—have permanently transformed the student experience.
[...] On January 2, a Princeton University computer science major named Edward Tian—who may be the most hated man on campus—tweeted: "I spent New Years building GPTZero—an app that can quickly and efficiently detect whether an essay is ChatGPT or human written."
So now it's nerd vs. nerd, and one of the nerds is going to win—probably whoever gets more venture funding. Everything is up in the air.
Interesting Engineering reports that Ford's latest patent enables vehicles to repossess themselves and drive away.
Ford's new patent describes technology that would let banks confiscate future Ford vehicles if the owner repeatedly misses payments. According to media reports, the American automaker has filed a patent application titled "System & Methods to Repossess a Vehicle."
According to previously released patent filings, the new system, which can disable one or more vehicle functions, could be installed in any Ford vehicle. The filing claims that everything in the car, including the air conditioning and the engine, can be turned off. It goes on to state that, with the new system in place, autonomous or semi-autonomous vehicles could be moved from their initial location to a secondary location, making it easier for them to be towed.
Either to the agency or to the scrap yard
According to reports, the vehicle may be ordered to drive either to the agency handling the repossession or directly to the scrap yard, depending on the financial viability of the process. Although the specifics are unknown, the sources indicate that a "repossession computer" might be installed in all upcoming cars to enable the system to work properly. Furthermore, the filing specifies that no additional hardware is required for the new technology to function. The article also states that the system will issue several warnings before starting a repossession.
Researchers working on plan to neutralise reach of network developed by billionaire Elon Musk:
Researchers say China plans to build a huge satellite network in near-Earth orbit to provide internet services to users around the world — and to stifle Elon Musk's Starlink.
The project has the code name "GW", according to a team led by associate professor Xu Can with the People's Liberation Army's (PLA) Space Engineering University in Beijing. But what these letters stand for is unclear.
The GW constellation will include 12,992 satellites owned by the newly established China Satellite Network Group Co, Xu and his colleagues said in a paper about anti-Starlink measures published in the Chinese journal Command Control and Simulation on Feb 15.
[...] Xu's team said the GW satellite constellation was likely to be deployed quickly, "before the completion of Starlink". This would "ensure that our country has a place in low orbit and prevent the Starlink constellation from excessively pre-empting low-orbit resources", they wrote.
The Chinese satellites could also be placed in "orbits where the Starlink constellation has not yet reached", the researchers said, adding that they would "gain opportunities and advantages at other orbital altitudes, and even suppress Starlink".
The Chinese satellites could be equipped with an anti-Starlink payload to carry out various missions, such as conducting "close-range, long-term surveillance of Starlink satellites", they said.
A recent study by the China National Space Administration called for cooperation and said competing communication satellite networks could harm each other.
[...] "The Starlink satellites may use their orbital manoeuvrability to actively hit and destroy nearby targets in space," the researchers said.
[...] Xu's team said the Chinese government could also cooperate with other governments to form an anti-Starlink coalition and "demand that SpaceX publish the precise orbiting data of Starlink satellites".
They added that new weapons, including lasers and high-power microwaves, would be developed and used to destroy Starlink satellites that pass over China or other sensitive regions.
It looks like ChatGPT learns from the questions you pose it.
That, at least, is the conclusion one could draw from a couple of enterprise bans of the tool.
The first out of the gate was Amazon, whose analysis of ChatGPT's output appeared to show confidential company information. As a company lawyer put it,
"... your inputs may be used as training data for a further iteration of ChatGPT, and we wouldn't want its output to include or resemble our confidential information (and I've already seen instances where its output closely matches existing material)."
The second big announcement came from JPMorgan, the largest US bank. Last week it followed in Amazon's footsteps, giving no explanation beyond this being normal procedure for third-party tools. That explanation smells a bit dubious, unless the use of Google or any other public search engine is forbidden there too.
That was on February 22. Two days later, Bank of America, Goldman Sachs, and other Wall Street banks followed suit.
Maybe, because people have the impression of chatting with a real person, they tend to share more gossip and secret info too?
Study finds helping others reduces focus on your own symptoms:
People suffering from symptoms of depression or anxiety may help heal themselves by doing good deeds for others, new research shows.
The study found that performing acts of kindness led to improvements not seen in two other therapeutic techniques used to treat depression or anxiety.
Most importantly, the acts of kindness technique was the only intervention tested that helped people feel more connected to others, said study co-author David Cregg, who led the work as part of his PhD dissertation in psychology at The Ohio State University.
[...] "We often think that people with depression have enough to deal with, so we don't want to burden them by asking them to help others. But these results run counter to that," said study co-author Jennifer Cheavens.
"Doing nice things for people and focusing on the needs of others may actually help people with depression and anxiety feel better about themselves."
[...] After an introductory session, the participants were split into three groups. Two of the groups were assigned to techniques often used in cognitive behavioral therapy (CBT) for depression: planning social activities or cognitive reappraisal.
[...] Members of the third group were instructed to perform three acts of kindness a day for two days out of the week. Acts of kindness were defined as "big or small acts that benefit others or make others happy, typically at some cost to you in terms of time or resources."
Some of the acts of kindness that participants later said they did included baking cookies for friends, offering to give a friend a ride, and leaving sticky notes for roommates with words of encouragement.
[...] The findings showed that participants in all three groups showed an increase in life satisfaction and a reduction of depression and anxiety symptoms after the 10 weeks of the study.
"These results are encouraging because they suggest that all three study interventions are effective at reducing distress and improving satisfaction," Cregg said.
"But acts of kindness still showed an advantage over both social activities and cognitive reappraisal by making people feel more connected to other people, which is an important part of well-being," he said.
[...] "There's something specific about performing acts of kindness that makes people feel connected to others. It's not enough to just be around other people, participating in social activities," she said.
[...] "Not everyone who could benefit from psychotherapy has the opportunity to get that treatment," she said. "But we found that a relatively simple, one-time training had real effects on reducing depression and anxiety symptoms."
Journal Reference:
"Healing Through Helping: An Experimental Investigation of Kindness, Social Activities, and Reappraisal as Well-Being Interventions" [open], David R. Cregg and Jennifer S. Cheavens. The Journal of Positive Psychology, 2022. DOI: 10.1080/17439760.2022.2154695
Waymo is starting driverless taxi tests in Los Angeles:
Late last year, Waymo secured a Driverless Pilot permit from the state of California, bringing the Alphabet-owned brand one step closer to launching its autonomous taxi service in the state. Now, Waymo is already expanding its service area, announcing plans to begin testing driverless cars in Los Angeles. The company tells Engadget that the test will mark the first time that fully autonomous cars will roam the streets of LA, and that thanks to successful tests in San Francisco, it's been able to roll out autonomous drivers in new cities with "little-to-no on-board engineering work."
That doesn't mean the company is ready to launch its Waymo One taxi service in California, however. The LA test will likely follow the same course as Waymo's fleet in San Francisco: a limited number of vehicles only available to riders in the Waymo Research Trusted Tester program. Waymo didn't have any details to share regarding when the full driverless taxi service will be available to customers in Los Angeles, but it probably hinges on the California Public Utilities Commission (CPUC) issuing the firm a Driverless Deployment permit. Until it can clear that final legal hurdle, Waymo's paid taxi service will remain exclusive to Phoenix, AZ. So far, GM's Cruise is the only company permitted to charge for driverless rides in the state, and only so long as those rides take place during daylight hours.
[...] Waymo didn't give any specific dates for when the test will begin, but noted that its 5th-generation Jaguar I-Pace cars will start rider-only testing in Santa Monica, and only outside of rush hour. Then, the program will expand in accordance with Waymo's safety framework before eventually launching to consumers. Oh, and in case you were worried that the cars might make LA traffic even worse, the company promises that it's continuously updating its self-driving software to avoid stalling traffic, as one stopped Waymo vehicle recently did in San Francisco.
Samsung commits $230B for five new chip plants in South Korea:
Samsung Electronics said today that it plans to invest approximately $230 billion (300 trillion won) to build five new memory and foundry fabs in South Korea — a big move in line with the government's ambitious aim to set up a mega semiconductor hub in Yongin, on the outskirts of Seoul. The investments will be made through 2042.
The move indicates that South Korea is shoring up its domestic semiconductor production line to secure its supply chain, as other countries, including the U.S., Taiwan, Japan, and China, scramble to ramp up their own chip manufacturing to offset the risk of global supply chain disruption amid rising tensions between the U.S. and China.
"It is expected that we would invest about 300 trillion KRW ($230 billion) in the chip-making cluster through 2042," a spokesperson at Samsung said in an emailed statement to TechCrunch. Although the government, in a statement, spoke of plans for five plants, the Samsung spokesperson declined to comment on the number of plants Samsung will set up in the semiconductor cluster as well as other details.
[...] Samsung already operates a foundry chip facility in Austin, Texas, and it has recently announced additional investment plans for the U.S.: $17 billion earmarked to build a manufacturing facility in Taylor, Texas. In addition, it is also considering investing $200 billion to set up a further 11 chip plants in Texas.
Scientists have decoded the physical process that takes place in the mouth when chocolate is eaten, as it changes from a solid into a smooth emulsion that many people find totally irresistible:
By analysing each of the steps, the interdisciplinary research team from the School of Food Science and Nutrition and the School of Mechanical Engineering at the University of Leeds hope it will lead to the development of a new generation of luxury chocolate that will have the same feel and texture but will be healthier to consume.
During the moments it is in the mouth, the chocolate sensation arises from the way the chocolate is lubricated, either from ingredients in the chocolate itself or from saliva or a combination of the two.
Fat plays a key function almost immediately when a piece of chocolate is in contact with the tongue. After that, solid cocoa particles are released and they become important in terms of the tactile sensation, so fat deeper inside the chocolate plays a rather limited role and could be reduced without having an impact on the feel or sensation of chocolate.
[...] "If a chocolate has 5% fat or 50% fat it will still form droplets in the mouth and that gives you the chocolate sensation. However, it is the location of the fat in the make-up of the chocolate which matters in each stage of lubrication, and that has been rarely researched.
"We are showing that the fat layer needs to be on the outer layer of the chocolate, this matters the most, followed by effective coating of the cocoa particles by fat, these help to make chocolate feel so good."
[...] "Our research opens the possibility that manufacturers can intelligently design dark chocolate to reduce the overall fat content.
"We believe dark chocolate can be produced in a gradient-layered architecture with fat covering the surface of chocolates and particles to offer the sought after self-indulging experience without adding too much fat inside the body of the chocolate."
[...] The researchers believe the physical techniques used in the study could be applied to the investigation of other foodstuffs that undergo a phase change, where a substance is transformed from a solid to a liquid, such as ice-cream, margarine or cheese.
Journal Reference:
Siavash Soltanahmadi, Michael Bryant, and Anwesha Sarkar, Insights into the Multiscale Lubrication Mechanism of Edible Phase Change Materials [open], ACS Appl. Mater. Interfaces 2023, 15, 3, 3699–3712. DOI: https://doi.org/10.1021/acsami.2c13017