The company's CEO claims that affordable and reliable vehicles with combustion engines are a priority for US buyers:
Mazda is late to the electrification party. The MX-30 is far from being the roaring success the Japanese automaker had hoped it would be. It was axed from the United States at the end of the 2023 model year due to poor sales. The range-extending version with a rotary engine is only offered in certain markets, and the US is not on the list. In addition, the EZ-6 electric sedan isn't coming here either. However, the situation isn't all that bad.
Why? Because Americans primarily want gas cars. Speaking with Automotive News, Mazda CEO Masahiro Moro said ICE has a long future in America. Even at the end of the decade, traditional gas cars and mild-hybrid models will make up about two-thirds of annual sales. Plug-in hybrids and EVs will represent the remaining third. In other words, most vehicles will still have a gas engine five years from now.
Mazda's head honcho primarily referred to entry-level models, specifically the 3 and CX-30. Moro believes EV growth in the US has slowed over the last 18 months or so, adding that the trend will likely continue for the foreseeable future. That buys the company more time to develop a lithium-ion battery entirely in-house. The goal is to have it ready by 2030 for plug-in hybrids and purely electric cars. Expect a much higher energy density and "very short" charging times. Interestingly, the engineers already have a "very advanced research base for solid-state batteries."
In the meantime, work is underway on a two-rotor gas engine that will serve as a generator.
The text of a talk Stephen Fry gave on Thursday 12th September as the inaugural "Living Well With Technology" lecture for King's College London's Digital Futures Institute.
He talks about AI - or, as he says, Ai.
As a well-known media personality/celebrity with a track record of making outstandingly wrong predictions:
https://stephenfry.substack.com/p/ai-a-means-to-an-end-or-a-means-to
I would be asked to address delegates and attendees on the subject of a new microblogging service that had only recently poked its timorous head up in the digital world like a delicate flower but was already twisting and winding itself round the culture like vigorous bindweed. Twitter it was called. I had joined early and my name seemed permanently associated with it. What an evangel I was. Web 2.0, the user-generated web, was going great guns at this point. Tick off the years. 2003 MySpace began. 2004 Facebook launched. 2005 YouTube. 2006 Twitter. 2007 the iPhone. 2008 the App Store and later that year, Android and then Instagram. Bliss was it in that dawn, etc. etc. I confidently predicted that this new kind of citizen-led computer and internet use would help build a brave and beautiful new world. "Local and global rivalries will dissolve," I said. "Tribal hatreds will melt away. Surely," I cried, "Twitter and Facebook and this new world of 'social media' will usher in an age of universal brotherhood and amity."
...reading his views on AI could be amusing and enlightening. Or maybe not.
You can write a coruscating critique in the comments. Or not. As is your wont.
OECD = Organisation for Economic Co-operation and Development
It's an annual event now: reports proclaiming the decline of humanity and how schoolchildren are getting worse and worse at basic tasks. It turns out things aren't much better among the adult population. Literacy and numeracy are declining among them too.
Literacy and numeracy skills among adults have largely declined or stagnated over the past decade in most OECD countries, according to the second OECD Survey of Adult Skills. Declines have been even larger and more widespread among low-educated adults.
Finland, Japan, the Netherlands, Norway and Sweden are the best-performing countries in all three domains. Eleven countries (Chile, Croatia, France, Hungary, Israel, Italy, Korea, Lithuania, Poland, Portugal and Spain) consistently perform below the OECD average in all skills domains.
Note: 160 000 adults aged 16 to 65 were surveyed in 31 countries and economies: Austria, Belgium (Flemish Region), Canada, Chile, Croatia, Czechia, Denmark, England (UK), Estonia, Finland, France, Germany, Hungary, Ireland, Israel, Italy, Japan, Korea, Latvia, Lithuania, the Netherlands, New Zealand, Poland, Portugal, Singapore, the Slovak Republic, Spain, Sweden and the United States.
Arthur T Knackerbracket has processed the following story:
The University of Bristol had an idea for a battery powered by carbon-14, the longest-lived radioactive isotope of carbon, with a half-life of around 5,700 years. For safety reasons, the team wanted to encapsulate it in synthetic diamond so there was no risk of human harm, and went to the UKAEA (United Kingdom Atomic Energy Authority) for help.
The result is a microwatt-level battery around the same diameter as a standard lithium-ion coin battery, albeit much thinner. As the carbon-14 decays, the electrons produced are focused by the diamond shell and can be used to power devices – if they only require very little power, of course.
"This is about UK innovation and no one's ever done this before," said Professor Tom Scott, professor in materials at the University of Bristol. "We can offer a technology where you never have to replace the battery because the battery will literally, on human timescales, last forever."
Working together, the team built a plasma deposition system at UKAEA's Culham Campus. This lays down thin layers of synthetic diamond around the battery's carbon-14 heart. The team is now trying to scale up the machinery so that larger batteries can be developed.
"Diamond batteries offer a safe, sustainable way to provide continuous microwatt levels of power. They are an emerging technology that uses a manufactured diamond to safely encase small amounts of carbon-14," said Sarah Clark, director of Tritium Fuel Cycle at UKAEA.
The first use case for the technology would be extreme environments like powering small satellites (the European Space Agency funded some of the research) or sensors on the sea floor. But the team also envisaged the technology being implanted in humans to power devices such as pacemakers or cochlear implants that could receive power for longer than the human carrying them would need.
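The "microwatt-level" figure is easy to sanity-check from carbon-14's half-life. The sketch below uses textbook constants plus the 5,700-year half-life from the article; the ~49 keV mean beta energy and the device mass and conversion efficiency at the end are illustrative assumptions, not figures from the story.

```python
import math

# Back-of-the-envelope estimate of the power available from carbon-14 decay.
HALF_LIFE_S = 5_700 * 365.25 * 24 * 3600   # C-14 half-life in seconds
AVOGADRO = 6.022e23
MOLAR_MASS_G = 14.0                        # g/mol for carbon-14
MEAN_BETA_EV = 49e3                        # mean beta energy, ~49 keV (assumed)
EV_TO_J = 1.602e-19

decay_const = math.log(2) / HALF_LIFE_S    # decays per second per atom
atoms_per_gram = AVOGADRO / MOLAR_MASS_G
activity_bq_per_g = decay_const * atoms_per_gram

# Total decay power released per gram, before any conversion losses.
power_w_per_g = activity_bq_per_g * MEAN_BETA_EV * EV_TO_J
print(f"raw decay power: {power_w_per_g * 1e3:.2f} mW per gram of C-14")

# A device holding milligrams of C-14 with a few-percent betavoltaic
# conversion efficiency lands in the sub-microwatt-to-microwatt range,
# consistent with the "microwatt-level" description. Both values below
# are assumptions for illustration.
mass_g, efficiency = 10e-3, 0.05
print(f"electrical output: {power_w_per_g * mass_g * efficiency * 1e6:.2f} uW")
```

The raw decay power works out to roughly a milliwatt per gram, so the microwatt claim is plausible once realistic isotope masses and conversion losses are factored in.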
Arthur T Knackerbracket has processed the following story:
The mobile threat hunting company rolled out a new feature back in May 2024 that allows customers to conduct a professional-grade security scan of their mobile device without having to consult a forensics expert. Of the 2,500 self-initiated scans, Pegasus was discovered on seven devices.
Sure, seven installations out of 2,500 isn't overwhelming (about 0.28 percent of all scans). What's more, the sample size is relatively small and somewhat skewed, since it involves targeted users who already have an interest in device security. Still, it is noteworthy.
iVerify COO Rocky Cole told Wired that the people targeted are not just high profile journalists or activists, but also business leaders, people running commercial enterprises, and government leaders.
"It looks a lot more like the targeting profile of your average piece of malware or your average APT group than it does the narrative that's been out there that mercenary spyware is being abused to target activists," Cole said. "It is doing that, absolutely, but this cross section of society was surprising to find."
The infections spanned a range of operating system versions and installation timelines as well. One instance was installed in late 2023 on iOS 16.6 while another originated in November 2022 on iOS 15. The five others dated back to 2021 across iOS 14 and iOS 15. In all cases, Pegasus was undetected by traditional security measures.
Arthur T Knackerbracket has processed the following story:
GenCast is the latest in DeepMind's ongoing research project to use artificial intelligence to improve weather forecasting. The model was trained on four decades of historical data from the European Centre for Medium-Range Weather Forecasts' (ECMWF) ERA5 archive, which includes regular measurements of temperature, wind speed and pressure at various altitudes around the globe.
Data up to 2018 was used to train the model and then data from 2019 was used to test its predictions against known weather. The company found that it beat ECMWF’s industry-standard ENS forecast 97.4 per cent of the time in total, and 99.8 per cent of the time when looking ahead more than 36 hours.
[...] Existing weather forecasts are based on physics simulations run on powerful supercomputers that deterministically model and extrapolate weather patterns as accurately as possible. Forecasters usually run dozens of simulations with slightly different inputs in groups called ensembles to better capture a range of possible outcomes. These increasingly complex and numerous simulations are extremely computationally intensive and require ever more powerful and energy-hungry machines to operate.
AI could offer a less costly solution. For instance, GenCast creates forecasts with an ensemble of 50 possible futures, each taking just 8 minutes on a custom-made and AI-focused Google Cloud TPU v5 chip.
[...] But for now, GenCast does offer a way to run forecasts at lower computation cost, and more quickly. Kieran Hunt at the University of Reading, UK, says just as a collection of physics-based forecasts can generate better results than a single forecast, he believes ensembles will boost the accuracy of AI forecasts.
Hunt points to the record 40°C (104°F) temperatures seen in the UK in 2022 as an example. A week or two earlier, there were lone members of ensembles predicting it, but they were considered anomalous. Then, as we drew nearer to the heatwave, more and more forecasts fell in line, allowing early warning that something unusual was coming.
“It does allow you to hedge a little if there is one member that shows something really extreme; it might happen, but it probably won’t,” says Hunt. “I wouldn’t view it as necessarily a step change. It’s combining the tools that we’ve been using in weather forecasting for a while with the new AI approach in a way that will certainly work to improve the quality of AI weather forecasts. I’ve no doubt this will do better than the kind of first wave of AI weather forecasts.”
Journal Reference: Nature. DOI: 10.1038/s41586-024-08252-9
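Hunt's point about ensembles can be sketched with a toy example: perturb the starting conditions slightly, run each member forward, and read the probability of an extreme event off the fraction of members that produce it. The "model" below is a trivial random-walk stand-in, not a real forecast model, and every number in it is an illustrative assumption.

```python
import random

random.seed(42)

def toy_forecast(initial_temp_c, days):
    """Stand-in 'forecast model': daily drift plus random weather noise."""
    temp = initial_temp_c
    for _ in range(days):
        temp += random.gauss(0.3, 1.5)  # assumed drift and noise, illustrative
    return temp

N_MEMBERS = 50  # GenCast likewise runs a 50-member ensemble

# Each member starts from a slightly perturbed initial condition,
# mimicking uncertainty in the observed starting state.
members = [toy_forecast(30.0 + random.gauss(0, 0.5), days=7)
           for _ in range(N_MEMBERS)]

# Probability of an extreme event = fraction of members exceeding it.
# A lone member above the threshold reads as "possible but unlikely";
# as more members cross it, confidence in the warning grows.
p_heatwave = sum(t > 40.0 for t in members) / N_MEMBERS
print(f"ensemble mean: {sum(members) / N_MEMBERS:.1f} C, "
      f"P(>40 C) = {p_heatwave:.0%}")
```

This is exactly the heatwave dynamic Hunt describes: early on only a few outlier members exceed the threshold, and the event probability rises as the forecast date approaches.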
The Sun'll come out tomorrow, and you no longer have to bet your bottom dollar to be sure of it. Google's DeepMind team released its latest weather prediction model this week, which outperforms a leading traditional weather prediction model across the vast majority of tests put before it.
The generative AI model is dubbed GenCast, and it is a diffusion model like those undergirding popular AI tools including Midjourney, DALL·E 3, and Stable Diffusion. Based on the team's tests, GenCast is better at predicting extreme weather, the movement of tropical storms, and the force of wind gusts across Earth's mighty sweeps of land. The team's discussion of GenCast's performance was published this week in Nature.
Where GenCast departs from other diffusion models is that it (obviously) is weather-focused, and "adapted to the spherical geometry of the Earth," as described by a couple of the paper's co-authors in a DeepMind blog post.
[...] "One limitation of these traditional models is that the equations they solve are only approximations of the atmospheric dynamics," said Ilan Price, a senior research scientist at Google DeepMind and lead author of the team's latest findings, in an email to Gizmodo.
[...] "GenCast is not limited to learning dynamics/patterns that are known exactly and can be written down in an equation," Price added. "Instead it has the opportunity to learn more complex relationships and dynamics directly from the data, and this allows GenCast to outperform traditional models."
[...] In the recent work, the team trained GenCast on historical weather data through 2018, and then tested the model's ability to predict weather patterns in 2019. GenCast outperformed ENS on 97.2% of targets using different weather variables, with varying lead times before the weather event; with lead times greater than 36 hours, GenCast was more accurate than ENS on 99.8% of targets.
The team also tested GenCast's ability to forecast the track of a tropical cyclone—specifically Typhoon Hagibis, the costliest tropical cyclone of 2019, which hit Japan that October. GenCast's predictions were highly uncertain with seven days of lead time, but became more accurate at shorter lead times. As extreme weather events generate wetter, heavier rainfall, and hurricanes break records for how quickly they intensify and how early in the season they form, accurate prediction of storm paths will be crucial in mitigating their fiscal and human costs.
But that's not all. In a proof-of-principle experiment described in the research, the DeepMind team found that GenCast was more accurate than ENS in predicting the total wind power generated by groups of over 5,000 wind farms in the Global Power Plant Database. GenCast's predictions were about 20% better than ENS' with lead times of two days or less, and retained statistically significant improvements up to a week. In other words, the model does not just have value in mitigating disaster—it could inform where and how we deploy energy infrastructure.
[...] What does all of this mean for you, O casual appreciator of climate? Well, the DeepMind team has made the GenCast code open source and the models available for non-commercial use, so you can tool around if you're curious. The team is also working on releasing an archive of historical and current weather forecasts.
"This will enable the wider research and meteorological community to engage with, test, run, and build on our work, accelerating further advances in the field," Price said. "We have finetuned versions of GenCast to be able to take operational inputs, and so the model could start to be incorporated in operational setting."
There is not yet a timeline on when GenCast and other models will be operational, though the DeepMind blog noted that the models are "starting to power user experiences on Google Search and Maps."
Arthur T Knackerbracket has processed the following story:
Samsung's 9th-generation 280-layer V-NAND flash memory has only recently entered mass production, with the first commercial products expected to hit store shelves next year. However, according to the Korea Economic Daily, the company is already setting ambitious targets for its 10th-generation 400-layer V-NAND technology.
Competition in the space has intensified in recent years, driven in large part by the increasing demands of AI applications, along with a growing consumer appetite for larger and more affordable flash storage.
Samsung currently holds a leading 37 percent market share, but maintaining that position is becoming increasingly challenging as competitors like Micron, YMTC, SK Hynix, and Kioxia accelerate their development of higher-density 3D NAND.
SK Hynix plans to begin manufacturing 400-layer NAND by the end of 2025, with full-scale production expected in the first half of 2026. This has prompted Samsung to target the same timeline, as its smaller Korean rival has gained significant market share over the past two years.
Stacking 400 or more layers of NAND is no easy feat, as scaling beyond 300 layers has already posed challenges to the reliability of early prototypes. To tackle this, Samsung plans to employ a Tri-Level Cell (TLC) architecture alongside a new technology called Bonding Vertical NAND (BV NAND), which places memory cells and peripheral circuitry on separate wafers that are then bonded together in a vertical structure.
This approach will also help Samsung achieve higher manufacturing yields compared to those of the Cell over Periphery (CoP) NAND design. The company claims it can reach densities of 28Gb/mm², or 1Tb (128GB) per die, which is only slightly lower than the density achieved with a 9th-generation Quad-Level Cell (QLC) architecture. Additionally, the 5.6Gb/s data rate per pin provides a significant performance boost over the 3.2Gb/s maximum achievable with the previous design.
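The quoted figures hang together arithmetically. The quick check below assumes binary units (1 Tb = 1024 Gb, 1 TB = 1024 GB); the implied die area is a consequence of the article's numbers, not a figure Samsung has published.

```python
# Consistency check on the quoted BV NAND figures.
density_gb_per_mm2 = 28     # claimed bit density, Gb/mm^2
die_capacity_gb = 1024      # a 1 Tb (128 GB) die, in Gb

# At 28 Gb/mm^2, a 1 Tb die implies a memory-array area of roughly:
die_area_mm2 = die_capacity_gb / density_gb_per_mm2
print(f"implied die area: ~{die_area_mm2:.1f} mm^2")

# A 16 TB SSD built from 128 GB dies would need this many of them:
dies_for_16tb = (16 * 1024) // 128
print(f"dies for a 16 TB drive: {dies_for_16tb}")
```

At 128 dies per 16 TB drive, such capacities are within reach of ordinary multi-die packaging, which is why the article treats 16 TB consumer SSDs as plausible.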
In theory, future Samsung SSDs could reach capacities of up to 16TB, with speeds approaching the limits of a PCIe 5.0 x4 interface in sequential reads and writes.
Samsung will present this promising new V-NAND architecture in more detail at the upcoming International Solid-State Circuits Conference in February 2025.
From reuters.com:
Shareholders of Ubisoft Entertainment SA (UBIP.PA) are considering how to structure a possible buyout of the Assassin's Creed video game maker without reducing the founding family's control, two people familiar with the matter told Reuters.
The Guillemot family, which is the largest and founding shareholder, has been in talks with Tencent and other investors in recent weeks about funding a management-led buyout of France's largest video games maker, the people said, speaking on condition of anonymity. However, the Guillemot family has indicated it would like to retain the control it has over the company, which also makes Just Dance, Far Cry and Tom Clancy's video game series, as part of a deal, the people said.
Tencent, currently the second-largest shareholder in Ubisoft and China's biggest social network and gaming firm, has yet to decide whether to participate in the buyout and increase its stake in the company, one of the people said.
This is partly because it has asked for a greater say on future board decisions including cash flow distribution in return for financing the deal, which has not been agreed upon with the Guillemot family, the person added.
Discussions between the two parties are ongoing as Tencent also wants to prevent any potential hostile takeover of Ubisoft by other investors, said the person, adding that Tencent's plan is to remain patient and wait for the founding family to agree to a deal. Tencent may opt not to increase its stake in Ubisoft, as it considers its current direct holding of almost 10% in Ubisoft sufficient for maintaining its gaming business cooperation with the company, the person added.
[...] "We remain committed to making decisions in the best interests of all of our stakeholders" a spokesman for Ubisoft said. "In this context, as we have already indicated, the Company is also reviewing all its strategic options."
In October, Ubisoft said it regularly reviewed "all its strategic options", but declined further comment on a report of buyout interest. Shares in Ubisoft rose as much as 16% after the Reuters report. Its shares were trading up 12.1% at 13.2 euros by 1445 GMT.
[...] The company's shares fell to their lowest level in a decade in September after it cut its outlook on weaker-than-expected sales and postponed the launch of its "Assassin's Creed Shadows" title.
This week it announced it would discontinue development of its gaming title XDefiant and as a consequence close its production studios in San Francisco and Osaka, and ramp down production in Sydney.
Ubisoft is run by the Guillemot family, which owns 15% of the firm, followed by Tencent which owns just under 10%, according to LSEG data. The family held about 20.5% of Ubisoft's net voting rights while Tencent owned 9.2% as of the end of April, as per the firm's latest annual report.
From reuters.com:
The U.S. House of Representatives is set to vote next week on an annual defense bill that includes just over $3 billion for U.S. telecom companies to remove equipment made by Chinese telecoms firms Huawei and ZTE (000063.SZ) from American wireless networks to address security risks.
The 1,800-page text was released late Saturday and includes other provisions aimed at China, including requiring a report on Chinese efforts to evade U.S. national security regulations and an intelligence assessment of the current status of China's biotechnology capabilities.
The Federal Communications Commission has said removing the insecure equipment is estimated to cost $4.98 billion but Congress previously only approved $1.9 billion for the "rip and replace" program.
Washington has aggressively urged U.S. allies to purge Huawei and other Chinese gear from their wireless networks.
FCC Chair Jessica Rosenworcel last week again called on the U.S. Congress to provide urgent additional funding, saying the program to replace equipment in the networks of 126 carriers faces a $3.08 billion shortfall "putting both our national security and the connectivity of rural consumers who depend on these networks at risk."
She has warned the lack of funding could result in some rural networks shutting down, which "could eliminate the only provider in some regions" and could threaten 911 service.
Competitive Carriers Association CEO Tim Donovan on Saturday praised the announcement, saying "funding is desperately needed to fulfill the mandate to remove and replace covered equipment and services while maintaining connectivity for tens of millions of Americans."
In 2019, Congress told the FCC to require U.S. telecoms carriers that receive federal subsidies to purge their networks of Chinese telecoms equipment. The White House in 2023 asked for $3.1 billion for the program.
Senate Commerce Committee chair Maria Cantwell said funding for the program, along with up to $500 million for regional tech hubs, will be covered by proceeds from a one-time FCC auction of advanced wireless spectrum in the band known as AWS-3, intended to help meet wireless consumers' rising demand for spectrum.
Recently:
US clears export of advanced AI chips to UAE under Microsoft deal, Axios says:
The U.S. government has approved the export of advanced artificial intelligence chips to a Microsoft-operated facility in the United Arab Emirates as part of the company's highly scrutinized partnership with Emirati AI firm G42, Axios reported on Saturday, citing two people familiar with the deal.
Microsoft invested $1.5 billion in G42 earlier this year, giving the U.S. company a minority stake and a board seat. As part of the deal, G42 would use Microsoft's cloud services to run its AI applications.
The deal, however, was scrutinized after U.S. lawmakers raised concerns G42 could transfer powerful U.S. AI technology to China. They asked for a U.S. assessment of G42's ties to the Chinese Communist Party, military and government before the Microsoft deal advances. The U.S. Commerce Department and G42 did not immediately respond to Reuters' requests for comment. Microsoft declined to comment on the report.
The approved export license requires Microsoft to prevent access to its facility in the UAE by personnel who are from nations under U.S. arms embargoes or who are on the U.S. Bureau of Industry and Security's Entity List, the Axios report said.
The restrictions cover people physically in China, the Chinese government or personnel working for any organization headquartered in China, the report added.
[...] Abu Dhabi sovereign wealth fund Mubadala Investment Company, the UAE's ruling family and U.S. private equity firm Silver Lake hold stakes in G42. The company's chairman, Sheikh Tahnoon bin Zayed Al Nahyan, is the UAE's national security advisor and the brother of the UAE's president.
In 1974, Stephen Hawking, using arguments that combined the two pillars of modern physics, General Relativity and quantum field theory, showed that black holes should not be entirely black but would have to emit radiation that would eventually cause them to evaporate. In 2023, however, physicists Michael F. Wondrak, Walter D. van Suijlekom, and Heino Falcke showed that Hawking radiation might not even require an event horizon: spacetime curvature alone is all that's required. They further refined their arguments in a follow-up paper arguing that even a neutron star might evaporate on a timescale similar to that of a stellar-mass black hole (10^67 years), an object like our Earth's moon in 10^90 years, and interstellar gas clouds in some 10^140 years. If this is correct, even single protons would be subject to this phenomenon, and they would also take something like 10^67 years to decay (a far longer timescale than the proton decay predicted by Grand Unified Theories, which posit proton lifetimes of about 10^34 to 10^36 years). Ethan Siegel has an article that explores this intriguing possibility:
It was long thought that black holes, once they formed, would be stable forever, but that story changed significantly with the work of Stephen Hawking in 1974. Black holes actually emit tiny amounts of radiation continuously, and on enormously long timescales of ~10^67 years or greater, they'll eventually evaporate away entirely. In 2023, a provocative paper suggested that this radiation isn't limited to black holes, implying that everything eventually decays away.
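For scale, the ~10^67-year figure for a stellar-mass black hole can be reproduced from the standard Hawking evaporation estimate, t ≈ 5120π G²M³ / (ħc⁴). The sketch below just plugs in textbook constants for one solar mass; it is an order-of-magnitude check, not a derivation from the 2023 paper.

```python
import math

# Standard Hawking evaporation-time estimate for a Schwarzschild black hole.
G = 6.674e-11      # gravitational constant, m^3 kg^-1 s^-2
HBAR = 1.055e-34   # reduced Planck constant, J s
C = 2.998e8        # speed of light, m/s
M_SUN = 1.989e30   # solar mass, kg
YEAR_S = 3.156e7   # seconds per year

# t ~ 5120 * pi * G^2 * M^3 / (hbar * c^4)
t_evap_s = 5120 * math.pi * G**2 * M_SUN**3 / (HBAR * C**4)
t_evap_yr = t_evap_s / YEAR_S
print(f"evaporation time for 1 solar mass: ~1e{math.log10(t_evap_yr):.0f} years")
```

The result lands around 10^67 years, matching the timescale quoted in the summary above.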
The ghosts of India's TikTok: What happens when a social media app is banned:
TikTok was one of India's most popular apps – until it was banned in 2020. It's a lesson for what might unfold if a US ban goes ahead.
Four years ago, India was TikTok's biggest market. The app boasted a growing base of 200 million users, thriving subcultures and sometimes life-changing opportunities for creators and influencers. TikTok seemed unstoppable – until simmering tensions on the border between India and China erupted into deadly violence. After the border skirmish, the Indian government banned the app on 29 June 2020. Almost overnight, TikTok was gone. But the accounts and videos of Indian TikTok are still online, frozen in time when the app had just emerged as a cultural giant.
In some ways, it could offer a preview of what might lie on the horizon in the United States. In April 2024, President Joe Biden signed a bill into law that could ultimately ban TikTok from the US, marking a new chapter after years of threats and failed legislation. The law requires TikTok's owner, ByteDance, to sell its stake in the app within nine months, with a further three-month grace period, or face a potential ban in the country. ByteDance says it has no intention of selling the social media platform, but on 6 December a US federal appeals court rejected the company's bid to overturn the law. The platform is set to become unavailable on 19 January, though some observers expect the case will make it to the Supreme Court, the highest authority in the US.
Banning a massive social media app would be an unprecedented moment in American tech history, though the looming court battle currently leaves TikTok's fate uncertain. But the Indian experience shows what can happen when a major country wipes TikTok from its citizens' smartphones. India is not the only country to have taken the step either – in November 2023, Nepal also announced a decision to ban TikTok, and Pakistan has implemented a number of temporary bans since 2020. As the app's 150 million US users swipe through videos in limbo, the story of India's TikTok ban shows that users are quick to adapt, but also that when TikTok dies, much of its culture dies with it.
Sucharita Tyagi, a film critic based in Mumbai, had grown her account to 11,000 followers when TikTok came down, with some of her videos racking up millions of views.
"TikTok was huge. People were coming together all over the country, dancing, putting up skits, posting about how they run their homestead in their small town in the hills," says Tyagi. "There was a massive number of people who suddenly had this exposure that they had always been denied, but now it was possible."
The app was a particular phenomenon because of the ways its algorithm gave opportunities to rural Indian users, who were able to find an audience and even reach celebrity status not possible on other apps.
"It democratised content creation for the first time," says New Delhi-based technology writer and analyst Prasanto K Roy. "We began to see a lot of these very rural people fairly low down on the socio-economic ladder who would never dream of getting a following, or making money on it. And TikTok's discovery algorithm would deliver it to users who wanted to see it. There was nothing quite like it in terms of hyper-local videos."
TikTok holds a similar cultural significance in the US, where niche communities flourish and an untold number of small creators and businesses base their livelihood around the app. It's a kind of success that's less prevalent on other social media platforms. Instagram, for example, is generally tuned more for consuming content from accounts with big followings, while TikTok places a heavier emphasis on encouraging regular users to post.
When TikTok went offline in India, the government banned 58 other Chinese apps along with it, including some that are currently growing in popularity in the US today, such as the fashion shopping app Shein. As the years rolled on, India banned over a hundred more Chinese apps, though negotiations recently brought an Indian version of Shein back online.
The same could happen in the US. The new law sets a precedent and creates a mechanism for the American government to get rid of other Chinese apps. The privacy and national security concerns politicians voice about TikTok could apply to a host of other companies as well.
And when a popular app is removed, others can attempt to fill the gap. "As soon as TikTok was banned it opened up a multibillion-dollar opportunity," says Nikhil Pahwa, an Indian tech policy analyst and founder of the news site MediaNama. "Multiple Indian start-ups launched or pivoted to fill the gap."
For months, the Indian technology press was flooded with news about these buzzy new Indian social media companies, with names like Chingari, Moj and MX Taka Tak. Some found initial success, luring former TikTok stars onto their platforms and securing investments and even governmental support. It splintered the Indian social market into different corners as the new apps battled for dominance, but that post-TikTok gold rush didn't last long.
In August 2020, Instagram launched a short-form video feed called Reels, just months after the TikTok ban. YouTube followed suit with Shorts, its own copycat TikTok functionality, a month later. Instagram and YouTube were already entrenched in India, and the field of new start-ups didn't stand a chance.
"There was a lot of buzz around alternatives to TikTok, but most faded away in the long run," says Prateek Waghre, executive director of the Internet Freedom Foundation, an Indian advocacy group. "In the end, the one that benefited the most was probably Instagram."
For many of Indian TikTok's bigger creators and their fans, it wasn't long before they moved to Meta and Google's apps, and many found similar success.
For example, Geet, an Indian social media influencer who only goes by her first name, rose to full-blown stardom on TikTok teaching "American English" and giving life advice and pep talks. She had 10 million followers across three accounts by the time TikTok was banned.
In a 2020 interview with the BBC, Geet shared concerns about the future of her career. But four years later, she's gathered nearly five million followers across Instagram and YouTube.
However, the users and experts the BBC spoke to say something was lost in the post-TikTok transition. Instagram and YouTube may have snatched up TikTok's traffic, but the apps didn't recreate the feeling of Indian TikTok.
"TikTok was a comparatively different kind of user base as far as creators go," says Pahwa. "You had farmers, and bricklayers, and people from small towns uploading videos on TikTok. One doesn't see that as much on YouTube Shorts and Instagram Reels. TikTok's discovery mechanism was very different."
If TikTok is banned in the US, the American social media landscape may follow a similar path to India's. Four years after the ban, Instagram and YouTube have already established themselves as a home for short videos. Even LinkedIn is experimenting with a TikTok-style video feed.
The app's competitors have proven they don't need to recreate TikTok's culture to find success. It's possible, if not likely, that America's hyper-local and niche content would vanish, just as it did in India. And the cultural ramifications for the US would be far more significant: nearly one-third of Americans aged 18 to 29 get their news from TikTok, according to the Pew Research Center.
The US has fewer TikTok users than the 200 million India had in its prime, but India is home to 1.4 billion people. TikTok reportedly has 170 million users in the US, more than half the country's population.
"When India banned TikTok, the app was not the behemoth that it is now," says Tyagi. "It has turned into a cultural revolution over the last few years. I think banning it now in America would have a much larger impact."
What's already different is TikTok's response. The company has vowed a legal battle over the US government's new law, a fight that may wind its way up to the US Supreme Court. TikTok could have launched a similar legal challenge to India's ban, but chose not to.
"Chinese companies have good reason to be hesitant to go to courts in India against the Indian government," says Roy. "I don't think they would find them to be very sympathetic."
India's ban was also immediate, taking effect in a matter of weeks. TikTok's upcoming legal challenge in the US could tie up the law for years, and there is no certainty that the legislation will stand up to a battle in the courts.
There's also a far greater chance a US TikTok ban would spark a trade war. "I think there's a distinct possibility of reciprocity from China," says Pahwa. China condemned India for banning TikTok, but there wasn't any overt retaliation. The US may not be so lucky.
An ethicist's take: Is it OK to lie to an AI chatbot during a job interview?:
If you're secure in your job, you may not have encountered just yet how AI is "elevating" and "enhancing" the job search experience, for employers and job seekers alike. Its use is most clearly felt in the way high-volume staffing agencies have begun to employ AI chatbots to screen applicants well before they interact with a human hiring manager.
From the employer's perspective, this makes perfect sense. Why wade through stacks of resumes to weed out the ones that don't look like a good fit even at first glance, if an AI can do that for you?
From the job seeker's perspective, the experience is likely to be decidedly more mixed.
This is because many employers are using AI not just to search a body of documents, screening them for certain keywords, syntax, and so on. Rather, in addition to this, search firms are now using AI chatbots to subsequently "interview" applicants to screen them even more thoroughly and thus further winnow the pool of resumes a human will ultimately have to go through.
Often, this looks the same as conversing with ChatGPT. Other times, it involves answering specific questions in a standard video/phone screen where the chatbot will record your answers, thereby making them analyzable. If you're a job seeker and you find yourself in the latter scenario, don't worry, they will give the chatbot a name like "Corrie" and that will put you completely at ease and in touch with a sense of your worth as a fully-rounded person.
On the job seeker's side, this is where the issues begin to arise.
If you know your words are being scanned by a gatekeeper strictly for certain sets of keywords, what's the incentive to tell the whole truth about your profile? It's not possible to intuit what exact tally or combo of terms you need to hit, so it's better to just give the bot all of the terms listed in the job description and then present your profile more fully at the next stage in an actual interview with a human. After all, how would a job seeker present nontraditional experience to the bot with any assurance it will receive real consideration?
Indeed, when the standard advice is to apply for jobs of interest even when you bring only 40% to 60% of the itemized skills and background, why risk the chatbot setting the bar higher?
For a job seeker, lying to the bot — or at least massaging the facts strategically for the sake of impressing a nonhuman gatekeeper — is the best, most effective means of moving on to the next stage in the hiring process, where they can then present themselves in a fuller light.
But what are the ethics of such dishonesty? Someone who lies to the chatbot would have no problem lying to the interviewer, some might say. We're on a slippery slope, they would argue.
To puzzle out a way of thinking about this question, I propose we look at the situation from the perspective of the 18th-century German philosopher Immanuel Kant, who I referenced in my previous essay. Kant, you see, is famously stringent when it comes to lying, with a justly earned reputation as an absolutely unyielding scold.
You need money you just don't have to pay for something you think is truly, unequivocally good in itself: your mother's last blood transfusion, say. Is it acceptable to borrow the money from a friend and lie when promising to pay it back when you know you simply can't? Hard no, says Kant. Having an apparently altruistic reason for telling a lie still doesn't make it OK in his view.
In fact, the lengths he will go to uphold this principle are perhaps most evident in his infamous reply to a question posed by the Swiss-French philosopher Benjamin Constant (truly, no one remembers who he is apart from his brush with Kant).
Suppose your best friend arrives at your door breathless, Constant proposes, chased there by a violent pursuer — an actual axe murderer, in fact — and your friend asks that you hide them in your house for safety. And then suppose, having dutifully done so, you find yourself face-to-face with the axe murderer now at your doorstep. When the murderous cretin demands to know where your friend is, isn't a lie to throw him off acceptable here, Herr Professor?
Absolutely not, Kant answers, to the shock and horror of first-year philosophy students everywhere. Telling a lie is never morally permissible and there just are no exceptions. (There is some more reasonable hedging in Kant's essay on this matter, but you get the general idea.)
The reason for turning to Kant specifically here is, I hope, now becoming somewhat clear. We can use his ideas to perform a kind of test. If we can come up with a reason why lying to the gatekeeping chatbot would be OK even for Kant, then it seems we will have arrived at a solid justification for a certain amount of strategic dishonesty in this instance.
So what would Kant's thinking suggest about lying to the chatbot? Well, we begin to glimpse something of an answer when we examine why exactly lying is such a problem in Kant's view. It's a problem, he argues, because it invariably involves treating another person in a way that ultimately tramples on their personhood. When I lie to my friend about repaying borrowed money, no matter how well-intentioned the ends to which I propose to put this money are, I wind up treating my interlocutor not as a person who has individual autonomy in their decision-making, but rather simply as a means to an end.
In this way, I don't treat them as a person at all; I treat them as a tool for achieving ends I alone determine. The lie makes it impossible for them to truly grant or withhold, in any meaningful sense, their consent when it comes to participating in that particular way in my particular scheme. We oughtn't treat others instrumentally, solely as a means to some end, for Kant, because when we do, we reduce them to a mere tool at our disposal, and thus fail to respect their real status as a being endowed with the capacity to freely set ends for themselves.
So what does this mean for our job interview with the chatbot?
It suggests that when job seekers give the chatbot what they think it wants to hear, they are not trampling on anyone's personhood. This is because the chatbot is itself, precisely, a means to an end — a tool, without any agency to set its own supreme, overarching ends; one that a hiring unit is using to make the task of finding suitable employees easier and less time consuming.
We might perfectly well decide those are admirable goals with respect to the overall functioning of the organization. But we shouldn't lose sight, on either side of the interviewing table, of what they are and what purpose they serve, and thus in turn, how sizable the difference between "chatting" with them and an actual interlocutor really is.
Until Corrie becomes a real interlocutor, I think we all more or less know how their interactions with job seekers are going to go — and perhaps that's just fine for now.
Arthur T Knackerbracket has processed the following story:
Meta believes it will need one to four gigawatts of nuclear power, in addition to the energy it already consumes, to fuel its AI ambitions. As such, it will put out a request for proposals (RFP) to find developers capable of supplying that level of electricity in the United States by early 2030.
"Advancing the technologies that will build the future of human connection — including the next wave of AI innovation — requires electric grids to expand and embrace new sources of reliable, clean and renewable energy," the Facebook parent company wrote in a blog post announcing the RFP on Tuesday.
But while Meta plans to continue investing in solar and wind, hyperscalers seem convinced that harnessing the atom is the only practical means of meeting AI's thirst for power while making good on their sustainability commitments.
This wouldn't be the first time Meta has pursued nuclear fission power. As we previously reported, Meta had planned to build an atomic datacenter complex, but the project was cancelled after a rare species of bee was discovered on a prospective site.
Meta has become a leading developer of generative AI models, with Llama 3.1 405B being among its most sophisticated. To support the development of these and future models, CEO Mark Zuckerberg has committed to deploying some 600,000 GPUs, which require a prodigious amount of power to run.
As we understand it, additional details regarding the nature of the RFP will be provided to qualified companies. However, we do know that Meta is looking for someone to deploy between one and four gigawatts of nuclear power in the US, suggesting it's still a little uncertain as to how much power it will need to achieve its goals.
The blog post also mentions the prospect of deploying multiple units to cut costs. Given the timeline, this suggests that Meta is very likely looking at small modular reactors (SMRs).
As their name suggests, SMRs are really just miniaturized reactors not unlike those found in nuclear submarines and aircraft carriers, which can be manufactured and co-located alongside datacenters and other industrial buildings.
Many hyperscalers and cloud providers faced with AI's energy demands have turned to SMRs for salvation, and there's certainly no shortage of options to choose from. Oklo, X-energy, TerraPower, Kairos Power, and NuScale Power are just a handful of the companies actively developing reactor designs. However, it's worth noting that despite all the hype around these itty bitty reactors, nobody has actually managed to prove their commercial viability.
But with few alternatives that don't involve abandoning their lofty sustainability pledges, many datacenter operators are pushing ahead with power purchase agreements with SMR vendors. Most recently, Sam Altman-backed startup Oklo revealed it had obtained letters of intent from two major datacenter providers to deliver 750 megawatts of power.
Amazon has also committed to investing in nuclear power. Back in October, the e-commerce and cloud giant announced it was working with X-energy to construct several SMRs. Google, meanwhile, has teamed up with Kairos on a similar plan, and Oracle says it's obtained building permits for a trio of SMRs to power a one gigawatt datacenter campus.
However, it remains to be seen whether these plans will ever pan out. In addition to strict regulatory controls on nuclear power, the technology is seen by many as unsafe despite evidence to the contrary. Perhaps more pressing is the fact that SMRs, at least in the early days, won't be cheap.
Earlier this year, the Institute for Energy Economics and Financial Analysis argued that SMRs are "too expensive, too slow to build, and too risky to play a significant role in transitioning away from fossil fuels."
[...] However, even existing nuclear infrastructure isn't a sure bet. This November, Amazon hit a roadblock after federal regulators rejected a deal that would have let it increase its power draw at the site from 300 to 480 megawatts.
Bringing these plans online is by no means trivial. As we previously reported, the Palisades nuclear power plant in Michigan, which received a $1.5 billion loan from Uncle Sam, will require substantial and costly repairs to its steam generator tubes.
A Supercomputer Just Created the Largest Universe Simulation Ever:
Last month, a team of researchers put the then-fastest supercomputer in the world to work on a rather large quandary: the nature of the universe's atomic and dark matter.
The supercomputer is called Frontier, and a team of researchers recently used it to run the largest astrophysical simulation of the universe yet. The simulation's size corresponds to surveys taken by large telescope observatories, something that had not been possible until now. The calculations undergirding the simulation provide a new foundation for cosmological simulations of the universe's matter content, from everything we see to the invisible stuff that only interacts with ordinary matter gravitationally.
Frontier is an exascale-class supercomputer, capable of running a quintillion (one billion-billion) calculations per second. In other words, a juiced machine worthy of the vast undertaking that is simulating the physics and evolution of both the known and unknown universe.
"If we want to know what the universe is up to, we need to simulate both of these things: gravity as well as all the other physics including hot gas, and the formation of stars, black holes and galaxies," said Salman Habib, the division director for computational sciences at Argonne National Laboratory, in an Oak Ridge National Laboratory release. "The astrophysical 'kitchen sink' so to speak."
The matter we know about—the stuff we can see, from black holes, to molecular clouds, to planets and moons—only accounts for about 5% of the universe's content, according to CERN. A more sizable chunk of the universe is only inferred by gravitational effects it seems to have on the visible (or atomic) matter. That invisible chunk is called dark matter, a catch-all term for a number of particles and objects that could be responsible for about 27% of the universe. The remaining 68% of the universe's makeup is attributed to dark energy, which is responsible for the accelerating rate of the universe's expansion.
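The budget quoted above can be sanity-checked in a few lines of Python. This is just an illustrative sketch using the article's rounded percentages (attributed to CERN), not a cosmological calculation:

```python
# Approximate composition of the universe's energy content,
# as cited in the article (rounded values, in percent).
composition = {
    "ordinary (atomic) matter": 5,   # stars, gas, planets: everything we can see
    "dark matter": 27,               # inferred only through its gravitational pull
    "dark energy": 68,               # drives the accelerating expansion
}

# The three components account for the whole budget.
total = sum(composition.values())
print(f"Total: {total}%")  # prints "Total: 100%"

# Dark matter plus dark energy: the fraction we cannot observe directly.
invisible = composition["dark matter"] + composition["dark energy"]
print(f"Invisible fraction: {invisible}%")  # prints "Invisible fraction: 95%"
```

In other words, only about one part in twenty of the universe's content is the kind of matter Frontier can compare directly against telescope observations; the rest enters the simulation through its gravitational influence.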
"If we were to simulate a large chunk of the universe surveyed by one of the big telescopes such as the Rubin Observatory in Chile, you're talking about looking at huge chunks of time — billions of years of expansion," Habib said. "Until recently, we couldn't even imagine doing such a large simulation like that except in the gravity-only approximation."
In the top graphic, the left image shows the evolution of the expanding universe over billions of years in a region containing a cluster of galaxies, and the right image shows the formation and movement of galaxies over time in one section of that image.
"It's not only the sheer size of the physical domain, which is necessary to make direct comparison to modern survey observations enabled by exascale computing," said Bronson Messer, the director of science for Oak Ridge Leadership Computing Facility, in a laboratory release. "It's also the added physical realism of including the baryons and all the other dynamic physics that makes this simulation a true tour de force for Frontier."
Frontier is one of several exascale supercomputers used by the Department of Energy, and comprises more than 9,400 CPUs and over 37,000 GPUs. It resides at Oak Ridge National Laboratory, though the recent simulations were run by Argonne researchers.
The Frontier results were possible thanks to the supercomputer's code, the Hardware/Hybrid Accelerated Cosmology Code (or HACC). The fifteen-year-old code was updated as part of the DOE's $1.8 billion, eight-year Exascale Computing Project, which concluded this year.
The simulations' results were announced last month, when Frontier was still the fastest supercomputer in the world. But shortly after, Frontier was eclipsed by the El Capitan supercomputer as the world's fastest. El Capitan is verified at 1.742 quintillion calculations per second, with a total peak performance of 2.79 quintillion calculations per second, according to a Lawrence Livermore National Laboratory release.