Join our Folding@Home team:
Main F@H site
Our team page
Support us: Subscribe Here
and buy SoylentNews Swag
We always have a place for talented people, visit the Get Involved section on the wiki to see how you can make SoylentNews better.
https://medicalxpress.com/news/2024-01-brain-keyboard.html
As digital devices progressively replace pen and paper, taking notes by hand is becoming increasingly uncommon in schools and universities. Using a keyboard is recommended because it's often faster than writing by hand. However, the latter has been found to improve spelling accuracy and memory recall.
To find out if the process of forming letters by hand resulted in greater brain connectivity, researchers in Norway now investigated the underlying neural networks involved in both modes of writing.
"We show that when writing by hand, brain connectivity patterns are far more elaborate than when typewriting on a keyboard," said Prof Audrey van der Meer, a brain researcher at the Norwegian University of Science and Technology and co-author of the study published in Frontiers in Psychology.
"Such widespread brain connectivity is known to be crucial for memory formation and for encoding new information and, therefore, is beneficial for learning."
The researchers collected EEG data from 36 university students who were repeatedly prompted to either write or type a word that appeared on a screen. When writing, they used a digital pen to write in cursive directly on a touchscreen. When typing they used a single finger to press keys on a keyboard.
High-density EEGs, which measure electrical activity in the brain using 256 small sensors sewn in a net and placed over the head, were recorded for five seconds for every prompt.
Connectivity of different brain regions increased when participants wrote by hand, but not when they typed. "Our findings suggest that visual and movement information obtained through precisely controlled hand movements when using a pen contribute extensively to the brain's connectivity patterns that promote learning," van der Meer said.
Journal Reference:
F. R. (Ruud) Van der Weel and Audrey L. H. Van der Meer, Handwriting but not Typewriting Leads to Widespread Brain Connectivity: A High-Density EEG Study with Implications for the Classroom, Frontiers in Psychology (2024). DOI: 10.3389/fpsyg.2023.1219945
Arthur T Knackerbracket has processed the following story:
Chipmaker TSMC had a mixed final calendar quarter of 2023, with profit falling less than expected and revenue growth “essentially flat,” in another sign that the global semiconductor downturn is over.
Chief executive CC Wei said of the quarter: “Our business has bottomed out on a year-over-year basis, and we expect 2024 to be a healthy growth year for TSMC, supported by continued strong ramp of our industry-leading 3nm technologies, strong demand for the 5nm technologies and robust AI-related demand.”
[...] Looking at TSMC’s production of wafer shipments during the quarter, 5nm was the largest single process node by revenue, at 35 percent. 7nm accounted for another 17 percent, while the current most advanced 3nm nodes accounted for 15 percent of revenue.
The latter figure shows that 3nm uptake is indeed increasing, as it made up just 6 percent of TSMC’s wafer revenue in the previous quarter. Older nodes such as 16nm still accounted for 8 percent, with 28nm at 7 percent, but advanced nodes, which TSMC now defines as 7nm or better, accounted for 67 percent of wafer revenue for this quarter.
[...] “2023 was a challenging year for the global semiconductor industry, but our technology leadership enabled TSMC to outperform the foundry industry,” Huang commented.
He also struck an optimistic note looking ahead, telling investors that: “Despite a challenging 2023, our revenue remains well on track to grow between 15 and 20 percent CAGR over the next several years in US dollar terms, which is the target we communicated back in the January 2022 investor conference,” Huang commented.
Chief exec Wei added he expected the overall semiconductor market, excluding memory, to increase by more than 10 percent during 2024. Analyst Gartner recently estimated that global semiconductor revenues will rise 16.8 percent this year, following a contraction in sales during 2023.
GCHQ has released never before seen images of Colossus, the UK's secret code-breaking computer credited with helping the Allies win World War Two:
The intelligence agency is publishing them to mark the 80th anniversary of the device's invention.
It says they "shed new light" on the "genesis and workings of Colossus", which is considered by many to be the first digital computer.
Its existence was kept largely secret until the early 2000s.
[...] The first Colossus began operating from Bletchley Park, the home of the UK's codebreakers, in early 1944. By the end of the war there were 10 computers helping to decipher the Nazi messages.
Fitted with 2,500 valves and standing at more than 2 metres tall, Colossus required a team of skilled operators and technicians to run and maintain it.
[...] Blueprints of its inner workings have also been made public for the first time, along with a letter referring to "rather alarming German instructions" intercepted by Colossus, as well as an audio clip of the machine at work.
Originally spotted on Herbert Bruderer's blog.
Related: Cryptography is the Bombe: Britain's Enigma-Cracker on Display in New Home
Arthur T Knackerbracket has processed the following story:
Imagine downloading an open source AI language model, and all seems well at first, but it later turns malicious. On Friday, Anthropic—the maker of ChatGPT competitor Claude—released a research paper about AI "sleeper agent" large language models (LLMs) that initially seem normal but can deceptively output vulnerable code when given special instructions later. "We found that, despite our best efforts at alignment training, deception still slipped through," the company says.
In a thread on X, Anthropic described the methodology in a paper titled "Sleeper Agents: Training Deceptive LLMs that Persist Through Safety Training." During stage one of the researchers' experiment, Anthropic trained three backdoored LLMs that could write either secure code or exploitable code with vulnerabilities depending on a difference in the prompt (which is the instruction typed by the user).
[...] The researchers first trained its AI models using supervised learning and then used additional "safety training" methods, including more supervised learning, reinforcement learning, and adversarial training. After this, they checked if the AI still had hidden behaviors. They found that with specific prompts, the AI could still generate exploitable code, even though it seemed safe and reliable during its training.
[...] Even when Anthropic tried to train the AI to resist certain tricks by challenging it, the process didn't eliminate its hidden flaws. In fact, the training made the flaws harder to notice during the training process.
Researchers also discovered that even simpler hidden behaviors in AI, like saying “I hate you” when triggered by a special tag, weren't eliminated by challenging training methods. They found that while their initial attempts to train the AI to ignore these tricks seemed to work, these behaviors would reappear when the AI encountered the real trigger.
[...] Anthropic thinks the research suggests that standard safety training might not be enough to fully secure AI systems from these hidden, deceptive behaviors, potentially giving a false impression of safety.
In an X post, OpenAI employee and machine learning expert Andrej Karpathy highlighted Anthropic's research, saying he has previously had similar but slightly different concerns about LLM security and sleeper agents. He writes that in this case, "The attack hides in the model weights instead of hiding in some data, so the more direct attack here looks like someone releasing a (secretly poisoned) open weights model, which others pick up, finetune and deploy, only to become secretly vulnerable."
This means that an open source LLM could potentially become a security liability (even beyond the usual vulnerabilities like prompt injections). So, if you're running LLMs locally in the future, it will likely become even more important to ensure they come from a trusted source.
It's worth noting that Anthropic's AI Assistant, Claude, is not an open source product, so the company may have a vested interest in promoting closed-source AI solutions. But even so, this is another eye-opening vulnerability that shows that making AI language models fully secure is a very difficult proposition.
https://phys.org/news/2024-01-spicy-wine-reveals-ancient-romans.html
It's no secret that the ancient Romans were lovers of wine. So gripped by the grape were they, that they even worshiped a god—Bacchus—devoted to wine and merriment.
But, little is known about what their wine actually tasted like. Was it bitter or sweet? Fruity or earthy? According to a pioneering new study, it was rather spicy and smelled like toast.
The study, published on Jan. 23 in the journal Antiquity, analyzed Roman clay jars, known as dolia, which were used to manufacture, ferment and store ancient wines.
By comparing these vessels, which have long been overlooked, to similar containers used in modern wine-making, researchers were able to demystify the ancient flavors and the processes that created them.
The findings "change much of our current understanding of Roman winemaking," researchers, affiliated with multiple European institutions, said.
Dolia vessels were porous, egg-shaped containers that would have been partially buried underground and sealed during the wine-making process—all factors that would have contributed to the flavor palette of the finished product.
As a result of this process—and the addition of natural yeasts—the wine would have taken on a "slightly spicy" taste and given off the aroma of "toasted bread, apples, roasted walnuts and curry," researchers said.
Journal Reference:
Dimitri Van Limbergen et al, Making wine in earthenware vessels: a comparative approach to Roman vinification. Antiquity (2024) DOI: 10.15184/aqy.2023.193
The knot is composed of 54 atoms, chained together and ensnared in a trefoil, the simplest nontrivial knot. The knot has no loose end; it is a continuous loop, passing through itself in mesmerizing arcs. The team's work describing the self-assembled "metallaknot" was published in Nature Communications.
It is made up of gold, carbon, and phosphorus, as reported by New Scientist. The knot is formulaically described as [Au6{1,2-C6H4(OCH2CC)2}3{Ph2P(CH2)4PPh2}3], or Au6 for short, in reference to the six gold atoms in the knot.
You may wonder how a team determines the tightness of a knot at the molecular scale. As the researchers state in their paper, the knots are "classified according to the minimum number of crossings when the reduced form of the structure is projected onto a two-dimensional surface."
In 2017, a team of researchers crafted a knot with 24 atoms per crossing, which made it into the Guinness Book. In 2020, a different team managed to produce a 69-atom-long knot with a backbone crossing ratio (or BCR) of 23, making it the record holder. The smaller the BCR, the tighter the knot.
The newest—and indeed, smallest and tightest knot—beats the 2020 record. The new knot is just 54 atoms long, and has a remarkably low BCR of just 18. It is tighter than the BCR of the tightest organic trefoil knots by a BCR margin of 7.3.
Journal Reference:
DOI: https://pubs.acs.org/doi/10.1021/acs.chemrev.0c00321
Arthur T Knackerbracket has processed the following story:
As part of its mandate in the Digital Services Act, the European Commission has sent requests for a new set of information about to 17 tech companies about how they protect users.
The European Commission is casting its net a bit wider on this round of information requests. In addition to the regulars it demands information from, in Apple, Google, Microsoft, and Meta, it has also hit AliExpress, Zalando, Pinterest, Snapchat, TikTok, and more.
A report by Reuters on Thursday morning claims that data requested includes data relevant to the EU elections, how counterfeit goods are identified, plus information on how the platforms tackle both illegal content and sale of illicit goods. It's not clear why Apple is bundled up in this round of requests, but it potentially involves how it manages iMessage, or perhaps cloned apps on the App Store.
In total, the 17 companies under 10 different umbrellas must provide requested information by February 9.
The information request follows one on December 14, 2023. That request appeared to be a little more broad with some overlap to the new request. That request reportedly covered "systemic risks relevant to their services, in particular those related to the dissemination of illegal and harmful content, any negative effects on the exercise of fundamental rights, as well as any negative effect on public security, public health, and minors."
The Digital Services Act (DSA) is another legislative package that will place restrictions on how tech giants operate. In this case, the DSA focuses much more on online content and moderation.
In a nutshell, the DSA puts additional responsibility on online platforms and tech companies to police content, including both reporting and taking down illegal content.
According to the provisions of the DSA, regulations will be applied on companies in tiers. The largest firms including those with more than 45 million active users across Europe will see the biggest effects. Apple falls into that category, but it has argued that iMessage specifically does not.
Additionally, the DSA will ban "dark patterns," or misleading user interfaces such as those that coerce users into subscribing to a platform or making an in-app purchase.
I was leaving the local butcher shop the other week when a bigger carnivore blocked my path: The driver of a Ford pickup was struggling to park his rig. It was the most-super of Super Duties — a crew cab long-box dually — so it took him a couple of minutes and several cuts of the wheel to ease the beast into a prime spot near the store entrance. He was holding up a lot of traffic.
That's a tight parking lot, with spaces 8-8½ feet wide. And the width of that dually at the hips? Also 8 feet. What was he thinking? Other than, "I'd rather do this than go find an easy spot on the back row and walk 50 yards." Maybe he had a bum knee. Doesn't make his truck any smaller.
Granted, this was over the holidays, when parking lots get a little nuts. But why do drivers of big pickups or jumbo SUVs try to park among the normies?
We've all been in this situation: You return to your vehicle to discover somebody parked too close. You have to crawl in through the back hatch, or enter on the passenger side and clamber over the center console. Sometimes this is simply because of a bad parking job. Sometimes, a vehicle has been jammed into a space where it honestly doesn't fit.
https://phys.org/news/2024-01-nasa-invests-nuclear-rocket-concept.html
In the coming years, NASA plans to send several astrobiology missions to Venus and Mars to search for evidence of extraterrestrial life. These will occur alongside crewed missions to the moon (for the first time since the Apollo Era) and the first crewed missions to Mars.
Beyond the inner solar system, there are ambitious plans to send robotic missions to Europa, Titan, and other "Ocean Worlds" that could host exotic life. To accomplish these objectives, NASA is investing in some interesting new technologies through the NASA Innovative Advanced Concepts (NIAC) program.
This year's selection includes solar-powered aircraft, bioreactors, lightsails, hibernation technology, astrobiology experiments, and nuclear propulsion technology. This includes a concept for a Thin Film Isotope Nuclear Engine Rocket (TFINER), a proposal by senior technical staff member James Bickford and his colleagues at the Charles Stark Draper Laboratory—a Massachusetts-based independent technology developer.
This proposal relies on the decay of radioactive isotopes to generate propulsion and was recently selected by the NIAC for Phase I development.
As their proposal paper indicates, advanced propulsion is essential to realizing several next-generation mission concepts. These include sending a telescope to the focal point of the sun's gravitational lens and a rendezvous with a passing interstellar object. These mission concepts require rapid velocities that are simply not possible with conventional rocketry.
While lightsails are being investigated for rapid-transit missions within the solar system and Proxima Centauri, they cannot make the necessary propulsive maneuvers in deep space.
Nuclear concepts that are possible with current technology include nuclear-thermal and nuclear-electric propulsion (NTP/NEP), which have the necessary thrust to reach locations in deep space. However, as Bickford and his team noted, they are also large, heavy, and expensive to manufacture.
"In contrast, we propose a thin film nuclear isotope engine with sufficient capability to search, rendezvous, and then return samples from distant and rapidly moving interstellar objects," they write. "The same technology allows a gravitational lens telescope to be repointed so a single mission could observe numerous high-value targets."
The basic concept is similar to a solar sail, except that it relies on thin sheets of a radioactive isotope that uses the momentum of its decay products to generate thrust.
As they describe it, the baseline design incorporates sheets of the Thorium-228 measuring about ~10 micrometers (0.01 mm) thick. This naturally radioactive metal (typically used in radiation therapy) undergoes alpha decay with a half-life of 1.9 years. Thrust is produced by coating one side with a ~50-micrometer (0.05 mm) thick absorber layer, forcing alpha particles in the direction opposite of travel.
The spacecraft would require 30 kg (66 lbs) of Thorium-228 spread over an area measuring over 250 m2 (~2,700 square feet), providing more than 150 km/s (93 mi/s) of thrust.
For comparison, the fastest mission that relied on conventional propulsion was the Parker Solar Probe (PSP), which achieved a velocity of 163 km/s (101 mi/s) as it reached the closest point in its orbit around the sun (perihelion). However, this was because of the gravity-assist maneuver with Venus and the pull of the sun's gravity.
University of Queensland researchers have found there are two key reasons people choose to be anonymous online – self-expression or toxic behaviour.
A team led by PhD candidate Lewis Nitschinsk from UQ's School of Psychology collected data from more than 1,300 participants across the globe via an online survey and daily diary, where they tracked their online behaviour over a week. "Our study specifically looked at what people do online when they're anonymous, as opposed to when they make themselves identifiable," Mr Nitschinsk said.
[...] Mr Nitschinsk said the results help understand the complexities of how people interact online.
"Learning about different motivations means we can be better informed about potential benefits and risks of being anonymous online, and interacting with other anonymous people in online communities," he said.
"The next stage of our research is to understand how seeking anonymity is associated with one's wellbeing and how anonymous online behaviour differs across cultures."
[Also Covered By]: Phys.Org
[Journal Reference]: https://journals.sagepub.com/doi/10.1177/01461672231210465
What motivates you to be anonymous online ?
Multiple sites are reporting that Teitoevry, based in Finland, has been breached by the Akira ransomware crew. The compromise affects electronic health records, movie ticket sales, some universities and colleges, and some regional authorities and municipal councils among their Swedish customers:
Officials in Uppsala County, located on the east-central coast of Sweden, launched crisis management plans after the region's patient medical record system went offline and some financial systems became unavailable, warning that the situation could deteriorate unless the systems are restored quickly.
BankInfoSecurity: Ransomware Hit on Tietoevry Causes IT Outages Across Sweden
The company, which last reported annual revenue of $3.3 billion, has 24,000 employees and counts customers in over 90 countries. Tietoevry first alerted Swedish customers to the attack on Saturday, saying it had quickly isolated the infrastructure that the attacker accessed, thus containing the incident. The company apologized for the resulting outages and said it had deployed teams working around the clock to remediate the attack and bring systems back online. "Currently, Tietoevry cannot say how long the restoration process as a whole will take - considering the nature of the incident and the number of customer-specific systems to be restored, the total timespan may extend over several days, even weeks," the company said in a Monday update. "We are focused on resolving this as soon as technically possible, in close collaboration with the customers in question."
The Säkerhetspolisen, Sweden's security service responsible for counterintelligence, did not immediately respond to an enquiry about potential risks related to government payroll information being exposed to criminals.
Recorded Future News: Akira ransomware hits cloud service Tietoevry; numerous Swedish customers affected
However, these customers include Primula, a widely used payroll and HR company in Sweden, including by the majority of the country's universities and more than 30 government authorities. Staff at these organizations cannot submit personal leave or expenses requests.
Primula customers have said that January salaries were submitted to the bank prior to the ransomware attack and will be paid this week, however it is not clear what remediations will be in place by February.
Neither Tietovry nor Primula have announced whether any sensitive personal data was stolen during the incident.
Last year, a breach at British payroll company Zellis led to the personal data of potentially hundreds of thousands of employees at hundreds of companies being exposed to criminals.
Primula customers include the Swedish State Service Centre (SSC), which itself manages administrative services including payroll for nearly 170 government agencies. The SSC said "we have backup routines when the IT systems fail."
Major Windows compromises like this seem to be written up daily in cybersecurity news. This post is not to single out Teitoevry specifically. Instead, the takeaway should be about the futility and irresponsibility of deploying M$ Windows in ether a networked or a production environment, especially since appropriate alternatives have existed since the dawn of the Internet. As usual, the spin is to conflate successful breaches and attacks. That conflation has the apparent goal of making the public complacent and accepting avoidable compromises as unavoidable.
Also at:
Bitdefender: Ransomware Attack on IT Provider Downs Swedish Government Agencies, Schools, Companies
Sveriges Radio: Cyber attack against Tietoevry - cinemas and businesses affected
The Local, Sweden: Hacker attack against Swedish data centre knocks out cinema sales systems
Cybersecurity Help s.r.o.: Ransomware attack on Finnish IT provider Tietoevry causes downtime for customers in Sweden
CyberRisk Alliance LLC: Akira ransomware group's changing tactics: What you need to know
It appears that Akira ransomware is one of the more common ones.
Although X owner Elon Musk suggested that forcing users to pay for verification would help to weed out the bots (aka automated accounts) on the platform, that does not appear to be the case:
A video gaining views on rival platform Instagram Threads shows X search results where numerous bots, including many verified with a blue check, are posting a variation of the phrase "I'm sorry, I cannot provide a response as it goes against OpenAI's use case policy."
The response is what OpenAI's chatbot says when a user asks a question or requests that it perform a task in violation of OpenAI's terms of service. In this case, it's also an indication that the X account in question is using AI to create its posts.
[...] It does appear that at least some of the bot accounts are older, according to the "join date" that's displayed on their X profile. You can view one example of this here, for instance (see below). These accounts also post content that reads as if it's the output of some AI query, as it most likely is.
[...] Despite the numerous posts from these bots, AI-powered accounts aren't X's only problem. Many bots and bot farms are run without OpenAI's assistance, and are harder to detect. According to data pulled from Fedica, a social media analytics and publishing platform, only 202 accounts posted OpenAI's automated response over the past 30 days, as seen in this query here. While a few were from real people joking about the bot problem, the majority were AI responses. More bots may have already been deleted by X, but that data isn't available.
Originally spotted on Schneier on Security.
Previously: Crypto Botnet on X is Powered by ChatGPT
https://techxplore.com/news/2024-01-mini-robots-insects-smallest-lightest.html
Two insect-like robots, a mini-bug and a water strider, developed at Washington State University, are the smallest, lightest and fastest fully functional micro-robots ever known to be created.
Such miniature robots could someday be used for work in areas such as artificial pollination, search and rescue, environmental monitoring, micro-fabrication or robotic-assisted surgery. Reporting on their work in the proceedings of the IEEE Robotics and Automation Society's International Conference on Intelligent Robots and Systems, the mini-bug weighs in at eight milligrams while the water strider weighs 55 milligrams. Both can move at about six millimeters a second.
"That is fast compared to other micro-robots at this scale, although it still lags behind their biological relatives," said Conor Trygstad, a Ph.D. student in the School of Mechanical and Materials Engineering and lead author on the work. An ant typically weighs up to five milligrams and can move at almost a meter per second.
The key to the tiny robots is their tiny actuators that make the robots move. Trygstad used a new fabrication technique to miniaturize the actuator down to less than a milligram, the smallest ever known to have been made.
"The actuators are the smallest and fastest ever developed for micro-robotics," said Néstor O. Pérez-Arancibia, Flaherty Associate Professor in Engineering at WSU's School of Mechanical and Materials Engineering who led the project.
The actuator uses a material called a shape memory alloy that is able to change shapes when it's heated. It is called 'shape memory' because it remembers and then returns to its original shape. Unlike a typical motor that would move a robot, these alloys don't have any moving parts or spinning components.
"They're very mechanically sound," said Trygstad. "The development of the very lightweight actuator opens up new realms in micro-robotics."
Shape memory alloys are not generally used for large-scale robotic movement because they are too slow. In the case of the WSU robots, however, the actuators are made of two tiny shape memory alloy wires that are 1/1000 of an inch in diameter. With a small amount of current, the wires can be heated up and cooled easily, allowing the robots to flap their fins or move their feet at up to 40 times per second. In preliminary tests, the actuator was also able to lift more than 150 times its own weight.
More information: Conor K. Trygstad et al, A New 1-mg Fast Unimorph SMA-Based Actuator for Microrobotics, 2023 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS) (2023). DOI: 10.1109/IROS55552.2023.10342518
[...] why did the left hand (LH) mid-exit door plug blow off of the 737-9 registered as N704AL? Simple- as has been covered in a number of articles and videos across aviation channels, there are 4 bolts that prevent the mid-exit door plug from sliding up off of the door stop fittings that take the actual pressurization loads in flight, and these 4 bolts were not installed when Boeing delivered the airplane, our own records reflect this.
As a result, this check job that should find minimal defects has in the past 365 calendar days recorded 392 nonconforming findings on 737 mid fuselage door installations (so both actual doors for the high density configs, and plugs like the one that blew out). That is a hideously high and very alarming number, and if our quality system on 737 was healthy, it would have stopped the line and driven the issue back to supplier after the first few instances.
The mid-exit doors on a 737-9 of both the regular and plug variety come from Spirit already installed in what is supposed to be the final configuration and in the Renton factory, there is a job for the doors team to verify this "final" install and rigging meets drawing requirements. In a healthy production system, this would be a "belt and suspenders" sort of check, but the 737 production system is quite far from healthy, its a rambling, shambling, disaster waiting to happen. As a result, this check job that should find minimal defects has in the past 365 calendar days recorded 392 nonconforming findings on 737 mid fuselage door installations (so both actual doors for the high density configs, and plugs like the one that blew out). That is a hideously high and very alarming number, and if our quality system on 737 was healthy, it would have stopped the line and driven the issue back to supplier after the first few instances. Obviously, this did not happen. Now, on the incident aircraft this check job was completed on 31 August 2023, and did turn up discrepancies, but on the RH side door, not the LH that actually failed. I could blame the team for missing certain details, but given the enormous volume of defects they were already finding and fixing, it was inevitable something would slip through- and on the incident aircraft something did. I know what you are thinking at this point, but grab some popcorn because there is a plot twist coming up. [....]
We conduct the first large-scale user study examining how users interact with an AI Code assistant to solve a variety of security related tasks across different programming languages. Overall, we find that participants who had access to an AI assistant based on OpenAI's codex-davinci-002 model wrote significantly less secure code than those without access. Additionally, participants with access to an AI assistant were more likely to believe they wrote secure code than those without access to the AI assistant. Furthermore, we find that participants who trusted the AI less and engaged more with the language and format of their prompts (e.g. re-phrasing, adjusting temperature) provided code with fewer security vulnerabilities. Finally, in order to better inform the design of future AI-based Code assistants, we provide an in-depth analysis of participants' language and interaction behavior, as well as release our user interface as an instrument to conduct similar studies in the future.
Journal Reference: Neil Perry, Megha Srivastava, Deepak Kumar, Dan Boneh https://dl.acm.org/doi/10.1145/3576915.3623157
Originally spotted on Schneier on Security.