Stories
Slash Boxes
Comments

SoylentNews is people

SoylentNews is powered by your submissions, so send in your scoop. Only 16 submissions in the queue.

Log In

Log In

Create Account  |  Retrieve Password


Site News

Join our Folding@Home team:
Main F@H site
Our team page


Funding Goal
For 6-month period:
2022-07-01 to 2022-12-31
(All amounts are estimated)
Base Goal:
$3500.00

Currently:
$438.92

12.5%

Covers transactions:
2022-07-02 10:17:28 ..
2022-10-05 12:33:58 UTC
(SPIDs: [1838..1866])
Last Update:
2022-10-05 14:04:11 UTC --fnord666

Support us: Subscribe Here
and buy SoylentNews Swag


We always have a place for talented people, visit the Get Involved section on the wiki to see how you can make SoylentNews better.

Idiosyncratic use of punctuation - which of these annoys you the most?

  • Declarations and assignments that end with }; (C, C++, Javascript, etc.)
  • (Parenthesis (pile-ups (at (the (end (of (Lisp (code))))))))
  • Syntactically-significant whitespace (Python, Ruby, Haskell...)
  • Perl sigils: @array, $array[index], %hash, $hash{key}
  • Unnecessary sigils, like $variable in PHP
  • macro!() in Rust
  • Do you have any idea how much I spent on this Space Cadet keyboard, you insensitive clod?!
  • Something even worse...

[ Results | Polls ]
Comments:64 | Votes:119

posted by janrinok on Saturday June 10 2023, @07:26PM   Printer-friendly

US Patent Office Proposes Rule To Make it Much Harder To Kill Bad Patents:

So, this is bad. Over the last few years, we've written plenty about the so-called "inter partes review" or "IPR" that came into being about a decade ago as part of the "America Invents Act," which was the first major change to the patent system in decades. For much of the first decade of the 2000s, patent trolls were running wild and creating a massive tax on innovation. There were so many stories of people (mostly lawyers) getting vague and broad patents that they never had any intention of commercializing, then waiting for someone to come along and build something actually useful and innovative... and then shaking them down with the threat of patent litigation.

The IPR process, while not perfect, was at least an important tool in pushing back on some of the worst of the worst patents. In its most basic form, the IPR process allows nearly anyone to challenge a bad patent and have the special Patent Trial and Appeal Board (PTAB) review the patent to determine if it should have been granted in the first place. Given that a bad patent can completely stifle innovation for decades this seems like the very least that the Patent Office should offer to try to get rid of innovation-killing bad patents.

However, patent trolls absolutely loathe the IPR process for fairly obvious reasons. It kills their terrible patents. The entire IPR process has been challenged over and over again and (thankfully) the Supreme Court said that it's perfectly fine for the Patent Office to review granted patents to see if they made a mistake.

But, of course, that never stops the patent trolls. They've complained to Congress. And, now, it seems that the Patent Office itself is trying to help them out. Recently, the USPTO announced a possible change to the IPR process that would basically lead to limiting who can actually challenge bad patents, and which patents could be challenged.

The folks over at EFF are rightly raising the alarm about just how bad this could be if it goes into effect.

The U.S. Patent Office has proposed new rules about who can challenge wrongly granted patents. If the rules become official, they will offer new protections to patent trolls. Challenging patents will become far more onerous, and impossible for some. The new rules could stop organizations like EFF, which used this process to fight the Personal Audio "podcasting patent," from filing patent challenges altogether.


Original Submission

posted by janrinok on Saturday June 10 2023, @02:43PM   Printer-friendly
from the good-doctors dept.

https://arstechnica.com/health/2023/06/calif-hospital-staff-call-for-halt-of-surgeries-over-bizarre-particles/

More than 70 staff members of a San Diego-area hospital are calling for a halt of all surgeries at the facility due to unidentified black, brown, and gray specks on surgical trays, the San Diego Union-Tribune reported.

The objecting staff have signed a petition to spur hospital officials to pause procedures until the issue is resolved. But officials at the facility, the Kaiser Permanente Zion Medical Center, have rejected the call, according to the Union-Tribune. A spokesperson for the facility did not respond to voicemails from Ars.

[...] Haynes [ a surgical technician at Zion] added that management had assured staff that the particles—whatever they are—are sterile. Surgical equipment goes through a two-step process before use: a wash and then a trip through an autoclave, a pressurized steam machine used for sterilization. But Haynes argued that simply being sterilized doesn't mean it's fit for surgery.

"The fact that a contaminant is "safe" (not a microbe) doesn't mean that contaminant is implantable," she said.

The Union-Tribune noted that the hospital's troubles seemed to begin last month when the facility reported a problem with its hot water lines.

[...] Earlier this year, researchers at a Boston hospital reported on water purification systems in hospital ice machines inadvertently stripping out chlorine, leading to the deaths of three patients.

Leapfrog, a national nonprofit watchdog of hospital quality and safety, recently gave the Zion Medical Center an "A" grade.


Original Submission

posted by janrinok on Saturday June 10 2023, @12:51PM   Printer-friendly

(Update appears at bottom.)


Most people who have been on the site more than a few months will know Martyb / bytram well. He has filled so many different roles, many simultaneously, and he has been with the site from well before the 'official' opening. He has done as much as anyone, if not more, to create the site we have today. He has worked as an editor, the Editor-in-Chief, bug squasher, QA, coder, and almost anything that he felt he could turn his hand to - and he could do most things.

Marty has always been known for his calm attitude and wisdom in many situations and if anyone needed help or advice Marty could be reliably called upon to assist. Nothing was ever too much trouble. He is a personal friend of mine - even though we have never met face-to-face - and he has also been the friend of every member of staff that he has encountered during the last 9 years or more.

Unfortunately, Marty suffered a severe stroke quite a while back, in fact two major strokes and quite a few 'minor' ones. It has affected his eyesight and his dexterity. If you know anyone who has had a stroke you will know that the recovery is long, slow and at times very disheartening. When Marty had to stand down from his post I stepped in to replace him - a task that I knew I could never really achieve to his standards. I have always told him that I am keeping his seat warm until he can return. He is not quite ready for that yet. However, Marty has achieved an unbelievable number of stories processed from submissions to front page stories - over 11,000 stories. Any editor will tell you that is an enormous amount of effort for anybody.

But Marty had one more objective and aim that has kept him going through much of his recovery to date. He wanted to reach the 11,111 story mark. Because of his current condition he can often only type at a very slow rate, less than 1 character per second and with only 1 hand. That has been furthered hindered by his poor eyesight. He reached that mark in November - and immediately had his milestone snatched away when there was a system crash and several weeks of his work disappeared.

So Marty did what he always does. He gritted his teeth and started again. Yesterday Marty reached the 11,111 story milestone and I am writing this to make sure that as many people as possible are aware of it so that, in the event of another disaster, we will remember what he has achieved. In fact, he has overshot his target and as I type this he stands at 11,112 stories processed, but I can forgive him that.

Marty, I tip my hat to you, and on behalf of this community I offer you our congratulations and best wishes for your continued recovery. Your contribution is unequaled in so many areas, and many of us have learned so much from you. You are also noted for your use of terrible puns - which is not improving at all! That is, I think, a good sign too.

I am still keeping your seat warm...

janrinok


Update:

JR: Thank-you so very much for taking the time and making the effort to commemorate this occasion. That said, I do believe that you do NOT give yourself proper credit for all that YOU have contributed to this site!

You tucked me under your wing and taught me, a newbie, all the vagaries of producing a *proper* story. It is not that it is that difficult, but there ARE many moving parts that need to be checked and verified. You were patient beyond measure with this energetic, fearful, and impatient nerd. In other - less capable hands - I would have given up and called it quits!

But that was far from everything that you did. As of this writing, janrinok has single-handedly posted 7,885 stories. This, in addition to all the other things he has done to keep the sight running smoothly. He single-handedly wrote a tool to automatically deal with with "users" who would like nothing better that to create new accountants and use them to spew crap across the site.

There's more -- much MUCH more -- but that gives a brief look at just some of the things he does to help the community! So, again, I say "Thanks Janrinok!

posted by hubie on Saturday June 10 2023, @10:00AM   Printer-friendly

Big-name researchers cited the plot of a major movie among a series of AI "disaster scenarios" they said could threaten humanity's existence:

Two of the three so-called "godfathers of AI" are worried - though the third could not disagree more, saying such "prophecies of doom" are nonsense.

When trying to make sense of it in an interview on British television with one of the researchers who warned of an existential threat, the presenter said: "As somebody who has no experience of this... I think of the Terminator, I think of Skynet, I think of films that I've seen."

He is not alone. The organisers of the warning statement - the Centre for AI Safety (CAIS) - used Pixar's WALL-E as an example of the threats of AI.

Science fiction has always been a vehicle to guess at what the future holds. Very rarely, it gets some things right.

Using the CAIS' list of potential threats as examples, do Hollywood blockbusters have anything to tell us about AI doom?

CAIS says "enfeeblement" is when humanity "becomes completely dependent on machines, similar to the scenario portrayed in the film WALL-E".

If you need a reminder, humans in that movie were happy animals who did no work and could barely stand on their own. Robots tended to everything for them.

[...] But there is another, more insidious form of dependency that is not so far away. That is the handing over of power to a technology we may not fully understand, says Stephanie Hare, an AI ethics researcher and author of Technology Is Not Neutral.

[...] So what happens when someone has "a life-altering decision" - such as a mortgage application or prison parole - refused by AI?

Today, a human could explain why you didn't meet the criteria. But many AI systems are opaque and even the researchers who built them often don't fully understand the decision-making.

"We just feed the data in, the computer does something.... magic happens, and then an outcome happens," Dr Hare says.

The technology might be efficient, but it's arguable it should never be used in critical scenarios like policing, healthcare, or even war, she says. "If they can't explain it, it's not okay."

The true villain in the Terminator franchise isn't the killer robot played by Arnold Schwarzenegger, it's Skynet, an AI designed to defend and protect humanity. One day, it outgrew its programming and decided that mankind was the greatest threat of all - a common film trope.

We are of course a very long way from Skynet. But some think that we will eventually build an artificial generalised intelligence (AGI) which could do anything humans can but better - and perhaps even be self-aware.

[...] What we have today is on the road to becoming something more like Star Trek's shipboard computer than Skynet. "Computer, show me a list of all crew members," you might say, and our AI of today could give it to you and answer questions about the list in normal language.

[...] Another popular trope in film is not that the AI is evil - but rather, it's misguided.

In Stanley Kubrick's 2001: A Space Odyssey, we meet HAL-9000, a supercomputer which controls most of the functions of the ship Discovery, making the astronaut's lives easier - until it malfunctions.

[...] In modern AI language, misbehaving AI systems are "misaligned". Their goals do not seem to match up with the human goals.

Sometimes, that's because the instructions were not clear enough and sometimes it's because the AI is smart enough to find a shortcut.

For example, if the task for an AI is "make sure your answer and this text document match", it might decide the best path is to change the text document to an easier answer. That is not what the human intended, but it would technically be correct.

[...] "How would you know the difference between the dream world and the real world?" Morpheus asks a young Keanu Reeves in 1999's The Matrix.

The story - about how most people live their lives not realising their world is a digital fake - is a good metaphor for the current explosion of AI-generated misinformation.

Dr Hare says that, with her clients, The Matrix us a useful starting point for "conversations about misinformation, disinformation and deepfakes".

[...] "I think AI will transform a lot of sectors from the ground up, [but] we need to be super careful about rushing to make decisions based on feverish and outlandish stories where large leaps are assumed without a sense of what the bridge will look like," he warns.


Original Submission

posted by Fnord666 on Saturday June 10 2023, @05:15AM   Printer-friendly
from the my-prints-smell-like-espresso dept.

Used coffee pods can be recycled to produce filaments for 3D printing:

An article published in the journal ACS Sustainable Chemistry & Engineering brings good news for coffee buffs: the plastic in used coffee pods can be recycled to make filament for 3D printers, minimizing its environmental impact.

[...] "We produced new conductive and non-conductive filaments from waste polylactic acid [PLA] from used coffee machine pods. There are many applications for these filaments, including conductive parts for machinery and sensors," Bruno Campos Janegitz, a co-author of the article, told Agência FAPESP. Janegitz heads the Sensors, Nanomedicine and Nanostructured Materials Laboratory (LSNano) at UFSCar in Araras, São Paulo state.

[...] Although reusable pods exist and some suppliers promote recycling of aluminum pods, most consumers just throw used pods into the garbage bin, especially if they are made of plastic. Considering all the factors involved, calculations made by the São Paulo State Technological Research Institute (IPT) show that "a cup of pod coffee can be as much as 14 times more damaging to the environment than a cup of filter coffee".

To develop uses for this waste, the researchers produced electrochemical cells with non-conductive filaments of PLA and electrochemical sensors with conductive filaments prepared by adding carbon black to the PLA. Carbon black is a paracrystalline form of carbon that results from incomplete combustion of hydrocarbons. "The electrochemical sensors were used to determine the proportion of caffeine in black tea and arabica coffee," Janegitz explained.

Production of filament is relatively simple, he added. "We obtain the non-conductive material simply by washing and drying PLA pods, followed by hot extrusion. To obtain the conductive material, we add carbon black before heating and extrusion. The extruded material is then cooled and spooled to produce the filament of interest," he explained.

Journal Reference:
Evelyn Sigley, Cristiane Kalinke, Robert D. Crapnell, et al., Circular Economy Electrochemistry: Creating Additive Manufacturing Feedstocks for Caffeine Detection from Post-Industrial Coffee Pod Waste [open], ACS Sustainable Chem. Eng. 2023, 11, 7, 2978–2988 https://doi.org/10.1021/acssuschemeng.2c06514

 


Original Submission

posted by hubie on Saturday June 10 2023, @12:31AM   Printer-friendly
from the exciting-and-new dept.

Meet the most energy-efficient electric, solar cruise ship:

On Wednesday, Norwegian cruise line company Hurtigruten revealed plans for a first-of-its-kind zero-emission ship. The electric-powered cruise ship will feature retractable sails with solar panels to harness energy from the wind and sun while storing it in powerful batteries.

Although only 0.1% of Hurtigruten Norway's ships currently use zero-emission technology, the company is planning a drastic overhaul.

Its first concept, "Sea Zero," is expected to be the world's most energy-efficient cruise ship. The company initially revealed the project last March as part of its ambition to become a leader in sustainable travel.

Its first electric cruise ship, due out in 2030, will combine 60 MWh battery packs with several industry firsts to harness wind and solar while at sea for a truly zero-emission experience.

For example, the company plans to include three retractable, autonomous sails with added solar panels. The wing rigs are designed to enhance aerodynamics, pulling in air currents at up to 50 meters for added propulsion.

Hurtigruten says that during the summer, the ship "will be superpowered by northern Norway's midnight sun that shines for 24 hours a day."

The three retractable wings will comprise 1500 m² (16,146 ft²) of solar panels with a total wind surface of 750 m² (8,073 ft²).

Renewable energy from the sails or the charging port is stored in the ship's giant 60 MWh battery storage system. There's even an indicator on the side of the vessel to show the battery level. The company says it's looking for cobalt-free battery chemistries with minimal nickel to keep costs down.

[...] To reduce underwater drag, the two thrusters at the stern will retract into the hull while cruising. Meanwhile, the company is developing an underwater air lubrication system to allow the electric ship to "surf" on a carpet of bubbles.

[...] The electric solar-powered cruise ship concept is 443 feet long and is set to host 500 passengers across 270 cabins.

Sea Zero is still in its early stages of research and development as the Norwegian cruise line gears up for its launch by 2030. Over the next two years, the company will test and develop the proposed technology as it works toward a final design.


Original Submission

posted by hubie on Friday June 09 2023, @07:48PM   Printer-friendly

A new study by researchers at the University of Rhode Island shows some of the best evidence yet for a feedback loop phenomenon in which species evolution drives ecological change:

The story of the peppered moths is a textbook evolutionary tale. As coal smoke darkened tree bark near England's cities during the Industrial Revolution, white-bodied peppered moths became conspicuous targets for predators and their numbers quickly dwindled. Meanwhile, black-bodied moths, which had been rare, thrived and became dominant in their newly darkened environment.

The peppered moths became a classic example of how environmental change drives species evolution. But in recent years, scientists have begun thinking about the inverse process. Might there be a feedback loop in which species evolution drives ecological change? Now, a new study by researchers at the University of Rhode Island shows some of the best evidence yet for that very phenomenon.

In research published in the Proceedings of the National Academy of Sciences, the researchers show that an evolutionary change in the length of lizards' legs can have a significant impact on vegetation growth and spider populations on small islands in the Bahamas. This is one of the first times, the researchers say, that such dramatic evolution-to-environment effects have been documented in a natural setting.

[...] Armed with specialized lizard wrangling gear—poles with tiny lassos made of dental floss at the end—the team captured hundreds of brown anoles. They then measured the leg length of each lizard, keeping the ones whose limbs were either especially long or especially short and returning the rest to the wild. Once they had distinct populations of short- and long-limbed lizards, they set each population free on islands that previously had no lizards living on them.

Since the experimental islands were mostly covered by smaller diameter vegetation, the researchers expected that the short-legged lizards would be better adapted to that environment, that is, more maneuverable and better able to catch prey in the trees and brush. The question the researchers wanted to answer was whether the ecological effects of those highly effective hunters could be detected.

After eight months, the researchers checked back on the islands to look for ecological differences between islands stocked with the short- and long-legged groups. The differences, it turned out, were substantial. On islands with shorter-legged lizards, populations of web spiders—a key prey item for brown anoles—were reduced by 41% compared to islands with lanky lizards. There were significant differences in plant growth as well. Because the short-legged lizards were better at preying on insect herbivores, plants flourished. On islands with short-legged lizards, buttonwood trees had twice as much shoot growth compared to trees on islands with long-legged lizards, the researchers found.

The results, Kolbe says, help to bring the interaction between ecology and evolution full circle.

Journal Reference:
Kolbe, Jason J. et al, Experimentally simulating the evolution-to-ecology connection: Divergent predator morphologies alter natural food webs, PNAS (2023). DOI: 10.1073/pnas.2221691120


Original Submission

posted by hubie on Friday June 09 2023, @03:03PM   Printer-friendly

Interesting article relating to Google/OpenAI vs. Open Source for LLMs

Leaked Internal Google Document Claims Open Source AI Will Outcompete Google and OpenAI:

The text below is a very recent leaked document, which was shared by an anonymous individual on a public Discord server who has granted permission for its republication. It originates from a researcher within Google. We have verified its authenticity. The only modifications are formatting and removing links to internal web pages. The document is only the opinion of a Google employee, not the entire firm. We do not agree with what is written below, nor do other researchers we asked, but we will publish our opinions on this in a separate piece for subscribers. We simply are a vessel to share this document which raises some very interesting points.

We've done a lot of looking over our shoulders at OpenAI. Who will cross the next milestone? What will the next move be?

But the uncomfortable truth is, we aren't positioned to win this arms race and neither is OpenAI. While we've been squabbling, a third faction has been quietly eating our lunch.

I'm talking, of course, about open source. Plainly put, they are lapping us. Things we consider "major open problems" are solved and in people's hands today. Just to name a few:

While our models still hold a slight edge in terms of quality, the gap is closing astonishingly quickly. Open-source models are faster, more customizable, more private, and pound-for-pound more capable. They are doing things with $100 and 13B params that we struggle with at $10M and 540B. And they are doing so in weeks, not months. This has profound implications for us:

  • We have no secret sauce. Our best hope is to learn from and collaborate with what others are doing outside Google. We should prioritize enabling 3P integrations.

  • People will not pay for a restricted model when free, unrestricted alternatives are comparable in quality. We should consider where our value add really is.

  • Giant models are slowing us down. In the long run, the best models are the ones which can be iterated upon quickly. We should make small variants more than an afterthought, now that we know what is possible in the 20B parameter regime.

At the beginning of March the open source community got their hands on their first really capable foundation model, as Meta's LLaMA was leaked to the public. It had no instruction or conversation tuning, and no RLHF. Nonetheless, the community immediately understood the significance of what they had been given.

A tremendous outpouring of innovation followed, with just days between major developments (see The Timeline for the full breakdown). Here we are, barely a month later, and there are variants with instruction tuning, quantization, quality improvements, human evals, multimodality, RLHF, etc. etc. many of which build on each other.

Most importantly, they have solved the scaling problem to the extent that anyone can tinker. Many of the new ideas are from ordinary people. The barrier to entry for training and experimentation has dropped from the total output of a major research organization to one person, an evening, and a beefy laptop.

Lots more stuff in the article. It would be interesting to hear from knowledgeable experts what the primary disagreements to these points are and whether you agree or disagree.


Original Submission

posted by janrinok on Friday June 09 2023, @10:13AM   Printer-friendly

Self-healing code is the future of software development:

One of the more fascinating aspects of large language models is their ability to improve their output through self reflection. Feed the model its own response back, then ask it to improve the response or identify errors, and it has a much better chance of producing something factually accurate or pleasing to its users. Ask it to solve a problem by showing its work, step by step, and these systems are more accurate than those tuned just to find the correct final answer.

While the field is still developing fast, and factual errors, known as hallucinations, remain a problem for many LLM powered chatbots, a growing body of research indicates that a more guided, auto-regressive approach can lead to better outcomes.

This gets really interesting when applied to the world of software development and CI/CD. Most developers are already familiar with processes that help automate the creation of code, detection of bugs, testing of solutions, and documentation of ideas. Several have written in the past on the idea of self-healing code. Head over to Stack Overflow's CI/CD Collective and you'll find numerous examples of technologists putting this ideas into practice.

When code fails, it often gives an error message. If your software is any good, that error message will say exactly what was wrong and point you in the direction of a fix. Previous self-healing code programs are clever automations that reduce errors, allow for graceful fallbacks, and manage alerts. Maybe you want to add a little disk space or delete some files when you get a warning that utilization is at 90% percent. Or hey, have you tried turning it off and then back on again?

Developers love automating solutions to their problems, and with the rise of generative AI, this concept is likely to be applied to both the creation, maintenance, and the improvement of code at an entirely new level.

The ability of LLMs to quickly produce large chunks of code may mean that developers—and even non-developers—will be adding more to the company codebase than in the past. This poses its own set of challenges.

"One of the things that I'm hearing a lot from software engineers is they're saying, 'Well, I mean, anybody can generate some code now with some of these tools, but we're concerned about maybe the quality of what's being generated,'" says Forrest Brazeal, head of developer media at Google Cloud. The pace and volume at which these systems can output code can feel overwhelming. "I mean, think about reviewing a 7,000 line pull request that somebody on your team wrote. It's very, very difficult to do that and have meaningful feedback. It's not getting any easier when AI generates this huge amount of code. So we're rapidly entering a world where we're going to have to come up with software engineering best practices to make sure that we're using GenAI effectively."

"People have talked about technical debt for a long time, and now we have a brand new credit card here that is going to allow us to accumulate technical debt in ways we were never able to do before," said Armando Solar-Lezama, a professor at the Massachusetts Institute of Technology's Computer Science & Artificial Intelligence Laboratory, in an interview with the Wall Street Journal. "I think there is a risk of accumulating lots of very shoddy code written by a machine," he said, adding that companies will have to rethink methodologies around how they can work in tandem with the new tools' capabilities to avoid that.

[Editor's Comment: Much more discussion follows in the linked article.--JR]


Original Submission

posted by janrinok on Friday June 09 2023, @05:33AM   Printer-friendly
from the statistics-alphabet-soup dept.

Several days ago, a New York Times article titled "How New Rules Turned Back the Clock on Baseball" was posted over at Hacker News. The 2023 Major League Baseball (MLB) season has adopted several rule changes including implementing a pitch clock, limiting pickoff attempts, increasing the size of bases, and banning extreme defensive shifts. The results have been dramatic, with a much faster pace of play and a large increase in stolen bases. It is an effort to undo many trends in the game that have been influenced by the rise of advanced metrics.

Statistics have always been a part of baseball, whether it's trying to hit .400, strike out 300 batters, or trying to hit 60 home runs in a season. In the 1990s, typical statistics to measure hitting success were batting average (BA), home runs (HR), and runs batted in (RBI). Pitchers were evaluated with statistics like strikeouts (K), wins (W), earned run average (ERA), and walks and hits per inning pitched (WHIP). During this era, there was an increase in the amount and type of data collected during games, providing far more details for statisticians to analyze.

Some of these statistics like BA, HR, RBI, K, and W really aren't great indicators of the value of a player. For example, wins are heavily influenced both by a team's lineup and the defense behind a pitcher, so they don't correlate well to the quality of a pitcher. Home runs are valuable to an offense, but it's a count instead of a rate, meaning it's influenced heavily by how many plate appearances a hitter receives and how often the hitter takes walks. Statistics like ERA and WHIP were better because they presented as rates, though they were still influenced significantly by the quality of a team's defense. The development of advanced metrics, which are newer and more insightful statistical tools, provided a lot of insight into what is actually valuable to a team's success.

In the present day, statistics like weighted on-base average (wOBA) and wins above replacement (WAR), in addition to many others, are commonly used to measure the value of players. These statistics attempt to determine the true value of each play to a team's success and present them in a single metric. For example, examining the seasonal constants used to calculate wOBA shows that stolen bases aren't particularly valuable compared to even outcomes like taking a walk. It also shows that home runs are more than twice as valuable as a single. This was one factor in changing the typical approach taken by batters.

In some cases, the advanced metrics also differed significantly from conventional wisdom. For many decades, hitters were generally expected to change their hitting approach with two strikes, sacrificing power for just trying to make contact with the baseball. However, advanced metrics revealed that strikeouts weren't much worse for a team's success than groundouts or flyouts. The result was a more aggressive approach to hitting with two-strike counts, accepting much higher strikeout rates in exchange for more doubles, triples, and home runs. Additionally, there is significant value in just getting on base, and walks (BB) are valued almost as much as singles. Hitters generally swung more aggressively at pitches inside the strike zone but also avoided chasing pitches outside of the strike zone.

The result was a trend toward an increase in the three true outcomes (HR, K, and BB), which are plays where only the pitcher and catcher are involved in the defense. In front offices, nerds displaced people who had significant experience playing baseball, because teams coveted their skills in processing and analyzing data. But for many fans, the game had become much less interesting, with slower games and less action involving defense and baserunning. Baseball had been largely optimized with more data collection and many advanced metrics to evaluate players, but the result was a boring product for fans.

There's no way to take the analytics out of baseball, and teams aren't going to start replacing the nerds in front offices with people who have more playing experience. Instead, MLB introduced several new rules this season designed to make the game more entertaining and reduce the negative impacts from expanded use of advanced metrics. Although it has not completely reverted the game of baseball back to the 1990s, the statistics in the New York Times article show that the rule changes have created a faster-paced game with more baserunning.


Original Submission

posted by martyb on Friday June 09 2023, @12:45AM   Printer-friendly
from the even-their-trees-try-to-kill-you dept.

IMB researchers have identified a unique pain pathway targeted by a notorious Australian stinging tree and say it could point the way to new, non-opioid pain relief:

Professor Irina Vetter and her team have studied how toxins in the venom of the Gympie-Gympie tree cause intense pain that can last for weeks.

[...] "The gympietide toxin in the stinging tree has a similar structure to toxins produced by cone snails and spiders, but the similarity ends there," Professor Vetter said.

"This toxin causes pain in a way we've never seen before."

Many toxins cause pain by binding directly to sodium channels in sensory nerve cells, but the UQ researchers have found the gympietide toxin needs assistance to bind.

"It requires a partner protein called TMEM233 to function and in the absence of TMEM233 the toxin has no effect," Professor Vetter said.

"This was an unexpected finding and the first time we've seen a toxin that requires a partner to impact sodium channels."

The team is working to understand whether switching off this pain mechanism might lead to the development of new painkillers.

"The persistent pain the stinging tree toxins cause gives us hope that we can convert these compounds into new painkillers or anaesthetics which have long-lasting effects," Professor Vetter said.

Journal Reference:
Sina Jami, Jennifer R. Deuis, Tabea Klasfauseweh, et al. Pain-causing stinging nettle toxins target TMEM233 to modulate NaV1.7 function (https://doi.org/10.1038/s41467-023-37963-2)


Original Submission

posted by martyb on Thursday June 08 2023, @10:09PM   Printer-friendly
from the here's-the-rest-of-the-story dept.

Snowden Ten Years Later - Schneier on Security:

Snowden Ten Years Later

In 2013 and 2014, I wrote extensively about new revelations regarding NSA surveillance based on the documents provided by Edward Snowden. But I had a more personal involvement as well.

I wrote the essay below in September 2013. The New Yorker agreed to publish it, but the Guardian asked me not to. It was scared of UK law enforcement, and worried that this essay would reflect badly on it. And given that the UK police would raid its offices in July 2014, it had legitimate cause to be worried.

Now, ten years later, I offer this as a time capsule of what those early months of Snowden were like.

It’s a surreal experience, paging through hundreds of top-secret NSA documents. You’re peering into a forbidden world: strange, confusing, and fascinating all at the same time.

I had flown down to Rio de Janeiro in late August at the request of Glenn Greenwald. He had been working on the Edward Snowden archive for a couple of months, and had a pile of more technical documents that he wanted help interpreting. According to Greenwald, Snowden also thought that bringing me down was a good idea.

It made sense. I didn’t know either of them, but I have been writing about cryptography, security, and privacy for decades. I could decipher some of the technical language that Greenwald had difficulty with, and understand the context and importance of various document. And I have long been publicly critical of the NSA’s eavesdropping capabilities. My knowledge and expertise could help figure out which stories needed to be reported.

I thought about it a lot before agreeing. This was before David Miranda, Greenwald’s partner, was detained at Heathrow airport by the UK authorities; but even without that, I knew there was a risk. I fly a lot—a quarter of a million miles per year—and being put on a TSA list, or being detained at the US border and having my electronics confiscated, would be a major problem. So would the FBI breaking into my home and seizing my personal electronics. But in the end, that made me more determined to do it.

I did spend some time on the phone with the attorneys recommended to me by the ACLU and the EFF. And I talked about it with my partner, especially when Miranda was detained three days before my departure. Both Greenwald and his employer, the Guardian, are careful about whom they show the documents to. They publish only those portions essential to getting the story out. It was important to them that I be a co-author, not a source. I didn’t follow the legal reasoning, but the point is that the Guardian doesn’t want to leak the documents to random people. It will, however, write stories in the public interest, and I would be allowed to review the documents as part of that process. So after a Skype conversation with someone at the Guardian, I signed a letter of engagement.

And then I flew to Brazil.

The story concludes:

[...] But now it’s been a decade. Everything he knows is old and out of date. Everything we know is old and out of date. The NSA suffered an even worse leak of its secrets by the Russians, under the guise of the Shadow Brokers, in 2016 and 2017. The NSA has rebuilt. It again has capabilities we can only surmise.

This essay previously appeared in an IETF publication, as part of an Edward Snowden ten-year retrospective.

EDITED TO ADD (6/7): Conversation between Snowden, Greenwald, and Poitras.

Posted on June 6, 2023 at 7:17 AM27 Comments


Original Submission

posted by martyb on Thursday June 08 2023, @07:15PM   Printer-friendly
from the *BIG*-deal dept.

Preparing for the Incoming Computer Shopper Tsunami

There's no way for me to know where your awareness starts with all this, so let's just start at the beginning.

Computer Shopper was a hell of a magazine. I wrote a whole essay about it, which can be summarized as "this magazine got to be very large, very extensive, and probably served as the unofficial 'bible' of the state of hardware and software to the general public throughout the 1980s and 1990s." While it was just a pleasant little computer tabloid when it started in 1979, it quickly grew to a page count that most reasonable people would define as "intimidating".

[...] So, there I was whining online about how it was 2023 and nobody seemed to be scanning in Computer Shopper and we were going to be running into greater and greater difficulty to acquire and process them meaningfully, and I finally, stupidly said that if we happened on a somewhat-complete collection, I'd figure out how to do it.

And then an ebay auction came up that seemed to fit the bill.

Ed note: I well remember. Some editions stretched to 800 or more pages! It seemed that I could barely get through one edition when the next month's edition would come along. Who else remembers?


Original Submission

posted by hubie on Thursday June 08 2023, @02:56PM   Printer-friendly
from the Blackberry dept.

https://www.msn.com/en-us/news/technology/this-raspberry-pi-project-could-give-your-old-blackberry-a-second-life/ar-AA1c4WYV

Opinion:
Scientific studies have shown for decades now that the most efficient, pleasurable, and effective way of communicating with a cell phone is through a keyboard (also applies to laptops!). Double-blind studies of cave rats in Nambia showed that messages typed with a keyboard are 100% more readable than ones without keyboards, or they would be if cave rats knew how to spell. 9 out of 10 doctors agree based on our best analysis of their prescription hand wiring legibility.

While on my weekly quest to see if any new keyboard phones might be somewhere in the future I came across this article from Saturday

Article:

This Raspberry Pi Project Could Give Your Old BlackBerry A Second Life

Indie tech collective Squarofumi, which, in collaboration with the creators of Matrix-based chat app Beeper, have created a Raspberry Pi-powered device in the BlackBerry's image. This device is aptly named the Beepberry, and it combines that classic keyboard with a simplistic interface.

This device is powered by a Raspberry Pi Zero W hooked up to a high-contrast, low-power 400x240 Sharp Memory LCD and a classic, pleasantly tactile keyboard and trackpad. The Beepberry features native support for the Beeper app, a universal chat app that can be used to connect with users on 15 different major chat platforms like WhatsApp, Slack, Discord, and more.

In addition to the nostalgic BlackBerry-style keyboard, the interface of the Beepberry is designed to be as minimalistic as possible, rendering all apps exclusively with text (and some ASCII art, where applicable). If you'd prefer your mobile device to be a bit flashier, the Beepberry is highly customizable in terms of both hardware and software. It features programmable USB and GPIO ports and buttons, and can support any Linux app that's already operable on the Raspberry Pi Zero W. There's even a programmable RGB light on the front of the device for notifications.

With the raspberry pi zero its 99 bucks, without is 79. They are sold out which is sad because I would buy one if they weren't. Keyboard phones are back baby.
https://shop.sqfmi.com/products/beepberry?variant=43376334962843


Original Submission

posted by martyb on Thursday June 08 2023, @10:12AM   Printer-friendly
from the that's-a-smucking-fart-idea! dept.

'Ducking hell' to disappear from Apple autocorrect:

Apple has said it will no longer automatically change one of the most common swear words to 'ducking'.

The autocorrect feature, which has long frustrated users, will soon be able to use AI to detect when you really mean to use that expletive.

"In those moments where you just want to type a ducking word, well, the keyboard will learn it, too," said software boss Craig Federighi.

He announced the development at Apple's developers' conference in California.

iPhone users have often complained about how autocorrect forces them to rewrite their own messages - with the term "damn you autocorrect" becoming an acronym, a meme, an Instagram account and even a song.

[...] Initially flagged in a 2017 paper from Google, transformers are some of the most powerful classes of AI models, and autosuggest - or predictive text - systems are beginning to become more mainstream.

The autocorrect change will be part of the iOS 17 operating system upgrades which are expected to be available as a public beta in July, with the general release in September.


Original Submission