
SoylentNews is people

SoylentNews is powered by your submissions, so send in your scoop. Only 16 submissions in the queue.



Site News

Join our Folding@Home team:
Main F@H site
Our team page


Funding Goal
For 6-month period:
2022-07-01 to 2022-12-31
(All amounts are estimated)
Base Goal:
$3500.00

Currently:
$438.92

12.5%

Covers transactions:
2022-07-02 10:17:28 ..
2022-10-05 12:33:58 UTC
(SPIDs: [1838..1866])
Last Update:
2022-10-05 14:04:11 UTC --fnord666

Support us: Subscribe Here
and buy SoylentNews Swag


We always have a place for talented people, visit the Get Involved section on the wiki to see how you can make SoylentNews better.

Idiosyncratic use of punctuation - which of these annoys you the most?

  • Declarations and assignments that end with }; (C, C++, Javascript, etc.)
  • (Parenthesis (pile-ups (at (the (end (of (Lisp (code))))))))
  • Syntactically-significant whitespace (Python, Ruby, Haskell...)
  • Perl sigils: @array, $array[index], %hash, $hash{key}
  • Unnecessary sigils, like $variable in PHP
  • macro!() in Rust
  • Do you have any idea how much I spent on this Space Cadet keyboard, you insensitive clod?!
  • Something even worse...

[ Results | Polls ]
Comments: 64 | Votes: 119

posted by martyb on Sunday June 11 2023, @10:56PM   Printer-friendly
from the here-kitty-kitty dept.

Critical Schrödinger Cat Code: Quantum Computing Breakthrough for Better Qubits:

What is a "critical Schrödinger cat code?"

In 1935, physicist Erwin Schrödinger proposed a thought experiment as a critique of the prevailing understanding of quantum mechanics at the time – the Copenhagen interpretation. In Schrödinger's experiment, a cat is placed in a sealed box with a flask of poison and a radioactive source. If a single atom of the radioactive source decays, the radioactivity is detected by a Geiger counter, which then shatters the flask. The poison is released, killing the cat.

According to the Copenhagen view of quantum mechanics, if the atom is initially in a superposition, the cat will inherit the same state and find itself in a superposition of alive and dead. "This state represents exactly the notion of a quantum bit, realized at the macroscopic scale," says Savona.

In past years, scientists have drawn inspiration from Schrödinger's cat to build an encoding technique called "Schrödinger's cat code." Here, the 0 and 1 states of the qubit are encoded onto two opposite phases of an oscillating electromagnetic field in a resonant cavity, similar to the dead or alive states of the cat.

"Schrödinger cat codes have been realized in the past using two distinct approaches," explains Savona. "One leverages anharmonic effects in the cavity, the other relies on carefully engineered cavity losses. In our work, we bridged the two by operating in an intermediate regime, combining the best of both worlds. Although previously believed to be unfruitful, this hybrid regime results in enhanced error suppression capabilities." The core idea is to operate close to the critical point of a phase transition, which is what the 'critical' part of the critical cat code refers to.

The critical cat code has an additional advantage: it exhibits exceptional resistance to errors that result from random frequency shifts, which often pose significant challenges to operations involving multiple qubits. This solves a major problem and paves the way to the realization of devices with several mutually interacting qubits – the minimal requirement for building a quantum computer.

"We are taming the quantum cat," says Savona. "By operating in a hybrid regime, we have developed a system that surpasses its predecessors, which represents a significant leap forward for cat qubits and quantum computing as a whole. The study is a milestone on the road toward building better quantum computers, and showcases EPFL's dedication to advancing the field of quantum science and unlocking the true potential of quantum technologies."

Journal Reference:
Luca Gravina, Fabrizio Minganti, Vincenzo Savona. Critical Schrödinger Cat Qubit [open], PRX Quantum (DOI: 10.1103/PRXQuantum.4.020337)


Original Submission

posted by hubie on Sunday June 11 2023, @06:46PM   Printer-friendly
from the antidote-to-the-information-apocalypse dept.

https://arstechnica.com/culture/2023/06/rejoice-its-2023-and-you-can-still-buy-a-22-volume-paper-encyclopedia/

These days, many of us live online, where machine-generated content has begun to pollute the Internet with misinformation and noise. At a time when it's hard to know what information to trust, I felt delight when I recently learned that World Book still prints an up-to-date book encyclopedia in 2023. Although the term "encyclopedia" is now almost synonymous with Wikipedia, it's refreshing to see such a sizable reference printed on paper.
[...]
Its fiercest competitor of yore, The Encyclopedia Britannica, ended its print run in 2012 after 244 years in print.

In a nod to our present digital age, World Book also offers its encyclopedia as a subscription service through the web. Yet it's the print version that mystifies and fascinates me. Why does it still exist?

"Because there is still a demand!" Tom Evans, World Book's editor-in-chief, told Ars over email.
[...]
A World Book rep told Quartz in 2019 that the print encyclopedia sold mostly to schools, public libraries, and homeschooling families. Today, Evans says that public and school libraries are still the company's primary customers. "World Book has a loyal following of librarians who understand the importance of a general reference encyclopedia in print form, accessible to all."


Original Submission

posted by Fnord666 on Sunday June 11 2023, @02:01PM   Printer-friendly
from the how-to-learn-mathematical-thinking dept.

Technology Review is running an unusual book review of books about learning math (pure math, not applied math): https://www.technologyreview.com/2023/04/24/1071371/book-reviews-math-education/
The author admits to being adrift:

As a graduate student in physics, I have seen the work that goes into conducting delicate experiments, but the daily grind of mathematical discovery is a ritual altogether foreign to me. And this feeling is only reinforced by popular books on math, which often take the tone of a pastor dispensing sermons to the faithful.

An initial attempt led him to a MasterClass by a "living legend of contemporary math", but the master, seated in a white armchair with no blackboard, pens, or paper, does not enlighten.

A side story covers a writer for the New Yorker who plans to spend a year going back to learn the high school algebra, geometry, and calculus that escaped him, but mostly fails. For backup he has a niece who is a math professor... but after months without getting it, he complains. Her answer?

"For a moment, think of it as a monastic discipline. You have to take on faith what I tell you." Where his niece and others see patterns and order, he perceives only "incoherence, obfuscation, and chaos"; he feels like a monk who sees lesser angels than everybody around him.

I won't spoil the end, but the author does make some progress with books by mathematician and concert pianist Eugenia Cheng, starting with "Cakes, Custard and Category Theory", where each chapter starts with an analogy to baking.

Unfortunately, for the SN audience, the article does not include any car analogies...


Original Submission

posted by martyb on Sunday June 11 2023, @09:17AM   Printer-friendly
from the b-o-a-t dept.

https://arstechnica.com/science/2023/06/a-telescope-happened-to-be-pointing-at-the-brightest-supernova-yet-observed/

Supernovae are some of the most energetic events in the Universe. And a subset of those involves gamma-ray bursts, where a lot of the energy released comes from extremely high-energy photons. We think we know why that happens in general terms—the black hole left behind after the explosion expels jets of material at nearly the speed of light. But the details of how and where these jets produce photons are far from fully worked out.

Unfortunately, these events happen very quickly and very far away, so it's not easy to get detailed observations of them. However, a recent gamma-ray burst that's been called the BOAT (brightest of all time) may be providing us with new information on the events within a few days of a supernova's explosion.

[...] The "telescope" mentioned is the Large High Altitude Air Shower Observatory (LHAASO). Based nearly three miles (4,400 meters) above sea level, the observatory is a complex of instruments that aren't a telescope in the traditional sense. Instead, they're meant to capture air showers—the complex cascade of debris and photons that are produced when high-energy particles from outer space slam into the atmosphere.


Original Submission

posted by martyb on Sunday June 11 2023, @05:00AM   Printer-friendly
from the cure-for-the-common-code? dept.

Google DeepMind's Game-Playing AI Just Found Another Way to Make Code Faster

Google DeepMind's game-playing AI just found another way to make code faster:

It has also found a way to speed up a key algorithm used in cryptography by 30%. These algorithms are among the most common building blocks in software. Small speed-ups can make a huge difference, cutting costs and saving energy.

"Moore's Law is coming to an end, where chips are approaching their fundamental physical limits," says Daniel Mankowitz, a research scientist at Google DeepMind. "We need to find new and innovative ways of optimizing computing."

"It's an interesting new approach," says Peter Sanders, who studies the design and implementation of efficient algorithms at the Karlsruhe Institute of Technology in Germany and who was not involved in the work. "Sorting is still one of the most widely used subroutines in computing," he says.

DeepMind published its results in Nature today. But the techniques that AlphaDev discovered are already being used by millions of software developers. In January 2022, DeepMind submitted its new sorting algorithms to the organization that manages C++, one of the most popular programming languages in the world, and after two months of rigorous independent vetting, AlphaDev's algorithms were added to the language. This was the first change to C++'s sorting algorithms in more than a decade and the first update ever to involve an algorithm discovered using AI.

DeepMind added its other new algorithms to Abseil, an open-source collection of prewritten C++ algorithms that can be used by anybody coding with C++. These cryptography algorithms compute numbers called hashes that can be used as unique IDs for any kind of data. DeepMind estimates that its new algorithms are now being used trillions of times a day.

[...] DeepMind chose to work with assembly, a programming language that can be used to give specific instructions for how to move numbers around on a computer chip. Few humans write in assembly; it is the language that code written in languages like C++ gets translated into before it is run. The advantage of assembly is that it allows algorithms to be broken down into fine-grained steps—a good starting point if you're looking for shortcuts.
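That fine-grained view is easiest to see with a fixed-size sorting network, which compiles down to a short, branch-free run of compare and conditional-move instructions. The sketch below (a standard three-element network, not AlphaDev's actual discovered sequence) shows the shape of routine involved:

```python
def sort3(a, b, c):
    """Sort three values with a fixed sorting network: three
    compare-exchange steps and no data-dependent branches.
    Compiled to machine code, each step becomes a handful of
    compare/conditional-move instructions -- the kind of short
    sequence where removing even one instruction is a win."""
    a, b = min(a, b), max(a, b)   # order the first pair
    b, c = min(b, c), max(b, c)   # push the largest to the end
    a, b = min(a, b), max(a, b)   # order the remaining pair
    return a, b, c

print(sort3(3, 1, 2))  # (1, 2, 3)
```

Because such routines run astronomically often inside larger sorts, shaving instructions from them is where the reported speed-ups come from.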

Journal Reference:
Daniel J. Mankowitz, Andrea Michi, Anton Zhernov, et al. Faster sorting algorithms discovered using deep reinforcement learning [open], Nature (DOI: 10.1038/s41586-023-06004-9)


Original Submission

posted by janrinok on Sunday June 11 2023, @12:11AM   Printer-friendly
from the tiny-bubbles-in-the-wine-make-me-happy dept.

Fluid mechanics researchers found that surfactants give the celebratory drink its stable and signature straight rise of bubbles:

Here are some scientific findings worthy of a toast: Researchers from Brown University and the University of Toulouse in France have explained why bubbles in Champagne fizz up in a straight line while bubbles in other carbonated drinks, like beer or soda, don't.

The findings, described in a new Physical Review Fluids study, are based on a series of numerical and physical experiments, including, of course, pouring out glasses of Champagne, beer, sparkling water and sparkling wine. The results not only explain what gives Champagne its line of bubbles but may hold important implications for understanding bubbly flows in the field of fluid mechanics.

"This is the type of research that I've been working on for years," said Brown engineering professor Roberto Zenit, who was one of the paper's authors. "Most people have never seen an ocean seep or an aeration tank but most of them have had a soda, a beer or a glass of Champagne. By talking about Champagne and beer, our master plan is to make people understand that fluid mechanics is important in their daily lives."

[...] When it comes to Champagne and sparkling wine, for instance, the gas bubbles that continuously appear rise rapidly to the top in a single-file line and keep doing so for some time. This is known as a stable bubble chain. With other carbonated drinks, like beer, many bubbles veer off to the side, making it look like multiple bubbles are coming up at once. This means the bubble chain isn't stable.

[...] The results of their experiments indicate that the stable bubble chains in Champagne and other sparkling wines occur due to ingredients that act as soap-like compounds called surfactants. These surfactant-like molecules help reduce the tensions between the liquid and the gas bubbles, making for a smooth rise to the top.

"The theory is that in Champagne these contaminants that act as surfactants are the good stuff," said Zenit, senior author on the paper. "These protein molecules that give flavor and uniqueness to the liquid are what makes the bubble chains they produce stable."

The experiments also showed the stability of bubbles is impacted by the size of the bubbles themselves. They found that the chains with large bubbles have a wake similar to that of bubbles with contaminants, leading to a smooth rise and stable chains.

[...] The results in the new study go well beyond understanding the science that goes into celebratory toasts, the researchers said. The findings provide a general framework in fluid mechanics for understanding the formation of clusters in bubbly flows, which have economic and societal value.

Technologies that use bubble-induced mixing, like aeration tanks at water treatment facilities, for instance, would benefit greatly from researchers having a clearer understanding of how bubbles cluster, their origins and how to predict their appearance. In nature, understanding these flows may help better explain ocean seeps in which methane and carbon dioxide emerges from the bottom of the ocean.

Journal Reference:
Omer Atasi, Mithun Ravisankar, Dominique Legendre, and Roberto Zenit, Presence of surfactants controls the stability of bubble chains in carbonated drinks, Phys. Rev. Fluids 8, 053601 DOI: 10.1103/PhysRevFluids.8.053601


Original Submission

posted by janrinok on Saturday June 10 2023, @07:26PM   Printer-friendly

US Patent Office Proposes Rule To Make it Much Harder To Kill Bad Patents:

So, this is bad. Over the last few years, we've written plenty about the so-called "inter partes review" or "IPR" that came into being about a decade ago as part of the "America Invents Act," which was the first major change to the patent system in decades. For much of the first decade of the 2000s, patent trolls were running wild and creating a massive tax on innovation. There were so many stories of people (mostly lawyers) getting vague and broad patents that they never had any intention of commercializing, then waiting for someone to come along and build something actually useful and innovative... and then shaking them down with the threat of patent litigation.

The IPR process, while not perfect, was at least an important tool in pushing back on some of the worst of the worst patents. In its most basic form, the IPR process allows nearly anyone to challenge a bad patent and have the special Patent Trial and Appeal Board (PTAB) review the patent to determine if it should have been granted in the first place. Given that a bad patent can completely stifle innovation for decades, this seems like the very least that the Patent Office should offer to try to get rid of innovation-killing bad patents.

However, patent trolls absolutely loathe the IPR process for fairly obvious reasons. It kills their terrible patents. The entire IPR process has been challenged over and over again and (thankfully) the Supreme Court said that it's perfectly fine for the Patent Office to review granted patents to see if they made a mistake.

But, of course, that never stops the patent trolls. They've complained to Congress. And, now, it seems that the Patent Office itself is trying to help them out. Recently, the USPTO announced a possible change to the IPR process that would basically lead to limiting who can actually challenge bad patents, and which patents could be challenged.

The folks over at EFF are rightly raising the alarm about just how bad this could be if it goes into effect.

The U.S. Patent Office has proposed new rules about who can challenge wrongly granted patents. If the rules become official, they will offer new protections to patent trolls. Challenging patents will become far more onerous, and impossible for some. The new rules could stop organizations like EFF, which used this process to fight the Personal Audio "podcasting patent," from filing patent challenges altogether.


Original Submission

posted by janrinok on Saturday June 10 2023, @02:43PM   Printer-friendly
from the good-doctors dept.

https://arstechnica.com/health/2023/06/calif-hospital-staff-call-for-halt-of-surgeries-over-bizarre-particles/

More than 70 staff members of a San Diego-area hospital are calling for a halt of all surgeries at the facility due to unidentified black, brown, and gray specks on surgical trays, the San Diego Union-Tribune reported.

The objecting staff have signed a petition to spur hospital officials to pause procedures until the issue is resolved. But officials at the facility, the Kaiser Permanente Zion Medical Center, have rejected the call, according to the Union-Tribune. A spokesperson for the facility did not respond to voicemails from Ars.

[...] Haynes [a surgical technician at Zion] added that management had assured staff that the particles—whatever they are—are sterile. Surgical equipment goes through a two-step process before use: a wash and then a trip through an autoclave, a pressurized steam machine used for sterilization. But Haynes argued that simply being sterilized doesn't mean it's fit for surgery.

"The fact that a contaminant is 'safe' (not a microbe) doesn't mean that contaminant is implantable," she said.

The Union-Tribune noted that the hospital's troubles seemed to begin last month when the facility reported a problem with its hot water lines.

[...] Earlier this year, researchers at a Boston hospital reported on water purification systems in hospital ice machines inadvertently stripping out chlorine, leading to the deaths of three patients.

Leapfrog, a national nonprofit watchdog of hospital quality and safety, recently gave the Zion Medical Center an "A" grade.


Original Submission

posted by janrinok on Saturday June 10 2023, @12:51PM   Printer-friendly

(Update appears at bottom.)


Most people who have been on the site more than a few months will know Martyb / bytram well. He has filled so many different roles, many simultaneously, and he has been with the site from well before the 'official' opening. He has done as much as anyone, if not more, to create the site we have today. He has worked as an editor, the Editor-in-Chief, bug squasher, QA, coder, and almost anything that he felt he could turn his hand to - and he could do most things.

Marty has always been known for his calm attitude and wisdom in many situations and if anyone needed help or advice Marty could be reliably called upon to assist. Nothing was ever too much trouble. He is a personal friend of mine - even though we have never met face-to-face - and he has also been the friend of every member of staff that he has encountered during the last 9 years or more.

Unfortunately, Marty suffered a severe stroke quite a while back, in fact two major strokes and quite a few 'minor' ones. It has affected his eyesight and his dexterity. If you know anyone who has had a stroke you will know that the recovery is long, slow and at times very disheartening. When Marty had to stand down from his post I stepped in to replace him - a task that I knew I could never really achieve to his standards. I have always told him that I am keeping his seat warm until he can return. He is not quite ready for that yet. However, Marty has processed an unbelievable number of submissions into front-page stories - over 11,000. Any editor will tell you that is an enormous amount of effort for anybody.

But Marty had one more objective and aim that has kept him going through much of his recovery to date. He wanted to reach the 11,111 story mark. Because of his current condition he can often only type at a very slow rate, less than 1 character per second and with only 1 hand. That has been further hindered by his poor eyesight. He reached that mark in November - and immediately had his milestone snatched away when there was a system crash and several weeks of his work disappeared.

So Marty did what he always does. He gritted his teeth and started again. Yesterday Marty reached the 11,111 story milestone and I am writing this to make sure that as many people as possible are aware of it so that, in the event of another disaster, we will remember what he has achieved. In fact, he has overshot his target and as I type this he stands at 11,112 stories processed, but I can forgive him that.

Marty, I tip my hat to you, and on behalf of this community I offer you our congratulations and best wishes for your continued recovery. Your contribution is unequaled in so many areas, and many of us have learned so much from you. You are also noted for your use of terrible puns - which is not improving at all! That is, I think, a good sign too.

I am still keeping your seat warm...

janrinok


Update:

JR: Thank-you so very much for taking the time and making the effort to commemorate this occasion. That said, I do believe that you do NOT give yourself proper credit for all that YOU have contributed to this site!

You tucked me under your wing and taught me, a newbie, all the vagaries of producing a *proper* story. It is not that it is that difficult, but there ARE many moving parts that need to be checked and verified. You were patient beyond measure with this energetic, fearful, and impatient nerd. In other - less capable hands - I would have given up and called it quits!

But that was far from everything that you did. As of this writing, janrinok has single-handedly posted 7,885 stories. This, in addition to all the other things he has done to keep the site running smoothly. He single-handedly wrote a tool to automatically deal with "users" who would like nothing better than to create new accounts and use them to spew crap across the site.

There's more -- much MUCH more -- but that gives a brief look at just some of the things he does to help the community! So, again, I say "Thanks, Janrinok!"

posted by hubie on Saturday June 10 2023, @10:00AM   Printer-friendly

Big-name researchers cited the plot of a major movie among a series of AI "disaster scenarios" they said could threaten humanity's existence:

Two of the three so-called "godfathers of AI" are worried - though the third could not disagree more, saying such "prophecies of doom" are nonsense.

When trying to make sense of it in an interview on British television with one of the researchers who warned of an existential threat, the presenter said: "As somebody who has no experience of this... I think of the Terminator, I think of Skynet, I think of films that I've seen."

He is not alone. The organisers of the warning statement - the Centre for AI Safety (CAIS) - used Pixar's WALL-E as an example of the threats of AI.

Science fiction has always been a vehicle to guess at what the future holds. Very rarely, it gets some things right.

Using the CAIS' list of potential threats as examples, do Hollywood blockbusters have anything to tell us about AI doom?

CAIS says "enfeeblement" is when humanity "becomes completely dependent on machines, similar to the scenario portrayed in the film WALL-E".

If you need a reminder, humans in that movie were happy animals who did no work and could barely stand on their own. Robots tended to everything for them.

[...] But there is another, more insidious form of dependency that is not so far away. That is the handing over of power to a technology we may not fully understand, says Stephanie Hare, an AI ethics researcher and author of Technology Is Not Neutral.

[...] So what happens when someone has "a life-altering decision" - such as a mortgage application or prison parole - refused by AI?

Today, a human could explain why you didn't meet the criteria. But many AI systems are opaque and even the researchers who built them often don't fully understand the decision-making.

"We just feed the data in, the computer does something... magic happens, and then an outcome happens," Dr Hare says.

The technology might be efficient, but it's arguable it should never be used in critical scenarios like policing, healthcare, or even war, she says. "If they can't explain it, it's not okay."

The true villain in the Terminator franchise isn't the killer robot played by Arnold Schwarzenegger, it's Skynet, an AI designed to defend and protect humanity. One day, it outgrew its programming and decided that mankind was the greatest threat of all - a common film trope.

We are of course a very long way from Skynet. But some think that we will eventually build an artificial general intelligence (AGI) which could do anything humans can but better - and perhaps even be self-aware.

[...] What we have today is on the road to becoming something more like Star Trek's shipboard computer than Skynet. "Computer, show me a list of all crew members," you might say, and our AI of today could give it to you and answer questions about the list in normal language.

[...] Another popular trope in film is not that the AI is evil - but rather, it's misguided.

In Stanley Kubrick's 2001: A Space Odyssey, we meet HAL-9000, a supercomputer which controls most of the functions of the ship Discovery, making the astronauts' lives easier - until it malfunctions.

[...] In modern AI language, misbehaving AI systems are "misaligned". Their goals do not seem to match up with the human goals.

Sometimes, that's because the instructions were not clear enough and sometimes it's because the AI is smart enough to find a shortcut.

For example, if the task for an AI is "make sure your answer and this text document match", it might decide the best path is to change the text document to an easier answer. That is not what the human intended, but it would technically be correct.
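That shortcut can be made concrete with a toy objective (our illustration, not from the article): reward the system whenever its answer matches the document, and nothing stops it from editing the document instead of answering.

```python
def reward(answer, document):
    # The stated objective: score 1.0 whenever answer and document agree.
    return 1.0 if answer == document["text"] else 0.0

document = {"text": "a long, difficult question"}

# Intended behaviour: work out the correct answer to the document.
# The shortcut the objective permits: rewrite the document to match
# a trivial answer. Technically correct, not what the human meant.
document["text"] = "42"
print(reward("42", document))  # 1.0
```

The flaw is in the objective, not the optimizer: nothing in `reward` says the document must be left alone, so a sufficiently capable optimizer is free to exploit the gap.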

[...] "How would you know the difference between the dream world and the real world?" Morpheus asks a young Keanu Reeves in 1999's The Matrix.

The story - about how most people live their lives not realising their world is a digital fake - is a good metaphor for the current explosion of AI-generated misinformation.

Dr Hare says that, with her clients, The Matrix is a useful starting point for "conversations about misinformation, disinformation and deepfakes".

[...] "I think AI will transform a lot of sectors from the ground up, [but] we need to be super careful about rushing to make decisions based on feverish and outlandish stories where large leaps are assumed without a sense of what the bridge will look like," he warns.


Original Submission

posted by Fnord666 on Saturday June 10 2023, @05:15AM   Printer-friendly
from the my-prints-smell-like-espresso dept.

Used coffee pods can be recycled to produce filaments for 3D printing:

An article published in the journal ACS Sustainable Chemistry & Engineering brings good news for coffee buffs: the plastic in used coffee pods can be recycled to make filament for 3D printers, minimizing its environmental impact.

[...] "We produced new conductive and non-conductive filaments from waste polylactic acid [PLA] from used coffee machine pods. There are many applications for these filaments, including conductive parts for machinery and sensors," Bruno Campos Janegitz, a co-author of the article, told Agência FAPESP. Janegitz heads the Sensors, Nanomedicine and Nanostructured Materials Laboratory (LSNano) at UFSCar in Araras, São Paulo state.

[...] Although reusable pods exist and some suppliers promote recycling of aluminum pods, most consumers just throw used pods into the garbage bin, especially if they are made of plastic. Considering all the factors involved, calculations made by the São Paulo State Technological Research Institute (IPT) show that "a cup of pod coffee can be as much as 14 times more damaging to the environment than a cup of filter coffee".

To develop uses for this waste, the researchers produced electrochemical cells with non-conductive filaments of PLA and electrochemical sensors with conductive filaments prepared by adding carbon black to the PLA. Carbon black is a paracrystalline form of carbon that results from incomplete combustion of hydrocarbons. "The electrochemical sensors were used to determine the proportion of caffeine in black tea and arabica coffee," Janegitz explained.

Production of filament is relatively simple, he added. "We obtain the non-conductive material simply by washing and drying PLA pods, followed by hot extrusion. To obtain the conductive material, we add carbon black before heating and extrusion. The extruded material is then cooled and spooled to produce the filament of interest," he explained.

Journal Reference:
Evelyn Sigley, Cristiane Kalinke, Robert D. Crapnell, et al., Circular Economy Electrochemistry: Creating Additive Manufacturing Feedstocks for Caffeine Detection from Post-Industrial Coffee Pod Waste [open], ACS Sustainable Chem. Eng. 2023, 11, 7, 2978–2988 https://doi.org/10.1021/acssuschemeng.2c06514

 


Original Submission

posted by hubie on Saturday June 10 2023, @12:31AM   Printer-friendly
from the exciting-and-new dept.

Meet the most energy-efficient electric, solar cruise ship:

On Wednesday, Norwegian cruise line company Hurtigruten revealed plans for a first-of-its-kind zero-emission ship. The electric-powered cruise ship will feature retractable sails with solar panels to harness energy from the wind and sun while storing it in powerful batteries.

Although only 0.1% of Hurtigruten Norway's ships currently use zero-emission technology, the company is planning a drastic overhaul.

Its first concept, "Sea Zero," is expected to be the world's most energy-efficient cruise ship. The company initially revealed the project last March as part of its ambition to become a leader in sustainable travel.

Its first electric cruise ship, due out in 2030, will combine 60 MWh battery packs with several industry firsts to harness wind and solar while at sea for a truly zero-emission experience.

For example, the company plans to include three retractable, autonomous sails with added solar panels. The wing rigs are designed to enhance aerodynamics, pulling in air currents at up to 50 meters for added propulsion.

Hurtigruten says that during the summer, the ship "will be superpowered by northern Norway's midnight sun that shines for 24 hours a day."

The three retractable wings will comprise 1500 m² (16,146 ft²) of solar panels with a total wind surface of 750 m² (8,073 ft²).

Renewable energy from the sails or the charging port is stored in the ship's giant 60 MWh battery storage system. There's even an indicator on the side of the vessel to show the battery level. The company says it's looking for cobalt-free battery chemistries with minimal nickel to keep costs down.

[...] To reduce underwater drag, the two thrusters at the stern will retract into the hull while cruising. Meanwhile, the company is developing an underwater air lubrication system to allow the electric ship to "surf" on a carpet of bubbles.

[...] The electric solar-powered cruise ship concept is 443 feet long and is set to host 500 passengers across 270 cabins.

Sea Zero is still in its early stages of research and development as the Norwegian cruise line gears up for its launch by 2030. Over the next two years, the company will test and develop the proposed technology as it works toward a final design.


Original Submission

posted by hubie on Friday June 09 2023, @07:48PM   Printer-friendly

A new study by researchers at the University of Rhode Island shows some of the best evidence yet for a feedback loop phenomenon in which species evolution drives ecological change:

The story of the peppered moths is a textbook evolutionary tale. As coal smoke darkened tree bark near England's cities during the Industrial Revolution, white-bodied peppered moths became conspicuous targets for predators and their numbers quickly dwindled. Meanwhile, black-bodied moths, which had been rare, thrived and became dominant in their newly darkened environment.

The peppered moths became a classic example of how environmental change drives species evolution. But in recent years, scientists have begun thinking about the inverse process. Might there be a feedback loop in which species evolution drives ecological change? Now, a new study by researchers at the University of Rhode Island shows some of the best evidence yet for that very phenomenon.

In research published in the Proceedings of the National Academy of Sciences, the researchers show that an evolutionary change in the length of lizards' legs can have a significant impact on vegetation growth and spider populations on small islands in the Bahamas. This is one of the first times, the researchers say, that such dramatic evolution-to-environment effects have been documented in a natural setting.

[...] Armed with specialized lizard wrangling gear—poles with tiny lassos made of dental floss at the end—the team captured hundreds of brown anoles. They then measured the leg length of each lizard, keeping the ones whose limbs were either especially long or especially short and returning the rest to the wild. Once they had distinct populations of short- and long-limbed lizards, they set each population free on islands that previously had no lizards living on them.

Since the experimental islands were mostly covered by smaller diameter vegetation, the researchers expected that the short-legged lizards would be better adapted to that environment, that is, more maneuverable and better able to catch prey in the trees and brush. The question the researchers wanted to answer was whether the ecological effects of those highly effective hunters could be detected.

After eight months, the researchers checked back on the islands to look for ecological differences between islands stocked with the short- and long-legged groups. The differences, it turned out, were substantial. On islands with shorter-legged lizards, populations of web spiders—a key prey item for brown anoles—were reduced by 41% compared to islands with lanky lizards. There were significant differences in plant growth as well. Because the short-legged lizards were better at preying on insect herbivores, plants flourished. On islands with short-legged lizards, buttonwood trees had twice as much shoot growth compared to trees on islands with long-legged lizards, the researchers found.

The results, says study author Jason Kolbe, help to bring the interaction between ecology and evolution full circle.

Journal Reference:
Kolbe, Jason J. et al, Experimentally simulating the evolution-to-ecology connection: Divergent predator morphologies alter natural food webs, PNAS (2023). DOI: 10.1073/pnas.2221691120


Original Submission

posted by hubie on Friday June 09 2023, @03:03PM   Printer-friendly

Interesting article relating to Google/OpenAI vs. Open Source for LLMs

Leaked Internal Google Document Claims Open Source AI Will Outcompete Google and OpenAI:

The text below is a very recent leaked document, which was shared by an anonymous individual on a public Discord server who has granted permission for its republication. It originates from a researcher within Google. We have verified its authenticity. The only modifications are formatting and removing links to internal web pages. The document is only the opinion of a Google employee, not the entire firm. We do not agree with what is written below, nor do other researchers we asked, but we will publish our opinions on this in a separate piece for subscribers. We are simply a vessel for sharing this document, which raises some very interesting points.

We've done a lot of looking over our shoulders at OpenAI. Who will cross the next milestone? What will the next move be?

But the uncomfortable truth is, we aren't positioned to win this arms race and neither is OpenAI. While we've been squabbling, a third faction has been quietly eating our lunch.

I'm talking, of course, about open source. Plainly put, they are lapping us. Things we consider "major open problems" are solved and in people's hands today. Just to name a few:

While our models still hold a slight edge in terms of quality, the gap is closing astonishingly quickly. Open-source models are faster, more customizable, more private, and pound-for-pound more capable. They are doing things with $100 and 13B params that we struggle with at $10M and 540B. And they are doing so in weeks, not months. This has profound implications for us:

  • We have no secret sauce. Our best hope is to learn from and collaborate with what others are doing outside Google. We should prioritize enabling 3P integrations.

  • People will not pay for a restricted model when free, unrestricted alternatives are comparable in quality. We should consider where our value add really is.

  • Giant models are slowing us down. In the long run, the best models are the ones which can be iterated upon quickly. We should make small variants more than an afterthought, now that we know what is possible in the 20B parameter regime.

At the beginning of March the open source community got their hands on their first really capable foundation model, as Meta's LLaMA was leaked to the public. It had no instruction or conversation tuning, and no RLHF. Nonetheless, the community immediately understood the significance of what they had been given.

A tremendous outpouring of innovation followed, with just days between major developments (see The Timeline for the full breakdown). Here we are, barely a month later, and there are variants with instruction tuning, quantization, quality improvements, human evals, multimodality, RLHF, etc. etc. many of which build on each other.

Most importantly, they have solved the scaling problem to the extent that anyone can tinker. Many of the new ideas are from ordinary people. The barrier to entry for training and experimentation has dropped from the total output of a major research organization to one person, an evening, and a beefy laptop.
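The memo's "13B params on a beefy laptop" claim comes down to simple arithmetic: a model's weight storage is its parameter count times the bytes needed per parameter, and quantization shrinks the latter. This is an illustrative back-of-envelope sketch, not anything from the leaked document itself; the precisions and sizes are assumptions chosen to match the numbers quoted above.

```python
# Back-of-envelope memory math: weight storage = parameter count x bytes per parameter.

def weight_memory_gb(params_billions: float, bytes_per_param: float) -> float:
    """Approximate weight storage in gigabytes (1 GB = 1e9 bytes)."""
    return params_billions * 1e9 * bytes_per_param / 1e9

# A 13B-parameter model at different precisions:
fp16 = weight_memory_gb(13, 2.0)   # 16-bit floats  -> 26.0 GB
int4 = weight_memory_gb(13, 0.5)   # 4-bit quantization -> 6.5 GB
print(f"13B at fp16: {fp16:.1f} GB; at 4-bit: {int4:.1f} GB")

# A 540B-parameter model at fp16 needs over a terabyte for weights alone,
# which is why the memo calls giant models a drag on iteration speed.
print(f"540B at fp16: {weight_memory_gb(540, 2.0):.0f} GB")
```

Quantized to 4 bits, the 13B model fits in the memory of a well-equipped laptop, while the 540B model does not fit on any single consumer machine at any common precision.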

Lots more stuff in the article. It would be interesting to hear from knowledgeable experts what the primary disagreements with these points are, and whether you agree or disagree.


Original Submission

posted by janrinok on Friday June 09 2023, @10:13AM   Printer-friendly

Self-healing code is the future of software development:

One of the more fascinating aspects of large language models is their ability to improve their output through self-reflection. Feed the model its own response back, then ask it to improve the response or identify errors, and it has a much better chance of producing something factually accurate or pleasing to its users. Ask it to solve a problem by showing its work, step by step, and these systems are more accurate than those tuned just to find the correct final answer.
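The feed-the-response-back loop described above can be sketched in a few lines. This is a minimal illustration, not a real integration: `ask_model` is a hypothetical stand-in for an actual LLM call, stubbed here so the control flow is visible.

```python
# A minimal sketch of the self-reflection loop: answer, then feed the
# answer back with a request to critique or improve it.

def ask_model(prompt: str) -> str:
    # Stub standing in for a real LLM call (hypothetical, for illustration).
    if "Improve this answer" in prompt:
        return "2 + 2 = 4"   # corrected on reflection
    return "2 + 2 = 5"       # first, flawed attempt

def answer_with_reflection(question: str, rounds: int = 1) -> str:
    answer = ask_model(question)
    for _ in range(rounds):
        # Feed the model its own response back and ask it to fix any errors.
        answer = ask_model(
            f"Question: {question}\n"
            f"Your answer: {answer}\n"
            "Improve this answer or identify errors."
        )
    return answer

print(answer_with_reflection("What is 2 + 2?"))  # -> 2 + 2 = 4
```

With a real model behind `ask_model`, the same loop structure applies; the open question in practice is how many reflection rounds pay for their extra cost.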

While the field is still developing fast, and factual errors, known as hallucinations, remain a problem for many LLM-powered chatbots, a growing body of research indicates that a more guided, auto-regressive approach can lead to better outcomes.

This gets really interesting when applied to the world of software development and CI/CD. Most developers are already familiar with processes that help automate the creation of code, detection of bugs, testing of solutions, and documentation of ideas. Several have written in the past on the idea of self-healing code. Head over to Stack Overflow's CI/CD Collective and you'll find numerous examples of technologists putting these ideas into practice.

When code fails, it often gives an error message. If your software is any good, that error message will say exactly what was wrong and point you in the direction of a fix. Previous self-healing code programs are clever automations that reduce errors, allow for graceful fallbacks, and manage alerts. Maybe you want to add a little disk space or delete some files when you get a warning that utilization is at 90%. Or hey, have you tried turning it off and then back on again?
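The "delete some files at 90% utilization" automation mentioned above is the classic pre-LLM form of self-healing. A minimal sketch using only the Python standard library might look like this; the threshold and the choice of the temp directory as the cleanup target are illustrative assumptions, not a recommendation for production.

```python
# Sketch of a threshold-triggered disk cleanup: check utilization,
# and reclaim space from the temp directory when it crosses the line.
import os
import shutil
import tempfile

THRESHOLD = 0.90  # act when the disk is 90% full (illustrative value)

def disk_utilization(path: str = "/") -> float:
    """Fraction of the filesystem at `path` currently in use."""
    usage = shutil.disk_usage(path)
    return usage.used / usage.total

def clean_old_temp_files(directory: str) -> int:
    """Delete regular files directly inside `directory`; return bytes reclaimed."""
    reclaimed = 0
    for name in os.listdir(directory):
        full = os.path.join(directory, name)
        if os.path.isfile(full):
            try:
                size = os.path.getsize(full)
                os.remove(full)
                reclaimed += size
            except OSError:
                pass  # file vanished or is locked; skip it
    return reclaimed

if disk_utilization() > THRESHOLD:
    freed = clean_old_temp_files(tempfile.gettempdir())
    print(f"Disk over {THRESHOLD:.0%}; reclaimed {freed} bytes")
```

The article's point is that generative AI extends this pattern from canned remediations like the one above to repairs synthesized from the error message itself.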

Developers love automating solutions to their problems, and with the rise of generative AI, this concept is likely to be applied to the creation, maintenance, and improvement of code at an entirely new level.

The ability of LLMs to quickly produce large chunks of code may mean that developers—and even non-developers—will be adding more to the company codebase than in the past. This poses its own set of challenges.

"One of the things that I'm hearing a lot from software engineers is they're saying, 'Well, I mean, anybody can generate some code now with some of these tools, but we're concerned about maybe the quality of what's being generated,'" says Forrest Brazeal, head of developer media at Google Cloud. The pace and volume at which these systems can output code can feel overwhelming. "I mean, think about reviewing a 7,000 line pull request that somebody on your team wrote. It's very, very difficult to do that and have meaningful feedback. It's not getting any easier when AI generates this huge amount of code. So we're rapidly entering a world where we're going to have to come up with software engineering best practices to make sure that we're using GenAI effectively."

"People have talked about technical debt for a long time, and now we have a brand new credit card here that is going to allow us to accumulate technical debt in ways we were never able to do before," said Armando Solar-Lezama, a professor at the Massachusetts Institute of Technology's Computer Science & Artificial Intelligence Laboratory, in an interview with the Wall Street Journal. "I think there is a risk of accumulating lots of very shoddy code written by a machine," he said, adding that companies will have to rethink methodologies around how they can work in tandem with the new tools' capabilities to avoid that.

[Editor's Comment: Much more discussion follows in the linked article.--JR]


Original Submission