The GeForce RTX 2070 Super & RTX 2060 Super Review: Smaller Numbers, Bigger Performance
NVIDIA is launching a mid-generation kicker for their mid-to-high-end video card lineup in the form of their GeForce RTX 20 series Super cards. Based on the same family of Turing GPUs as the original GeForce RTX 20 series cards, these new Super cards – all suffixed Super, appropriately enough – come with new configurations and new clockspeeds. They are, essentially, NVIDIA's 2019 card family for the $399+ video card market.
When they are released on July 9th, the GeForce RTX 20 series Super cards are going to be sharing store shelves with the rest of the GeForce RTX 20 series cards. Some cards like the RTX 2080 and RTX 2070 are set to go away, while other cards like the RTX 2080 Ti and RTX 2060 will remain on the market as-is. In practice, it's probably best to think of the new cards as NVIDIA executing either a price cut or a spec bump – depending on whether you see the glass as half-empty or half-full – all without meaningfully changing their price tiers.
In terms of performance, the RTX 2060 and RTX 2070 Super cards aren't going to bring anything new to the table. In fact if we're being blunt, the RTX 2070 Super is basically a slightly slower RTX 2080, and the RTX 2060 Super may as well be the RTX 2070. So instead, what has changed is the price that these performance levels are available at, and ultimately the performance-per-dollar ratios in parts of NVIDIA's lineup. The performance of NVIDIA's former $699 and $499 cards will now be available for $499 and $399, respectively. This leaves the vanilla RTX 2060 to hold the line at $349, and the upcoming RTX 2080 Super to fill the $699 spot. Which means if you're in the $400-$700 market for video cards, your options are about to get noticeably faster.
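Taking the article's rough equivalences at face value (RTX 2070 Super ≈ RTX 2080, RTX 2060 Super ≈ RTX 2070), the performance-per-dollar improvements work out to roughly:

$$\frac{\$699}{\$499} \approx 1.40\times \qquad \frac{\$499}{\$399} \approx 1.25\times$$

for the RTX 2070 Super and RTX 2060 Super tiers, respectively.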
Also at Tom's Hardware, The Verge, and Ars Technica.
Previously: Nvidia Announces RTX 2080 Ti, 2080, and 2070 GPUs, Claims 25x Increase in Ray-Tracing Performance
Nvidia Announces RTX 2060 GPU
AMD and Nvidia's Latest GPUs Are Expensive and Unappealing
Related: AMD and Intel at Computex 2019: First Ryzen 3000-Series CPUs and Navi GPU Announced
AMD Details Three Navi GPUs and First Mainstream 16-Core CPU
Canon is crowdfunding a tiny clippable camera that connects to your phone
Canon is turning to Indiegogo to crowdfund the Ivy Rec, a tiny outdoor camera built into a keychain carabiner. It's about the size of a USB flash drive, and it wirelessly connects via Wi-Fi or Bluetooth to the companion Canon Mini Cam app to show a live preview on your phone. The empty square space of the clip doubles as a viewfinder, and there's a single dial on the back that lets you switch between modes.
The Ivy Rec has a 13-megapixel 1/3-inch CMOS sensor that can record 1080p / 60 fps video, and it's waterproof for up to 30 minutes at depths of up to three feet. With no pricing information yet, it's hard to say if it'll be worth the buy or who it's really for. Canon says the camera is shockproof and great for the outdoors, so it could be useful if you clip it onto your backpack while you ride a bike. Or maybe clip it onto your dog or cat's collar so you can see the world from your pet's POV? (I mean, GoPros are already a thing.)
Too vibrant, not small enough to work as a spy camera.
Also at Engadget.
Intel's Senior Vice President Jim Keller (who previously helped to design AMD's K8 and Zen microarchitectures) gave a talk at the Silicon 100 Summit that promised continued pursuit of transistor scaling gains, including a roughly 50x increase in gate density:
Intel's New Chip Wizard Has a Plan to Bring Back the Magic (archive)
In 2016, a biennial report that had long served as an industry-wide pledge to sustain Moore's law gave up and switched to other ways of defining progress. Analysts and media—even some semiconductor CEOs—have written Moore's law's obituary in countless ways. Keller doesn't agree. "The working title for this talk was 'Moore's law is not dead but if you think so you're stupid,'" he said Sunday. He asserted that Intel can keep it going and supply tech companies ever more computing power. His argument rests in part on redefining Moore's law.
[...] Keller also said that Intel would need to try other tactics, such as building vertically, layering transistors or chips on top of each other. He claimed this approach will keep power consumption down by shortening the distance between different parts of a chip. Keller said that, using nanowires and stacking, his team had mapped a path to packing transistors 50 times more densely than possible with Intel's 10 nanometer generation of technology. "That's basically already working," he said.
The ~50x gate density claim combines ~3x density from additional pitch scaling (from "10nm"), ~2x from nanowires, another ~2x from stacked nanowires, ~2x from wafer-to-wafer stacking, and ~2x from die-to-wafer stacking.
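Multiplying those claimed factors out lands right around the headline figure:

$$3 \times 2 \times 2 \times 2 \times 2 = 48 \approx 50\times$$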
Related: Intel's "Tick-Tock" Strategy Stalls, 10nm Chips Delayed
Intel's "Tick-Tock" is Now More Like "Process-Architecture-Optimization"
Moore's Law: Not Dead? Intel Says its 10nm Chips Will Beat Samsung's
Another Step Toward the End of Moore's Law
Fire destroys Jim Beam warehouse filled with 45,000 bourbon barrels
A fire destroyed a massive Jim Beam warehouse filled with 45,000 barrels of bourbon, sending flames shooting into the night sky and generating so much heat that firetruck lights melted, authorities said Wednesday.
Firefighters from four counties responded to the blaze that erupted late Tuesday. Lightning might have been a factor, but fire investigators haven't been able to start looking for the cause, Woodford County Emergency Management Director Drew Chandler said.
No injuries were reported, Chandler said. The fire was contained but was being allowed to burn for several more hours Wednesday, he said.
[...] Officials from Jim Beam's parent company, Beam Suntory, said the multi-story warehouse that burned contained "relatively young whiskey," meaning it had not reached maturity for bottling for consumers. "Given the age of the lost whiskey, this fire will not impact the availability of Jim Beam for consumers," the spirits company said in a statement. The whiskey maker suffered a total loss in the warehouse. The destroyed whiskey amounted to about 1% of Beam's bourbon inventory, it said.
Also at CNN.
CRISPR and LASER ART Eliminate HIV from Mice
Today, a paper in Nature Communications, titled "Sequential LASER ART and CRISPR Treatments Eliminate HIV-1 in a Subset of Infected Humanized Mice" [open, DOI: 10.1038/s41467-019-10366-y] [DX] reports new work from a collaborative effort showing that a combination of long-acting slow-effective release antiviral therapy (LASER) and CRISPR-Cas9 successfully cleared HIV from infected humanized mice.
Howard Gendelman, MD, professor of internal medicine and infectious diseases at the University of Nebraska Medical Center and senior author on the paper, does not withhold his excitement over the result, which may be the reason for his hyperbole. He tells GEN that the conclusion is "almost unbelievable, but, it's true", an idea, he adds, that "has been science fiction up until now." He notes that "for the first time in the world" they have shown total elimination of HIV infection from a model with an established infection and, even though there are caveats, "there is a real possibility that an HIV cure can be realized."
The team used a technique developed by co-author Kamel Khalili, PhD, professor in the department of neuroscience at the School of Medicine at Temple University, that uses the CRISPR-Cas9 system to remove the integrated HIV DNA from genomes. They combined the genome editing technique with the LASER ART, a technique developed by Gendelman's lab that targets viral sanctuaries by packaging the ART drugs into nanocrystals. LASER ART distributes the drugs to areas of the body where HIV harbors and releases them slowly over time. Testing the combination of these two methods together in mice led the authors to conclude that permanent elimination of HIV is possible.
[...] Gendelman explains that "It's kind of like being on a beach and trying to find the right shell—you might want a certain color or shape." When HIV replicates, he says, there are "billions and trillions" of particles, so you're asking CRISPR to excise every single DNA provirus in this morass. He adds, "it would be inconceivable that it would be efficient enough to destroy every DNA molecule.... If one infectious particle remains, it will grow and replicate. You have to destroy every single one in the body." So, the ART reduced the viable targets. If you are inhibiting viral replication, he explains, you reduce the amount of HIV DNA in the host—in the cells and in the body—and that allows the CRISPR to be more effective. "It's a numbers game," Gendelman notes.
But, efficiency is not the only problem in the relationship between CRISPR and HIV. The sequence specificity of the approach is a double-edged sword, notes Coffin. On the one hand, it minimizes off-target effects. But, as Coffin explains, it also sets the stage for rapid selection of resistance. In a virus that mutates as rapidly as HIV, the changes could quickly render CRISPR useless. Lastly, the mice were infected with a clonal virus, with CRISPR delivered shortly after the infection, leaving little opportunity to generate a diverse population. This is not analogous to human patients, as most patients do not report for treatment so soon after an infection. An effective treatment for humans would have to be designed to treat diverse viral populations with lots of mutations.
Also at ScienceAlert, CBS, and CNBC.
Lee Iacocca, Visionary Automaker Who Led Both Ford and Chrysler, Is Dead at 94
Lee A. Iacocca, the visionary automaker who ran the Ford Motor Company and then the Chrysler Corporation and came to personify Detroit as the dream factory of America's postwar love affair with the automobile, died on Tuesday at his home in Bel Air, Calif. He was 94. He had complications from Parkinson's disease, a family spokeswoman said.
In an industry that had produced legends, from giants like Henry Ford and Walter Chrysler to the birth of the assembly line and freedoms of the road that led to suburbia and the middle class, Mr. Iacocca, the son of an immigrant hot-dog vendor, made history as the only executive in modern times to preside over the operations of two of the Big Three automakers.
In the 1970s and '80s, with Detroit still dominating the nation's automobile market, his name evoked images of executive suites, infighting, power plays and the grit and savvy to sell American cars. He was so widely admired that there was serious talk of his running for president of the United States in 1988.
Detractors branded him a Machiavellian huckster who clawed his way to pinnacles of power in 32 years at Ford, building flashy cars like the Mustang, making the covers of Time and Newsweek and becoming the company president at 46, only to be spectacularly fired in 1978 by the founder's grandson, Henry Ford II.
Also at CNN, Reuters, CNBC, and Detroit Free Press.
We've Already Built too Many Power Plants and Cars to Prevent 1.5 °C of Warming:
In a [...] paper published in Nature today[*], researchers found we're now likely to sail well past 1.5 ˚C of warming, the aspirational limit set by the Paris climate accords, even if we don't build a single additional power plant, factory, vehicle, or home appliance. Moreover, if these components of the existing energy system operate for as long as they have historically, and we build all the new power facilities already planned, they'll emit about two thirds of the carbon dioxide necessary to crank up global temperatures by 2 ˚C.
If fractions of a degree don't sound that dramatic, consider that 1.5 ˚C of warming could already be enough to expose 14% of the global population to bouts of severe heat, melt nearly 2 million square miles (5 million square kilometers) of Arctic permafrost, and destroy more than 70% of the world's coral reefs. The hop from there to 2 ˚C may subject nearly three times as many people to heat waves, thaw nearly 40% more permafrost, and all but wipe out coral reefs, among other devastating effects, research finds.
The basic conclusion here is, in some ways, striking. We've already built a system that will propel the planet into the dangerous terrain that scientists have warned for decades we must avoid. This means that building lots of renewables and adding lots of green jobs, the focus of much of the policy debate over climate, isn't going to get the job done.
We now have to ask a much harder societal question: How do we begin forcing major and expensive portions of existing energy infrastructure to shut down years, if not decades, before the end of its useful economic life?
Power plants can cost billions of dollars and operate for half a century. Yet the study notes that the average age of coal plants in China and India—two of the major drivers of the increase in "committed emissions" since the earlier paper—is about 11 and 12 years, respectively.
[*] Monday.
Hard-to-kill poop parasites that lurk in swimming pools on the rise, CDC warns
Outbreaks of the gastrointestinal parasite cryptosporidium have been spurting upward since 2009, with the number of outbreaks gushing up an average of 13% each year, according to researchers at the Centers for Disease Control and Prevention. The germ spreads via the fecal-oral route and causes explosive, watery diarrhea that can last for up to three weeks. Most victims pick up the infection from recreational waters, such as swimming pools and water parks.
The main trouble is that crypto is extremely tolerant of chlorine and can happily stay afloat in well-treated pools for more than seven days. Thus, sick swimmers are the main source of infection—often young children who have yet to master toilet skills and also have more of a tendency to gulp pool water. An infected person can shed 100 million parasite eggs in one bout of diarrhea. Knocking back just 10 or fewer eggs in contaminated pool water can lead to an infection.
A 2013 study released by the CDC found that 58% of tested pools were positive for bacteria typically present in fecal matter.
[...] In all, the CDC recorded 444 outbreaks, involving 7,465 cases, 287 hospitalizations, and one death from the parasite. The number of cases per outbreak ranged from two to 638. However, the CDC notes that the figures likely underestimate the number of outbreaks and cases given that not every state reliably reports outbreaks and many people don't report their illnesses.
So here's a question for the SoylentNews community: do you scrap a working prototype that served as your proof of concept in favor of recoding it in a more mainline programming language? Let me provide some context:
A few months ago, I began work on writing a comprehensive system for proofing and validating the behavior of DNS recursive resolvers, known as DNSCatcher. To briefly summarize, DNSCatcher acts as a cross-check system for DNS to validate whether the records returned by your DNS server actually match reality. Due to the design and implementation of the DNS protocol, a recursive resolver such as your router's or your ISP's can essentially lie about the status and contents of any DNS record it chooses, in part due to limitations of the current implementation of DNSSEC. I presented some of my initial work at the Internet Freedom Festival 2019, and more in passing at the ICANN 65 Meeting, and received unexpectedly positive responses. As such, I'm currently in the process of trying to secure funding to bring this work forward in a sustainable manner, with the hopeful intent of eventually standardizing the protocol and mechanisms as an RFC. My current proof of concept is on GitHub here.
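To make the cross-check idea concrete, here is a minimal sketch in Rust (one of the rewrite candidates discussed below), assuming the trust-dns-resolver crate; the hostname is only an example, and the real DNSCatcher does far more than this (DNSSEC validation, coverage of other record types, reporting):

```rust
// Minimal sketch of the DNSCatcher cross-check concept: resolve the same
// name through the locally configured resolver and an independent
// reference resolver, then compare the answers. Assumes the
// `trust-dns-resolver` crate; names here are illustrative only.
use std::collections::BTreeSet;
use trust_dns_resolver::config::{ResolverConfig, ResolverOpts};
use trust_dns_resolver::Resolver;

fn main() -> Result<(), Box<dyn std::error::Error>> {
    let name = "www.example.com.";

    // The resolver the OS is configured to use (e.g. your router or ISP).
    let local = Resolver::from_system_conf()?;
    // A well-known public resolver to cross-check against.
    let reference = Resolver::new(ResolverConfig::cloudflare(), ResolverOpts::default())?;

    let local_ips: BTreeSet<_> = local.lookup_ip(name)?.iter().collect();
    let reference_ips: BTreeSet<_> = reference.lookup_ip(name)?.iter().collect();

    if local_ips == reference_ips {
        println!("{}: answers agree: {:?}", name, local_ips);
    } else {
        // A mismatch is not proof of tampering (CDNs and geo-balancing
        // legitimately return different answers); it is a signal that
        // deeper validation is warranted.
        println!("{}: answers differ: local={:?} reference={:?}",
                 name, local_ips, reference_ips);
    }
    Ok(())
}
```

Even at this toy scale, most of the heavy lifting sits in the resolver library, which is exactly why ecosystem support weighs so heavily in the language discussion below.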
Now, the part I need to tackle is: do I keep going with what I have, or restart from scratch? The proof of concept was written in the Ada programming language, which I am rather concerned will drastically limit community involvement and uptake. I chose Ada for very specific reasons when coding this, but I feel I need an outside opinion on the best course to take before pushing on; hence this post. Past the fold, I'll go into my reasons for coding it in Ada, the alternatives I'm considering, and my hopes for further feedback from the community.
So why did I choose Ada? Well, when I started the project, I had the following requirements I wanted to meet.
Ada is designed for high-reliability systems, and the most common implementation is GNAT, which is built on GCC, making Ada a highly portable language in its own right. Furthermore, Ada's core language has extremely powerful multitasking and object-oriented features out of the box. It also incorporates formal validation and programming-by-contract features to help ensure programs are bug-free. During the development of the prototype, I felt very justified in my choice, as Ada made implementing some of the more difficult aspects of a DNS server relatively trivial. It was also powerful enough to handle the mess that is the DNS wire protocol with a relative minimum of fuss and only a handful of forced casts in the entire codebase; a good mark for making sure your code is correct and for preventing stupid mistakes.
However, Ada is unfortunately obscure in the land of software development, which means it's an uphill battle to attract programmers to contribute. It also suffers a distinct lack of native libraries, and thus some functionality requires bridging out to C APIs (which Ada does make relatively straightforward). It also doesn't help that the Ada compilers in some distributions (such as Ubuntu) are buggy out of the box and can fail in unusual ways. Since I want this project to go mainstream, I now need to face the difficult question of whether I should push through or scrap my current code and rewrite it, and if so, what to write it in.
Taking into account the criteria I listed above, my chief considerations have me looking at the following options: C, C++, Go, or Rust. To a lesser extent, I'm also considering Java, despite the fact that the presence of the JVM would essentially preclude deployment on embedded devices and carry a higher memory cost. These all have a lot of upsides and downsides, and I'm open to other options the community may put forward, but ATM these are the languages I'm leaning strongest towards. Let me write my thoughts on each, and maybe the community can help me make a decision on what to do.
Starting from the top, let's look at C:
C has a well-earned reputation as "portable assembler". It is also the programming language I have the most experience with, it is extremely commonly used in FOSS software, and most major DNS software is written in C, such as BIND, Unbound, and dnsmasq. As such, the barrier to entry for community contributions and adoption is relatively low. Everything can run C code, but C itself isn't very portable without a lot of work, and the standard library is bare-bones to say the least. Even the simple act of opening a socket requires #ifdefs for Windows (to initialize Winsock) vs. Linux, and multithreading is both non-standardized and potentially complex. Furthermore, since C operates so close to bare metal, a single mistake can become a security vulnerability, which is painful. On top of that, C has very poor support for Unicode, making supporting internationalized domain names (IDNs) much more difficult than necessary. Working with SQL databases from C is also a frustrating experience due to how C handles strings, and any attempt to build a web interface on top of DNSCatcher would almost certainly be better done in any other programming language. My opinion is that C is a relatively poor fit for this project, but I can't rule it out entirely: it has the lowest barrier to entry for bringing others on board, and it is the easiest to deploy once ported to a given environment.
C++ is my least favorite option, but deserves consideration as well:
Almost everything I wrote about C is also true for C++. That said, C++'s standard library is much more powerful, with the STL providing complex data types out of the box (some directly inspired by and/or copied from Ada), which helps reduce the amount of code that needs to be written. Combined with Boost, it is a very powerful programming language. However, I find C++ difficult to work with and extremely hard to debug when something goes wrong, and the language itself has more than a few surprises. In the course of my career, I have worked on and debugged multiple C++ codebases, and have found myself agreeing with more than a few of the points in the C++ FQA. The largest advantages C++ gives me over C are native object-oriented interfaces and a better standard library.
Next up on the list is the Go programming language (Golang):
Looking at Golang as an option for this project is something of a mixed bag. I have used Go before, building part of the FTL video streaming service used by Mixer as its original ingest daemon. An additional plus is that Golang's toolchain generates static binaries, which are easy to deploy, and it's relatively straightforward to cross-compile Go to various architectures. Golang itself also has excellent support for multitasking via goroutines and channels. However, these capabilities come at a cost. Golang doesn't support asynchronous actions well: most API calls are blocking, and channels themselves are synchronous. This means that for high performance you either have to run hundreds to thousands of goroutines at once, or handle manual locking through a mutex. This further complicates the handling of operation ordering when working with an underlying database and DNS transaction tracking. While not insurmountable, Go is also painful when interacting with pre-existing C code.
While Go code can be compiled to a shared library, and C code can be integrated via cgo, the use of goroutines can cause complications when pre-existing C code spawns its own threads, or when Go code is called from threaded code. While Golang provides runtime.LockOSThread, which can help mitigate issues with code that needs to execute on the main thread (and thus maintain TLS logic), it's quite difficult to use in practice. Furthermore, Golang lacks the strong typing provided by Ada and Rust, and the types it does have are hampered by a lack of generics. On the upside, Go has excellent support for creating server applications and HTTP interfaces out of the box, which drastically reduces the amount of programming required, or the need to bring in a second programming language to handle front-end operations. As such, I have very mixed feelings about using Golang for this project, although I concede it is a viable alternative.
Last on our list of natively compiled languages is Rust, the option I'm leaning towards most if I go for a rewrite:
Rust itself shares many of the features I value in Ada, such as extremely strong typing, and through the borrow checker it inherently reduces the risk of unintentional behavior and security exploits. In addition, being built on LLVM, Rust's portability is almost as good as Ada/GCC's. Unlike Ada, Rust has an excellent collection of add-on packages and libraries through its crates system, and in some ways it surpasses Ada, handling memory deallocation more gracefully than Ada's Unchecked_Deallocation. Rust also provides good frameworks for handling data import and for database work via the 'diesel' ORM, and there are solid crates for providing web interfaces.
There is, however, a price to be paid for all this. Unlike Ada, which has special class types and constructs for interacting with tasks, Rust's threading model is quite different. Data can either be passed through Rust's channels (which are similar in concept to Go's), or shared through memory structures regulated by mutexes. Channels allow easy fan-in from multiple producers to one consumer, but life gets difficult if you want multiple consumers. Data integrity is handled by the borrow checker, but in certain cases it is impossible to determine at compile time whether a borrow is safe. For those cases, Rust provides reference counting via Rc and Arc, typically paired with RefCell or Mutex, which enforce the borrow rules at runtime and abort execution if the rules are violated.
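As a minimal sketch of those two sharing styles, using only the standard library (the messages are placeholders, not DNSCatcher code):

```rust
// Fan-in over an mpsc channel (many producers, one consumer), plus an
// Arc<Mutex<..>> guarding a shared counter across threads.
use std::sync::{mpsc, Arc, Mutex};
use std::thread;

fn main() {
    let (tx, rx) = mpsc::channel();
    let processed = Arc::new(Mutex::new(0u32));

    let workers: Vec<_> = (0..4)
        .map(|id| {
            let tx = tx.clone();
            thread::spawn(move || {
                // Each worker sends its result into the shared channel.
                tx.send(format!("query {} handled", id)).unwrap();
            })
        })
        .collect();
    drop(tx); // drop our copy so the receive loop can terminate

    // Single consumer: the loop ends once every sender has been dropped.
    for msg in rx {
        println!("{}", msg);
        // The Mutex enforces exclusive access at runtime rather than at
        // compile time; a poisoned lock surfaces through unwrap().
        *processed.lock().unwrap() += 1;
    }

    for w in workers {
        w.join().unwrap();
    }
    println!("processed {} messages", *processed.lock().unwrap());
}
```

Going beyond this (say, multiple consumers pulling from one queue) is where things get harder; in practice people often reach for the crossbeam crate's channels.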
In general, I have found working with Rust to be somewhat of a mixed bag. Like Ada, if my code compiles, I'm reasonably confident it's going to work. Ada has a well-earned reputation for an extremely pedantic compiler; Rust, due to the way mutability and data ownership work within the language, is stricter still. While the mantra "the borrow checker is always right" can help one endure, more than once I have wanted to eject my computer out the window while coding in Rust. I have also found that I occasionally have to refactor code significantly if I end up using data in a way I didn't initially expect. While these aren't show-stoppers, it does mean developing in Rust can be relatively challenging — even compared to Ada — even if the language is more mainstream.
Finally, the last language on the list is Java, which is unique here in that it doesn't compile to native code:
I hesitated to even include Java on this list, but of all the non-native languages, it's the closest to meeting my needs. While the JVM is unfortunately both heavy and memory-hungry, Java does offer a sandbox which eliminates entire classes of security bugs and exploits by the sheer fact that it isn't running native code directly. Java itself has excellent support for threading and multitasking, as well as a very rich core library. J2EE also adds a fairly usable (if somewhat heavy) framework for developing web frontends and sharing code between the DNSCatcher server and frontend components. Java is the most popular programming language in the world, according to the TIOBE index. Although Java doesn't have the greatest reputation in the FOSS space, due to historical issues with Sun, and now Oracle, it's still a very viable choice if I'm willing to give up deployment to embedded devices, and the nature of the JVM removes a fair number of the ways I can blow a foot off.
Anyway, SN community, those are my thoughts on my options. As of this writing, I'm leaning towards either rewriting in Rust or staying with Ada, but I'm keeping an open mind and hope you can either highlight options I'm unaware of, or help me make this decision with confidence.
~ 73 de NCommander
Wired Bacteria Form Nature's Power Grid: 'We Have an Electric Planet'
Electroactive bacteria were unknown to science until a couple of decades ago. But now that scientists know what to look for, they're finding this natural electricity across much of the world, even on the ocean floor. It alters entire ecosystems, and may help control the chemistry of the Earth. "Not to sound too crazy, but we have an electric planet," said John Stolz, a microbiologist at Duquesne University in Pittsburgh.
In the mid-1980s, Dr. Stolz was helping to study a baffling microbe fished out of the Potomac River by his colleague Derek Lovley. The microbe, Geobacter metallireducens, had a bizarre metabolism. "It took me six months to figure out how to grow it in the lab," said Dr. Lovley, now a microbiologist at the University of Massachusetts at Amherst.
[...] In the early 2000s, a Danish microbiologist named Lars Peter Nielsen discovered a very different way to build a microbial wire. He dug up some mud from the Bay of Aarhus and brought it to his lab. Putting probes in the mud, he observed the chemical reactions carried out by its microbes.
[...] Each wire runs vertically up through the mud, measuring up to two inches in length. And each one is made up of thousands of cells stacked on top of each other like a tower of coins. The cells build a protein sleeve around themselves that conducts electricity.
As the bacteria at the bottom break down hydrogen sulfide, they release electrons, which flow upward along the "cable bacteria" to the surface. There, other bacteria — the same kind as on the bottom, but employing a different metabolic reaction — use the electrons to combine oxygen and hydrogen and make water.
Cable bacteria are not unique to Aarhus, it turns out. Dr. Nielsen and other researchers have found them — at least six species [open, DOI: 10.1016/j.syapm.2016.05.006] [DX] so far — in many places around the world, including tidal pools, mud flats, fjords, salt marshes, mangroves and sea grass beds.
Submitted via IRC for Bytram
People who report a declining quality of sleep as they age from their 50s to their 60s have more protein tangles in their brain, putting them at higher risk of developing Alzheimer's disease later in life, according to a new study by psychologists at the University of California, Berkeley.
The new finding highlights the importance of sleep at every age to maintain a healthy brain into old age.
"Insufficient sleep across the lifespan is significantly predictive of your development of Alzheimer's disease pathology in the brain," said the study's senior author, Matthew Walker, a sleep researcher and professor of psychology. "Unfortunately, there is no decade of life that we were able to measure during which you can get away with less sleep. There is no Goldilocks decade during which you can say, 'This is when I get my chance to short sleep.'"
Walker and his colleagues, including graduate student and first author Joseph Winer, found that adults reporting a decline in sleep quality in their 40s and 50s had more beta-amyloid protein in their brains later in life, as measured by positron emission tomography, or PET. Those reporting a sleep decline in their 50s and 60s had more tau protein tangles. Both beta-amyloid and tau clusters are associated with a higher risk of developing dementia, though not everyone with protein tangles goes on to develop symptoms of dementia.
Sleep as a potential biomarker of tau and β-amyloid burden in the human brain (DOI: 10.1523/JNEUROSCI.0503-19.2019) (DX)
Scientists 'Speechless' at Arctic Fox's Epic Trek:
A young Arctic fox has walked across the ice from Norway's Svalbard islands to northern Canada in an epic journey, covering 3,506 km (2,176 miles) in 76 days.
"The fox's journey has left scientists speechless," according to Greenland's Sermitsiaq newspaper.
Researchers at Norway's Polar Institute fitted the young female with a GPS tracking device and freed her into the wild in late March last year on the east coast of Spitsbergen, the Svalbard archipelago's main island.
The fox was under a year old when she set off west in search of food, reaching Greenland just 21 days later - a journey of 1,512 km - before trudging forward on the second leg of her trek.
She was tracked to Canada's Ellesmere Island, nearly 2,000 km further, just 76 days after leaving Svalbard.
[...] What amazed the researchers was not so much the length of the journey as the speed with which the fox had covered it - averaging just over 46 km (28.5 miles) a day and sometimes reaching 155 km.
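For the record, the average checks out against the raw tracking numbers:

$$\frac{3506\ \text{km}}{76\ \text{days}} \approx 46.1\ \text{km/day}$$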
"We couldn't believe our eyes at first. We thought perhaps it was dead, or had been carried there on a boat, but there were no boats in the area. We were quite thunderstruck," Eva Fuglei of the Polar Institute told Norway's NRK public broadcaster.
By way of comparison, a standard marathon is approximately 42.2 km. There is an animation (Javascript required) showing her journey over the period.
Russia: Fire kills 14 sailors aboard navy research submersible
A fire aboard a Russian navy research submersible has killed 14 crew members, the Russian defence ministry says.
The crew was poisoned by fumes as the vessel was taking measurements in Russian territorial waters on Monday.
The ministry gave no details about the type of vessel. But Russian media reports say it was a nuclear mini-submarine used for special operations.
The fire was later put out and the vessel is now at Severomorsk, the main base of the Russian Northern Fleet.
Rumors of imminent global conflict turned out to be unfounded.
Also at NYT.
China Snares Tourists' Phones in Surveillance Dragnet by Adding Secret App
China has turned its western region of Xinjiang into a police state with few modern parallels, employing a combination of high-tech surveillance and enormous manpower to monitor and subdue the area's predominantly Muslim ethnic minorities. Now, the digital dragnet is expanding beyond Xinjiang's residents, ensnaring tourists, traders and other visitors — and digging deep into their smartphones.
A team of journalists from The New York Times and other publications examined a policing app used in the region, getting a rare look inside the intrusive technologies that China is deploying in the name of quelling Islamic radicalism and strengthening Communist Party rule in its Far West. The use of the app has not been previously reported.
China's border authorities routinely install the app on smartphones belonging to travelers who enter Xinjiang by land from Central Asia, according to several people interviewed by the journalists who crossed the border recently and requested anonymity to avoid government retaliation. Chinese officials also installed the app on the phone of one of the journalists during a recent border crossing. Visitors were required to turn over their devices to be allowed into Xinjiang. The app gathers personal data from phones, including text messages and contacts. It also checks whether devices are carrying pictures, videos, documents and audio files that match any of more than 73,000 items included on a list stored within the app's code.
Those items include Islamic State publications, recordings of jihadi anthems and images of executions. But they also include material without any connection to Islamic terrorism, an indication of China’s heavy-handed approach to stopping extremist violence. There are scanned pages from an Arabic dictionary, recorded recitations of Quran verses, a photo of the Dalai Lama and even a song by a Japanese band of the earsplitting heavy-metal style known as grindcore.
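The Times does not detail the matching mechanism, but conceptually this kind of scan boils down to comparing file digests against a fixed blocklist. A rough sketch of that idea in Rust, assuming the sha2 and hex crates, with a hypothetical directory path and placeholder digest:

```rust
// Conceptual sketch only: hash each file in a directory and check it
// against a fixed blocklist of known digests. Assumes the `sha2` and
// `hex` crates; the path and digest below are hypothetical placeholders.
use sha2::{Digest, Sha256};
use std::collections::HashSet;
use std::fs;

fn main() -> std::io::Result<()> {
    // Blocklist of hex-encoded digests (placeholder value).
    let blocklist: HashSet<&str> = [
        "0000000000000000000000000000000000000000000000000000000000000000",
    ]
    .into_iter()
    .collect();

    for entry in fs::read_dir("/sdcard/DCIM")? {
        let path = entry?.path();
        if path.is_file() {
            let data = fs::read(&path)?;
            let digest = hex::encode(Sha256::digest(&data));
            if blocklist.contains(digest.as_str()) {
                println!("flagged: {}", path.display());
            }
        }
    }
    Ok(())
}
```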
“The Chinese government, both in law and practice, often conflates peaceful religious activities with terrorism,” Maya Wang, a China researcher for Human Rights Watch, said. “You can see in Xinjiang, privacy is a gateway right: Once you lose your right to privacy, you’re going to be afraid of practicing your religion, speaking what’s on your mind or even thinking your thoughts.”
Also at The Guardian and The Hill.
Related: China Bans Islam-Related Names in Xinjiang
Massive DNA Collection Campaign in Xinjiang, China
China Forces its Muslim Minority to Install Spyware on Their Phones
Massive DNA Collection Campaign Continues in Xinjiang, China
After Decades of Hunting, Physicists Claim They've Made Quantum Material from Depths of Jupiter
A team of French researchers has posted a paper online in which they claim to have achieved the holy grail of extreme-pressure materials science: creating metallic hydrogen in a laboratory.
Physicists have suspected since the 1930s that under extreme pressures, hydrogen atoms — the lightest atoms on the periodic table, each containing just a single proton in its nucleus — might radically change their properties. Under normal circumstances, hydrogen doesn't conduct electricity well and tends to pair with other hydrogen atoms — much like oxygen does. But physicists believe that, subject to enough pressure, hydrogen will act as an alkali metal — a group of elements, including lithium and sodium, that each have a single electron in their outermost orbitals, which they exchange very easily. The whole periodic table is organized around this idea, with hydrogen placed above the other alkali metals in the first column. But the effect has never been conclusively seen in a laboratory.
Now, in a paper posted June 13 to the preprint server arXiv, a team of researchers led by Paul Loubeyre of the French Atomic Energy Commission claims to have pulled it off. They say that their sample of hydrogen, crushed between the points of two diamonds to about 4.2 million times Earth's atmospheric pressure at sea level (425 gigapascals), demonstrated metallic properties.
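For reference, the unit conversion behind that figure (1 atm = 101,325 Pa):

$$\frac{425\ \text{GPa}}{101{,}325\ \text{Pa/atm}} = \frac{4.25\times10^{11}\ \text{Pa}}{1.01325\times10^{5}\ \text{Pa/atm}} \approx 4.2\times10^{6}\ \text{atm}$$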
Also at Gizmodo and ScienceAlert.
Previously: Creation of Jupiter Interior, a Step Towards Room Temperature Superconductivity
Harvard Researchers Report Production and Analysis of Solid Metallic Hydrogen
Solid Metallic Hydrogen, Once Theory, Becomes Reality -- or Maybe Not?
Harvard University's Metallic Hydrogen Sample "Disappeared" or Ruined
Related: New Evidence of Superconductivity at Near Room Temperature