Covers the period:
2017-01-01 00:00:00 ..
2017-06-23 18:50:26 UTC
2017-06-27 14:52:26 UTC
We always have a place for talented people; visit the Get Involved section on the wiki to see how you can make SoylentNews better.
IBM has a customer for its neuromorphic chips, and it is using terms like "neurons per rack" to describe the system's capabilities:
IBM and the U.S. Air Force Research Laboratory (AFRL) today announced they are collaborating on a first-of-a-kind brain-inspired supercomputing system powered by a 64-chip array of the IBM TrueNorth Neurosynaptic System. The scalable platform IBM is building for AFRL will feature an end-to-end software ecosystem designed to enable deep neural-network learning and information discovery. The system's advanced pattern recognition and sensory processing power will be the equivalent of 64 million neurons and 16 billion synapses, while the processor component will consume the energy equivalent of a dim light bulb – a mere 10 watts to power.
[...] The IBM TrueNorth Neurosynaptic System can efficiently convert data (such as images, video, audio and text) from multiple, distributed sensors into symbols in real time. AFRL will combine this "right-brain" perception capability of the system with the "left-brain" symbol processing capabilities of conventional computer systems. The large scale of the system will enable both "data parallelism" where multiple data sources can be run in parallel against the same neural network and "model parallelism" where independent neural networks form an ensemble that can be run in parallel on the same data.
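The two modes IBM describes can be illustrated with a toy sketch (this is not IBM's API or the TrueNorth programming model — just a minimal stand-in showing the difference between running many inputs through one network and running one input through an ensemble of networks):

```python
# Toy illustration of "data parallelism" vs "model parallelism".
from concurrent.futures import ThreadPoolExecutor

def tiny_net(weights, x):
    """A stand-in 'neural network': weighted sum, then threshold."""
    s = sum(w * v for w, v in zip(weights, x))
    return 1 if s > 0 else 0

# Data parallelism: many data sources run against the same network.
shared_weights = [0.5, -0.2, 0.1]
sensor_frames = [[1, 0, 1], [0, 1, 0], [1, 1, 1]]
with ThreadPoolExecutor() as pool:
    data_parallel = list(pool.map(lambda x: tiny_net(shared_weights, x),
                                  sensor_frames))

# Model parallelism: an ensemble of independent networks run on the same data.
ensemble = [[0.5, -0.2, 0.1], [-0.3, 0.4, 0.2], [0.1, 0.1, -0.6]]
frame = [1, 0, 1]
with ThreadPoolExecutor() as pool:
    model_parallel = list(pool.map(lambda w: tiny_net(w, frame), ensemble))

print(data_parallel, model_parallel)
```

In the first case the network is fixed and the inputs vary; in the second the input is fixed and the networks vary, with the ensemble's outputs available for a combined decision.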
"AFRL was the earliest adopter of TrueNorth for converting data into decisions," said Daniel S. Goddard, director, information directorate, U.S. Air Force Research Lab. "The new neurosynaptic system will be used to enable new computing capabilities important to AFRL's mission to explore, prototype and demonstrate high-impact, game-changing technologies that enable the Air Force and the nation to maintain its superior technical advantage."
According to The Wall Street Journal:
The European Union's antitrust regulator on Tuesday fined Alphabet Inc.'s Google a record €2.42 billion ($2.71 billion) for favoring its own comparison-shopping service in search results and ordered the search giant to apply the same methods to rivals as its own when displaying their services.
[...] If the ruling sets a precedent that holds, these firms might all have to rethink how they make products that—like Google's search engine—have become more than just tools, but dominant gateways to the wider internet.
From The New York Times:
While the fine will garner attention, the focus will most likely shift quickly to the changes that Google will have to make to comply with the antitrust decision, potentially leaving it vulnerable to regular monitoring of its closely guarded search algorithm.
CNBC adds that, based on a filing, Google expects to ultimately pay this fine.
NASA says the preliminary design review of its Quiet Supersonic Transport (QueSST) project suggests it is possible to create a supersonic aircraft that doesn't produce a sonic boom.
NASA says "Senior experts and engineers from across the agency and the Lockheed Martin Corporation concluded on Friday that the QueSST design is capable of fulfilling the LBFD aircraft's mission objectives, which are to fly at supersonic speeds, but create a soft 'thump' instead of the disruptive sonic boom associated with supersonic flight today."
NASA's commercial supersonic technology project manager Peter Coen explains, in this video, that "the idea is to design the airplane so that the shock waves that are produced in supersonic flight are arranged in such a way that you don't have a boom. You have just a general kind of a gradual pressure rise that produces a quiet sound."
NASA's next step is finding organisations willing to build a working model of the Low Boom Flight Demonstration (LBFD) experimental airplane and fly it over American cities and towns to hear how much noise it makes. It's hoped those flights could start in 2021.
Nah, I'd rather travel in the kind of zeppelin Sergey Brin is building.
An ESA-funded scientist is developing a magnetic space tug to combat the growing problem of space debris. The tugs could lock onto derelict satellites and deorbit them before they become a hazard to navigation, and because they use cryogenic magnets, they wouldn't have to even touch the derelicts and the targets wouldn't need to be specially modified for towing.
Depending on how it's defined, there are over 500,000 pieces of debris or "space junk" orbiting the Earth, ranging in size from old launch vehicles and dead satellites down to flecks of paint. Because they travel at tens of thousands of miles per hour, even the smallest object can strike with the force of a meteor, and if a large one should hit a satellite, the impact could turn them both into deadly clouds of shrapnel.
Funded by ESA's Networking/Partnering Initiative, Emilien Fabacher of the Institut Supérieur de l'Aéronautique et de l'Espace at the University of Toulouse has come up with a system using magnetic fields generated by superconducting wires cooled to cryogenic temperatures. For his PhD research, he has been using a rendezvous simulator with magnetic interaction models to study how to guide, navigate, and control such tugs.
"With a satellite you want to deorbit, it's much better if you can stay at a safe distance, without needing to come into direct contact and risking damage to both chaser and target satellites," says Fabacher. "So the idea I'm investigating is to apply magnetic forces either to attract or repel the target satellite, to shift its orbit or deorbit it entirely."
Scotland: the land of mist and mountains long associated with kilts, bagpipes, haggis ... and now space launches. Timed to coincide with the Queen's Speech to Parliament, British startup Orbex announced that it will build a new 2,000 m² (21,500 ft²) rocket production facility in Scotland and is scouting for a launch site on the north coast of the country to send small payloads into low Earth orbit.
The announcement comes on the heels of Orbex making a private presentation of its launch technology to potential investors at the Paris Air Show at Le Bourget Airport. It already has a 1,200 m² (12,900 ft²) factory in England, where it is building launch vehicle subsystems, and is now seeking to expand north of the border for the assembly and launching of the completed rocket.
The goal is to create a booster that can lift payloads of up to 150 kg (330 lb) into low Earth orbit (LEO) and, eventually, to send up payloads of up to 220 kg (485 lb) into LEO, polar, or sun-synchronous orbits at altitudes of up to 1,250 km (775 mi). To help achieve this, Orbex is working with regional and national agencies to draft detailed development proposals in line with the UK government's 2017 Spaceflight Bill intended to promote launch sites in the British Isles.
The inaugural rocket will carry the tartan of which clan?
Most microphones are designed to emulate the human ear, hearing sounds that we hear, and not hearing ones that we don't. Scientists from the University of Illinois at Urbana-Champaign, however, have created a new sound that we can't hear but that is picked up by mics of all kinds. It could have some valuable applications, although there's also the potential for misuse.
The university's Coordinated Science Laboratory states that the sound is produced by combining multiple tones that interact with a microphone's mechanical workings, creating what is known as a "shadow" – this is a type of white noise that is detectable only by the microphone, as it's formed within the mic itself.
Transmitted by ultrasonic speakers within a room, the sound could be used to keep confidential conversations from being clearly picked up by hidden listening devices. The people talking would still have no problem hearing each other, as the sound would be inaudible to them.
It could also thwart illegal audio recordings in movie theaters or music venues, plus it could be used in place of Bluetooth for wireless communication between Internet of Things (IoT) devices.
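The article doesn't spell out the mechanism, but a common explanation for inaudible tones becoming audible inside a microphone is intermodulation distortion: a mic's slightly nonlinear response to two ultrasonic tones f1 and f2 creates a component at the difference frequency f2 − f1 that exists only in the recording, not in the air. A hedged numerical sketch of that effect (idealized mic model, not the Illinois team's actual method):

```python
# Two inaudible ultrasonic tones pass through an idealized, slightly
# nonlinear microphone; a tone at f2 - f1 (here 2 kHz, well within human
# hearing) appears in the recorded signal but not in the airborne one.
import math

RATE = 96_000            # sample rate high enough to represent ultrasound
F1, F2 = 38_000, 40_000  # both above the ~20 kHz limit of human hearing

def mic_response(x):
    """Idealized mic with a small quadratic nonlinearity."""
    return x + 0.1 * x * x

def dft_magnitude(signal, freq):
    """Magnitude of one DFT bin at `freq` (naive correlation)."""
    n = len(signal)
    re = sum(s * math.cos(2 * math.pi * freq * i / RATE)
             for i, s in enumerate(signal))
    im = sum(s * math.sin(2 * math.pi * freq * i / RATE)
             for i, s in enumerate(signal))
    return math.hypot(re, im) / n

# 10 ms of the two ultrasonic tones as they travel through the air.
air = [math.sin(2 * math.pi * F1 * i / RATE) +
       math.sin(2 * math.pi * F2 * i / RATE)
       for i in range(RATE // 100)]
recorded = [mic_response(s) for s in air]

# Energy at the 2 kHz difference frequency, before and after the mic.
print(dft_magnitude(air, F2 - F1), dft_magnitude(recorded, F2 - F1))
```

The airborne signal has essentially no energy at 2 kHz, while the mic's quadratic term produces a clear 2 kHz component, which is consistent with a jamming signal that people in the room never hear.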
There is now scaffolding to ensure booting of a newly-linked kernel on every reboot. New random kernels can be linked together automatically in the background by the rc scripts and installed as /bsd. On a fast machine it takes less than a second. A mail is sent to the system administrator. A reboot runs the new kernel, and yet another kernel is built for the next boot.
From Theo de Raadt's email to the list:
Over the last three weeks I've been working on a new randomization feature which will protect the kernel.
The situation today is that many people install a kernel binary from OpenBSD, and then run that same kernel binary for 6 months or more. We have substantial randomization for the memory allocations made by the kernel, and for userland also of course.
However that kernel is always in the same physical memory, at the same virtual address space (we call it KVA).
Improving this situation takes a few steps.
Recently I moved all our kernels to a new mapping model, with patrick and visa taking care of two platforms.
Previously, the kernel assembly language bootstrap/runtime locore.S was compiled and linked with all the other .c files of the kernel in a deterministic fashion. locore.o was always first, then the .c files order specified by our config(8) utility and some helper files.
In the new world order, locore is split into two files: One chunk is bootstrap, that is left at the beginning. The assembly language runtime and all other files are linked in random fashion. There are some other pieces to try to improve the randomness of the layout.
As a result, every new kernel is unique. The relative offsets between functions and data are unique.
[...] Our immune systems work better when they are unique. Otherwise one airline passenger from Singapore with a new flu could wipe out Europe (they should fly to Washington instead).
Our computers should be more immune.
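The core of the scheme — pin the bootstrap chunk first, shuffle everything else — can be sketched as follows. This is an illustrative stand-in, not OpenBSD's actual rc script or link command line, and the object file names are hypothetical:

```python
# Sketch of randomized kernel link ordering: the bootstrap object stays
# first so the machine can boot, while all remaining objects are linked
# in a freshly shuffled order, giving each kernel binary unique relative
# offsets between its functions and data.
import random

def randomized_link_order(objects, bootstrap="locore0.o"):
    """Return a link order with the bootstrap chunk pinned first."""
    rest = [o for o in objects if o != bootstrap]
    random.shuffle(rest)  # unique layout for every build
    return [bootstrap] + rest

objs = ["locore0.o", "init_main.o", "kern_fork.o", "vfs_bio.o",
        "uipc_socket.o"]
order = randomized_link_order(objs)
print(order)
# A real build would then relink, e.g.:
#   ld -o /bsd.new <order...>    (illustrative, not the actual invocation)
```

Because an attacker's gadget addresses and structure offsets differ on every machine and every boot, a single leaked kernel layout no longer describes anyone else's system.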
[Editors note: This is a couple weeks old now, but was by far the best tech story I could find in the submission queue]
Another day, another very fake news story from the network President Donald Trump has identified as "very fake news."
CNN's Thomas Frank on Thursday evening published what would have been considered an explosive report if remotely true: One anonymous source told him both the Treasury Department and Senate Intelligence Committee are probing a Russian investment fund with ties to several senior finance world leaders close to President Trump. Only problem? Both Trump administration officials and those close to Senate GOP leadership say it's simply untrue.
The retraction from CNN:
On June 22, 2017, CNN.com published a story connecting Anthony Scaramucci with investigations into the Russian Direct Investment Fund.
That story did not meet CNN's editorial standards and has been retracted. Links to the story have been disabled. CNN apologizes to Mr. Scaramucci.
According to BuzzFeed News, CNN has responded by actually requiring executives to review stories:
CNN is imposing strict new publishing restrictions for online articles involving Russia after the network deleted a story and then issued a retraction late Friday, according to an internal email obtained by BuzzFeed News.
The email went out at 11:21 a.m. on Saturday from Rich Barbieri, the CNNMoney executive editor, saying "No one should publish any content involving Russia without coming to me and Jason," a CNN vice president.
At least now we'll know who to blame.
[Ed Note: I debated leaving this in politics or dropping it to the main page. I opted for the latter because, politics or not, the prevalence of "fake news" is something we deal with on a daily basis, from our respective social media feeds to all the major broadcast and cable news networks. How are we to tell what is "fake" and what is actually (relatively) "true"? The mainstream media all put their spin on everything: a right slant for some, a left slant for others. Is the truth somewhere in between, or is it a story we aren't getting because the mainstream media are so intent on telling their narrative that we the people get the shit end of the stick regardless of where we get the so-called news?]
The US Supreme Court has partially lifted an injunction against President Donald Trump's travel ban.
The Supreme Court said in Monday's ruling: "In practical terms, this means that [the executive order] may not be enforced against foreign nationals who have a credible claim of a bona fide relationship with a person or entity in the United States.
"All other foreign nationals are subject to the provisions of [the executive order]."
Mark this down as a win for Donald Trump. The path to entry into the US for immigrants and refugees from the affected nations, if they don't have existing ties to the US - either through family, schools or employment - just became considerably harder.
The decision marks a reaffirmation of the sweeping powers the president has traditionally been granted by the courts in areas of national security. There was fear in some quarters that the administration's ham-fisted implementation of its immigration policy could do lasting damage to the president's prerogatives. That appears not to be the case.
Submitted via IRC for TheMightyBuzzard
A new study reveals organizations are wasting an average of $6 million on the time to detect and contain insecure endpoints, among other staggering findings that show endpoint threats are a growing concern, companies are not efficiently protecting their proprietary data, and the cost and complexity of reducing endpoint risks are at an all-time high.
The study also revealed organizations are finding it increasingly difficult to identify dark endpoints — the rogue, out-of-compliance, or off-network devices that create blind spots and increase an organization's vulnerability to attack.
While confidence in endpoint security ranked low, the IT security professionals surveyed believe that close to 60 percent of the hours currently invested in the capture and evaluation of intelligence surrounding the true threats, to both compliance and proprietary data, can be saved each week by deploying automated solutions.
[...] "Managing endpoint security and protecting proprietary data is more than an IT issue, it's increasingly a global business performance and national security concern," said Geoff Haydon, CEO, Absolute. "This study along with recent ransomware attacks and high-profile data breaches show the danger of today's endpoint blind spots, and underscore that automation and newer approaches to endpoint security are key to safeguarding endpoints and the sensitive data on them for optimal business performance."
It can also cost you bruising about the head and face when you try to blame your admins.
Submitted via IRC for Bytram
The number of drug-resistant tuberculosis (TB) cases is rising globally. But a newly discovered natural antibiotic — produced by bacteria from the lung infection in a cystic fibrosis patient — could help fight these infections. Lab testing reported in the Journal of the American Chemical Society shows that the compound is active against multi-drug resistant strains.
Starting with the famous first discovery of penicillin from mold, scientists have continued to search for natural sources of antibiotics. And as pathogens develop resistance to once-reliable medicines, the search has taken on a new urgency. By 2040, more than a third of all TB cases in Russia, for example, could show resistance to first-line drugs currently used to fight the disease, a recent report published in Lancet estimates. Among potential new drug sources are species of the bacterial genus Burkholderia that thrive in a wide range of habitats, from soil to the human lung. One way these microbes have adapted to these diverse environments is by making potent antibiotics to take out their competition. In light of the growing threat of drug-resistant bacteria, particularly among TB strains, Gregory L. Challis, Eshwar Mahenthiralingam and colleagues wanted to see if Burkholderia might produce a promising anti-TB compound.
Ethan Siegel at Starts With A Bang draws attention to the results of the Outer Solar System Origins Survey (OSSOS). The OSSOS project, which started in 2013 (before the Planet Nine hypothesis was proposed) to survey the minor planets of the outer Solar System, has discovered and determined the orbits of well over eight hundred trans-Neptunian objects (TNOs) over the course of its operation. They have recently published a paper that basically puts the kibosh on the Planet Nine hypothesis. Planet Nine was initially proposed to explain an apparent anomalous clustering of orbits of TNOs consistent with them being perturbed by a large planet, but the OSSOS results have found no such anomalous clustering, and are rather seeing a distribution consistent with uniform randomness.
It was perhaps the most exciting idea to come out of science last year: that an undiscovered, giant world exists in our Solar System, far beyond the orbit of Neptune. This wouldn't be some tiny, frozen world like Pluto or Eris, smaller even than Earth's Moon, but a monstrous super-Earth, perhaps ten times as massive as our own world and almost as large as Uranus or Neptune in radius. As the months passed since it was first proposed by Konstantin Batygin and Mike Brown, they compiled additional evidence for it, and things were looking rosy. But a new study by Shankman et al. has turned the evidence on its head, disfavoring the planet's existence and uncovering a bias in the data itself.
[...] what they found was entirely consistent with no Planet Nine, and that the overall case for Planet Nine's existence was substantially weakened by their study. In particular, the clustering in the orientation of each orbit in space (defined by multiple variables, ω and Ω) that earlier studies, like Batygin & Brown and Trujillo & Sheppard, previously noticed simply doesn't exist in this new, unbiased study.
We find no evidence in the OSSOS sample for the ω clustering that was the impetus for the current additional planet hypothesis.
The data from this new study is quite clear that the previously observed correlation, which was the impetus for hypothesizing Planet Nine, doesn't persist into the new sample.
OSSOS also has a Frequently Asked Questions page about these findings. They don't entirely rule out the existence of a substantial (perhaps Mars-sized) planet in the outer reaches of the Solar System, but their data makes it highly improbable that a super-Earth on the scale of Uranus or Neptune might be out there.
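OSSOS's actual analysis is far more involved (it carefully models its survey biases), but the basic statistical question — do the ω angles cluster, or are they uniform? — can be probed with a simple Rayleigh-style statistic. The mean resultant length R of a set of unit vectors is near 0 for uniformly distributed angles and near 1 for tightly clustered ones. A hedged sketch with synthetic data (the numbers below are illustrative, not the published samples):

```python
# Rayleigh-style clustering probe: average the unit vectors for each
# angle; a long mean resultant means the angles cluster, a short one
# means they are consistent with uniform randomness.
import math
import random

def mean_resultant_length(angles_deg):
    n = len(angles_deg)
    c = sum(math.cos(math.radians(a)) for a in angles_deg) / n
    s = sum(math.sin(math.radians(a)) for a in angles_deg) / n
    return math.hypot(c, s)

random.seed(1)
# A large unbiased sample with no clustering (the OSSOS-like case).
uniform = [random.uniform(0, 360) for _ in range(800)]
# A small sample drawn from a narrow band (the earlier-surveys-like case).
clustered = [random.gauss(318, 15) % 360 for _ in range(10)]

print(mean_resultant_length(uniform))    # small: consistent with uniform
print(mean_resultant_length(clustered))  # large: apparent clustering
```

The sketch also hints at the underlying lesson: a small, unevenly collected sample can show striking apparent clustering that evaporates once a larger, bias-characterized sample is available.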
Submitted via IRC for FatPhil
Russia's FSB1 security agency has said the Telegram mobile messaging app was used by a suicide bomber who killed 15 people in St Petersburg in April.
Authorities have already threatened to block the app, founded by Russian businessman Pavel Durov, for refusing to sign up to new data laws.
Mr Durov has refused to let regulators access encrypted messages on the app.
Telegram has some 100 million users and has been used by so-called Islamic State (IS) and its supporters.
IS used the app to declare its involvement in the jihadist attack on and around London Bridge in the UK last month.
Telegram has been used by jihadists in France and the Middle East too, although the app company has highlighted its efforts to close down pro-IS channels. Telegram allows groups of up to 5,000 people to send messages, documents, videos and pictures without charge and with complete encryption.
Now the FSB has said that as part of its investigation into the St Petersburg attack it "received reliable information about the use of Telegram by the suicide bomber, his accomplices and their mastermind abroad to conceal their criminal plots at all the stages of preparation for the terrorist attack".
A Russian identified as Akbarzhon Jalilov blew himself up between two underground stations on 3 April. The security agency said that Telegram was the messenger of choice for "international terrorist organisations in Russia" because they could chat secretly with high levels of encryption.
1 According to Wikipedia, FSB:
The Federal Security Service of the Russian Federation (FSB; Russian: Федеральная служба безопасности Российской Федерации (ФСБ), tr. Federal'naya sluzhba bezopasnosti Rossiyskoy Federatsii; IPA: [fʲɪdʲɪˈralʲnəjə ˈsluʐbə bʲɪzɐˈpasnəstʲɪ rɐˈsʲijskəj fʲɪdʲɪˈratsɨjɪ]) is the principal security agency of Russia and the main successor agency to the USSR's Committee of State Security (KGB). Its main responsibilities are within the country and include counter-intelligence, internal and border security, counter-terrorism, and surveillance as well as investigating some other types of grave crimes and federal law violations.
While many of us were preoccupied with ISC 2017 last week, the launch of the Chicago Quantum Exchange went largely unnoticed. So what is such a thing? It is a Department of Energy sponsored collaboration between the University of Chicago, Fermi National Accelerator Laboratory, and Argonne National Laboratory to "facilitate the exploration of quantum information and the development of new applications with the potential to dramatically improve technology for communication, computing and sensing."
The new hub will be within the Institute for Molecular Engineering (IME) at UChicago. Quantum mechanics, of course, governs the behavior of matter at the atomic and subatomic levels in exotic and unfamiliar ways compared to the classical physics used to understand the movements of everyday objects. The engineering of quantum phenomena could lead to new classes of devices and computing capabilities, permitting novel approaches to solving problems that cannot be addressed using existing technology.
Though somewhat DRY reading, the author provides salient commentary on how many declarations appear in style sheets, how many of those are unique, and offers perspectives on what that means for developing and maintaining web sites. From https://meiert.com/en/blog/70-percent-css-repetition/:
Teaser: Check on how many declarations you use in your style sheets, how many of those declarations are unique, and what that means.
In 2008 I argued that using declarations just once makes for a key method to DRY up our style sheets (this works because avoiding declaration repetition is usually more effective than avoiding selector repetition—declarations are often longer). I later raised the suspicion that the demand for variables would have been more informed and civilized had we nailed style sheet optimization. What I haven't done so far is gather data to show how big the problem really is. That's what I've done in the last few weeks.
In a Google spreadsheet I've collected the Top 200 of content sites in The Moz Top 500, and taken another 20 sites related to web development for comparison. (I've also added 3 of my sites out of curiosity.) I've then used the extremely useful CSS Stats to determine the total number of CSS declarations, as well as the number of unique declarations, to calculate ratios as well as averages: You get the idea as soon as you check out said spreadsheet.
[...] Before we dissect the numbers, let's first quickly establish what one can consider good CSS development practice:
As with all code, don't repeat yourself (DRY).
In CSS, an effective way to not repeat yourself is to use declarations just once. The resulting repetition of selectors is less of a problem because declarations are usually longer than selectors—and yet variable selector length makes using declarations once a soft rule that still requires thinking. (More as follows.)
No repetition of declarations is theoretically attainable, but in practice, two things interfere: Not just in cases when selector order is mandated (another oft-forgotten subject, yet there are detailed proposals on how to standardize selector sorting), the cascade may not permit grouping of all selectors; and when strict separation of modules (CSS sections) is important, one may also tolerate some repetition. These two pieces deserve more elaboration, but relevant here is the note that strict separation of modules should not be used as a blanket refusal to curb declaration repetition—as the data show, that would be foolish.
Now, what is a reasonable amount of repetition? From my experience, 10–20%; in other words, 80–90% of a style sheet should consist of unique declarations. Reality, however, looks vastly different, and here we get to the 70%.
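The measurement itself is easy to reproduce in miniature. The author used CSS Stats; the sketch below is a simplified stand-in that ignores at-rules, comments, and shorthand equivalence — it just counts declarations, counts the unique ones, and reports the repetition ratio:

```python
# Count total vs unique declarations in a style sheet and report how
# much of it is repetition (the "70%" figure from the article, computed
# the same basic way on a tiny example).
import re

def declaration_stats(css):
    """Return (total, unique, repetition_ratio) for a CSS string."""
    decls = []
    # Grab everything between { and }, split on ';', normalize whitespace.
    for block in re.findall(r"\{([^}]*)\}", css):
        for d in block.split(";"):
            d = " ".join(d.split()).lower()
            if d:
                decls.append(d)
    total = len(decls)
    unique = len(set(decls))
    return total, unique, (1 - unique / total) if total else 0.0

css = """
h1 { color: #333; margin: 0; }
h2 { color: #333; margin: 0; }
p  { color: #333; line-height: 1.5; }
"""
total, unique, repetition = declaration_stats(css)
print(total, unique, round(repetition, 2))
```

Here 6 declarations reduce to 3 unique ones, a 50% repetition ratio — already well outside the 10–20% the author considers reasonable, and a hint of how quickly repeated `color`/`margin` pairs accumulate in real style sheets.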
I found the analysis to be informative and eye-opening. What have you found to be best practices when it comes to Cascading Style Sheets?