



posted by Fnord666 on Wednesday March 15 2017, @10:54PM   Printer-friendly
from the wireheads-are-coming dept.

The brain is soft and electronics are stiff, which can make combining the two challenging, such as when neuroscientists implant electrodes to measure brain activity and perhaps deliver tiny jolts of electricity for pain relief or other purposes.

Chemical engineer Zhenan Bao is trying to change that. For more than a decade, her lab has been working to make electronics soft and flexible so that they feel and operate almost like a second skin. Along the way, the team has turned its focus to making brittle, electrically conductive plastics more elastic.

Now in Science Advances, Bao's team describes how they took one such brittle plastic and modified it chemically to make it as bendable as a rubber band, while slightly enhancing its electrical conductivity. The result is a soft, flexible electrode that is compatible with our supple and sensitive nerves.

"This flexible electrode opens up many new, exciting possibilities down the road for brain interfaces and other implantable electronics," said Bao, a professor of chemical engineering. "Here, we have a new material with uncompromised electrical performance and high stretchability."

The material is still a laboratory prototype, but the team hopes to develop it as part of their long-term focus on creating flexible materials that interface with the human body.

More information: Yue Wang et al. A highly stretchable, transparent, and conductive polymer, Science Advances (2017). DOI: 10.1126/sciadv.1602076


Original Submission

posted by martyb on Wednesday March 15 2017, @09:26PM   Printer-friendly
from the I-see-what-you-did-there dept.

Intel has muscled its way into the yet-to-be-profitable driverless car market with a $15.3 billion acquisition of a sensor-making company:

The likes of Google and Uber already have invested billions of dollars in their own technology, signing partnerships with automakers like Chrysler and Volvo and sending test vehicles onto the road in a bid to cement their place in the industry, which is estimated to be worth $25 billion annually by 2025, according to Bain & Company, the consultancy firm.

But by acquiring Mobileye, whose digital vision technology allows autonomous vehicles to safely navigate city streets, Intel aims to provide a complete package of digital services, looking to supply to automakers that want to offer autonomous driving, but which do not want to rely on the likes of Google for such services. "Scale is going to win in this market," Brian Krzanich, Intel's chief executive, told investors on Monday. "I don't believe that every carmaker can invest to do independent development into autonomous cars."

Bloomberg has this analysis:

To the non-tech crowd on Wall Street, a bet of this scale by an industry stalwart such as Intel serves to validate the growth strategy, even if the payoff is years down the road. But it's also a reminder that enthusiasm for self-driving cars is making chip companies go crazy. At this point, it's hard to gauge how big a movement autonomous cars will become, nor how it will affect companies that participate in the technology. Mobileye's automotive imaging technology, for example, is being tested by car makers such as BMW, but you can bet the tech superpowers developing driverless cars will cook up the key components on their own, as Google parent company Alphabet Inc. and Uber Technologies Inc. are doing. The more the tech industry heavyweights rely on self-built components, the more that threatens to cut Mobileye out of the self-driving future -- or at least slash prices for Mobileye components. The self-driving auto unit of Alphabet claims to be pushing down the prices for the imaging technology that maps the surroundings of autonomous cars. That can't be good for Mobileye's ability to maintain its 75 percent gross profit margins.

Also at Reuters, AnandTech, and Nasdaq.


Original Submission

posted by Fnord666 on Wednesday March 15 2017, @07:53PM   Printer-friendly
from the a-pretty-penny dept.

SoftBank will reportedly sell a 25% stake in ARM ($8 billion) to the ~$100 billion investment fund it has jointly created with Saudi Arabia, Apple, and others. ARM Holdings was bought by SoftBank for around $32 billion last year.

SoftBank Chairman Masayoshi Son met with Saudi King Salman during the King's state visit to Japan. Son gave the King one of his company's humanoid robots. Saudi Arabia is seeking investors as it prepares to launch an initial public offering for Saudi Aramco. Toyota agreed to conduct a feasibility study into the idea of production in Saudi Arabia, the result of one of twenty memorandums of understanding signed by Japanese companies and institutions with Saudi Arabia.

Also at The Telegraph, and Arab News (extra).

Related: Softbank to Invest $50 Billion in the US


Original Submission

posted by Fnord666 on Wednesday March 15 2017, @05:21PM   Printer-friendly
from the liability-only dept.

SpaceX has been required to purchase $63 million of liability coverage for its next launch, up from $13 million:

A SpaceX rocket scheduled to boost a commercial satellite into orbit from Florida before dawn on Tuesday carries five times as much liability coverage for prelaunch operations as launches in previous years. The higher limit, mandated by federal officials, reflects heightened U.S. concerns about the potential extent of damage to nearby government property in the event of an accident before blastoff. But at this point it isn't clear what specifically prompted imposition of higher liability coverage on Space Exploration Technologies Corp.

On a related note, SpaceX's most recently scheduled launch has been delayed:

Targeting Thursday, March 16 for @EchoStar XXIII launch; window opens at 1:35am EDT and weather is 90% favorable.

If you are in the area, and can hang around for another couple days, there's a Delta 4 launch scheduled for Friday shortly after sunset (2344 UTC).


Original Submission

posted by NCommander on Wednesday March 15 2017, @04:00PM   Printer-friendly
from the I-will-show-myself-out-for-that-pun dept.

Last time — with the help of the excellent Michal Necasek of the OS/2 Museum — we talked about mapping the damage within the existing Xenix 386 disks and successfully got the system to the end of installation.


For those new to the series, I recommend you catch up with the previous three articles:

  1. Restoring Xenix 386 2.2.3c, Part 1
  2. Xenix 2.2.3c Restoration: No Tools, No Problem (Part 2)
  3. Xenix 2.2.3c Restoration: Damage Mapping (Part 3)

Unfortunately, at this point we had exhausted the data we could successfully recover from the TeleDisk images, so now it was time to think laterally in our quest to restore viable installation media. Back in Part 2, I posted the disk headers from each disk indicating what it was and where it was in the set:

./tmp/_lbl/prd=xos/typ=386AT/rel=2.2.3c/vol=N03
./tmp/_lbl/prd=xos/typ=n86/rel=2.2.2c/vol=B02
./tmp/_lbl/prd=xos/typ=n86/rel=2.2.2c/vol=X01

I also noted that there was a slight version mismatch (2.2.3 vs. 2.2.2). What I didn't point out was that the type was different: n86 vs. 386AT, Xenix-speak for "generic x86" vs. "386 AT". As Michal and I discussed it, I realized there was another place we could go to find sectors.

In-Depth Analysis

Extracting what I could from the damaged archives, what struck me the most were the modification times of some of the binaries:

[Screenshot: file modification dates]

April 13th, 1987. Quite a bit before the files on the N disks, which date from late November 1987 to early 1988. The file utility (which understands x.out headers) also showed that these binaries were *not* 386 specific:

$ file *
capinfo: ASCII text
fixpad:  Microsoft a.out [..] V2.3 V3.0 86 small model executable
mapchan: Microsoft a.out [..] V2.3 V3.0 86 small model executable
tic:     Microsoft a.out [..] V2.3 V3.0 86 small model executable
tid:     Microsoft a.out [..] V2.3 V3.0 86 small model executable
tput:    Microsoft a.out [..] V2.3 V3.0 86 small model executable
trchan:  Microsoft a.out [..] V2.3 V3.0 86 small model executable
uupick:  ASCII text
uuto:    ASCII text

Given we had already struck paydirt with the International Supplement and /etc/init, was it possible that (nearly) identical binaries might have shipped as part of another Xenix release?

SCO had three known releases of Xenix 2.2 for the x86 architecture: a 286 version, a 386 version for AT-compatibles, and a 386 version for PS/2. At the time, only the first one was known to survive and, more importantly, had been dumped. As such, I grabbed a copy of Xenix 286 2.2.1, the closest version known to match what we had.

Unlike the 386 version, Xenix 286 shipped on seven high-density 5.25-inch (1.2 MB) floppy disks instead of low-density ones. After extracting the archives, I found exactly what I was looking for:

./tmp/_lbl/prd=xos/typ=286AT/rel=2.2.1e/vol=N02
./tmp/_lbl/prd=xos/typ=n86/rel=2.2.1c/vol=B01
./tmp/_lbl/prd=xos/typ=n86/rel=2.2.1c/vol=X01
./tmp/_lbl/prd=xos/typ=n86/rel=2.2.1c/vol=X02
./tmp/_lbl/prd=xos/typ=n86/rel=2.2.1c/vol=X03

Not a perfect match, but the Extended Utilities disks were de facto shared between the 286 and 386 versions. Running cmp against a few binaries revealed that the vast majority were identical, though a few were different. This showed that SCO only recompiled a binary when it actually changed, rather than doing a blanket recompile, which was likely done to help reduce the QA workload. Jackpot!

By carefully comparing with a hex editor to make sure the ends matched, I transplanted the missing sectors from the donor, and put them into the correct places in the X disks. That got me a reassembled X1, one of the two sectors in X3, and X4. Unfortunately, doscat on X2 had changed, so I set that aside for now. Nonetheless, halfway there.
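For readers who want a feel for the mechanics, the following is a minimal Python sketch of that kind of sector transplant. The filenames and offset are hypothetical placeholders; the actual work described above was done by hand with a hex editor.

SECTOR = 512

def transplant_sector(donor_path, target_path, offset):
    # Copy one 512-byte sector from a donor disk image into the same
    # position in the target image (offsets assumed identical on both).
    with open(donor_path, "rb") as donor:
        donor.seek(offset)
        sector = donor.read(SECTOR)
    with open(target_path, "r+b") as target:
        target.seek(offset)
        target.write(sector)

# Hypothetical usage: pull sector 42 from the 286 donor into the 386 image.
# transplant_sector("x1_286.img", "x1_386.img", 42 * SECTOR)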

Tar Format Details

Having hit paydirt, I turned my attention to the parts of the disk where a missing sector had bisected a file on X3 and X5. As technology has moved on over the years, even time-tested utilities such as tar have seen changes to keep up with the times. The archives on the disks were in the original "v7" tar format, which has a simple and straightforward header (from Wikipedia):

Offset  Size  Field
0       100   File name
100       8   File mode
108       8   Owner's numeric user ID
116       8   Group's numeric user ID
124      12   File size in bytes (octal)
136      12   Last modification time in numeric Unix time format (octal)
148       8   Checksum for header record
156       1   Link indicator (file type)
157     100   Name of linked file

Additionally, tar works on the concept of blocks, which are 512 bytes in length. My examination of the installer revealed that the disks were written with a blocking factor of 18; in other words, each file was chunked into records of 9216 bytes (512*18). If the file wasn't an exact multiple of 9216 bytes, it would be padded out to that length, and then the next record header would follow.
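To make that layout concrete, here is a small Python sketch (illustrative only, not a tool used in this restoration) that unpacks a v7 header from a 512-byte block and computes the padding a record needs at a blocking factor of 18:

import struct

BLOCK = 512
BLOCKING_FACTOR = 18
RECORD = BLOCK * BLOCKING_FACTOR  # 9216 bytes per record on these disks

def parse_v7_header(block):
    # Field widths follow the v7 header table above (257 bytes used).
    fields = struct.unpack("100s8s8s8s12s12s8s1s100s", block[:257])
    name, _mode, _uid, _gid, size, mtime, _chksum, _type, _link = fields
    return {
        "name": name.rstrip(b"\0").decode("ascii", "replace"),
        "size": int(size.strip(b" \0").decode() or "0", 8),    # octal
        "mtime": int(mtime.strip(b" \0").decode() or "0", 8),  # octal
    }

def record_padding(total_bytes):
    # Padding needed to round a record out to a multiple of 9216 bytes.
    return (-total_bytes) % RECORD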

The tar header itself is also stored in its own block, padded out to 512 bytes. For those keeping track at home, a missing sector on these disks was also 512 bytes. You might see where I'm going with this. By sheer luck, all three bifurcations happened in this padding section and had annihilated the header, but left the binary data following it completely intact. By manually extracting this data, I confirmed that the binaries in all three cases matched binaries on Xenix 286.

By working backwards from the start of the binary data, I located the exact place in the archive where the header *should* be. Due to the way blocking worked, the header would always sit at the start of the 512-byte boundary that had been annihilated in the bad dumps. Since I knew the binaries matched the older release, I simply dropped in the old tar headers, saved the files, and "blamo". That restored everything except doscat, and a test in the VM confirmed I got working (and valid!) binaries. X1 and X3-5 were now fully restored.
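A sketch of that splice in Python, again purely illustrative (the archive path, data offset, and donor header are assumptions, and the real edits were done in a hex editor):

BLOCK = 512

def splice_header(archive_path, data_start, donor_header):
    # The annihilated v7 header must occupy the 512-byte block immediately
    # before the offset where the surviving file data begins.
    assert data_start % BLOCK == 0 and len(donor_header) == BLOCK
    with open(archive_path, "r+b") as archive:
        archive.seek(data_start - BLOCK)
        archive.write(donor_header)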

Using the same technique, Michal reassembled the games disk.

Interlude: Serial Garbage

Before we get into doscat, one thing that had been driving me up a wall was that I couldn't get the serial ports to work. They were being properly enumerated, and I could even bring them up with "enable tty1a", but shortly afterwards, I'd get the following message:

“garbage or loose cable on serial dev 0, port shut down”

Searching USENET showed that this was a relatively common problem in the era, and SCO even released an SLS update for Xenix 2.2 286 and 2.3 386 for it. Unfortunately, they *didn't* release the update for 2.2 386, and even with that update, it appears this bug may have persisted well into the future, as I could find reports of UnixWare reporting this error under VMware.

I managed to identify the serial interrupt vector (_sioinir), but before I could even dig deeply into it, Michal beat me to the fix. I’ll let him explain in his own words:

The problem is that the emulated serial port is “too fast” and outgoing data may move over the virtual wire as fast as it is written. The Xenix driver does not expect that and if it successfully writes 10 characters in a row from its interrupt handler, or more accurately if there are still pending interrupts after ten loops through the interrupt handler, it throws up its hand and goes off to sulk in the corner.

Fortunately it’s easy to patch out the 10-loop limit and get functioning serial terminals.

(in this specific case, NOPing out the inc instruction in the serial driver).
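As a purely illustrative sketch of that kind of binary patch (the filename, offset, and instruction length below are hypothetical; locating the real instruction is the hard part):

def nop_out(path, offset, length):
    # Overwrite `length` bytes at `offset` with x86 NOP opcodes (0x90).
    with open(path, "r+b") as f:
        f.seek(offset)
        f.write(b"\x90" * length)

# Hypothetical usage: NOP out the counter increment in the serial driver.
# nop_out("xenix.patched", 0x1234, 2)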

With an even more patched kernel in place, I could now raise a terminal and enjoy copy and paste! Unfortunately, vi wasn't completely happy with this; Xenix doesn't understand the xterm termtype, and it *really* wants a legitimate vt100 on the other end of the connection. Still, for basic data copying it was wonderful, and it would open the door to doing MicNet and UUCP in the future.

As of this writing, I haven't taught Xenix what an "xterm" is, but I could probably add it to its termcap database and rebuild it for a fully functional vi. Interestingly, DTTerm from CDE works perfectly, and vi is much happier with it than the standard gnome-terminal or PuTTY.

And with that side trip over, let’s get back to doscat.

doscat

doscat was the first sector where reconstruction vs. recovery came into play. As I mentioned before, Xenix used its own variation of the x.out binary format to support multiple code and data segments. In this specific case, the missing piece of data was located towards the tail end of the binary — in the data segment — not far past the end of the string table. Given that we had two other versions of doscat to compare it to, Michal and I tried to reconstruct it.

[Screenshot: F6 fill bytes in doscat]

Our one hint was that the missing data block started with 02. Fortunately, a known copy of the Xenix 386 2.3 development tools has survived, and better still, it was installable with the "custom" utility.

[Screenshots: Insert D1, Manipulate packages screen, Select packages, feeding disks]

The ‘hdr’ utility confirmed that the missing bit was indeed in the data segment:

# hdr /usr/bin/doscat
magic number:   206     (x.out)
ext size:       002c
text size:      00004141
data size:      00000bbc
bss size:       00000f64
symbol size:    00000000
reloc table:    00000000
entry point:    003f:0000
little endian byte ordering
little endian word ordering
cpu type:       8086
run-time environment:
        Xenix version 5
        segmented format
        Small model, fixed stack, pure text, separate I & D, executable
stack size:     00001500
segpos:         00000060
segsize:        00000040

Using adb to explore the broken data segment, I found that with the default fill pattern of F6, doscat would segfault when trying to write to location 0xF6F6, although doscat -h worked. Nulling out the block caused all output from the command to die. Michal managed to determine it was part of the __iob block, which backs the standard I/O streams, and successfully copied it from another version of doscat.

I think the area just before the end of the missing sector within ‘doscat’ contains the __iob array or something like that. The 02h byte which survived was almost certainly supposed to be preceded by 06h and a few other bytes. If it’s all zeroed, there will be no I/O (stdout closed or whatever). If F6h bytes are in place, some I/O might happen.

The 02h likely corresponds to the _file field of the stderr entry.
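Incidentally, the F6 fill pattern gives an easy way to hunt for unrecovered sectors in an extracted binary. A small sketch of that idea (not a tool from this project) is below; it simply flags any 512-byte sector consisting entirely of the fill byte:

SECTOR = 512
FILL = b"\xf6" * SECTOR

def find_fill_sectors(path):
    # Return the offsets of 512-byte sectors that are nothing but 0xF6,
    # a strong hint that the underlying data was never recovered.
    with open(path, "rb") as f:
        data = f.read()
    return [offset for offset in range(0, len(data) - SECTOR + 1, SECTOR)
            if data[offset:offset + SECTOR] == FILL]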

My earlier attempts to reconstruct the sector had met with failure, as I had a similar (but incorrect) block of data. With the rebuilt sector in place, I could successfully install all the Extended Utilities disks and have a full set of working utilities. Unfortunately, due to the hosed manifest file, /etc/custom, which is used to install add-on software, still wasn't functioning. However, manual installation via tar -xf /dev/fd0 was perfectly viable. With all the extended utilities in place, the system was considerably more complete. One step closer to full restoration. All that remained at this point were four sectors: N1's manifest, N4's libmdep.a, N5's adb, and I3's installation script.

Scoreboard

As I mentioned in the previous article, a copy of Xenix 386PS emerged the day before I posted part 1 of this series. As suspected, the version of the Extended Utilities it shipped was identical to the one we were rebuilding, so we could actually see how close we were. When Michal compared them side-by-side, he found that X1, 3, 4, and 5 were identical to our reconstructions!

Talk about hitting a home run! X2 was different, since the rebuilt doscat sector was something of a guess at how it should work. Still, we always knew that at best this was going to be a reasonable approximation of what SHOULD be on the disks; a functional reconstruction is better than no reconstruction, after all. As part of this work, we also learned a lot about the SCO copy protection, which became a pain in the butt when rebuilding N4.

The Road Ahead

Looking ahead, I’ve likely got one or two more articles on reconstruction, some interesting bits of history we gleaned from an in-depth study of the Xenix compiler (which has a surprising ancestor), an article demoing some Xenix apps such as Microplan, MicNet, and if I can set it up properly, UUCP support (plus discussing bang-path addressing). After we finish with that, I might have some interesting material on NeXTstep to write up, and perhaps reboot my old Itanium box and get some OpenVMS coverage.

On the whole, I think the reception to this type of article has been good, and I'd like to thank our subscribers for their support in helping keep SoylentNews up for three years now. As we proceed into 2017, I'm hoping we'll reach a level of success where we can repay the stakeholders, and perhaps even have a budget for obtaining and exploring even more interesting pieces of technology.

~ NCommander

posted by martyb on Wednesday March 15 2017, @02:48PM   Printer-friendly
from the energy-utopia dept.

Scientific American has a story on recent developments by scientists working on solar fuels.

Experts have long been experimenting with techniques to create solar fuels, which allow all the advantages of conventional fossil fuels along with the environmental benefits of renewable energy. However, this requires a "photoanode" — a sort of catalyst that can set the ball rolling — and researchers have had a tough time identifying them in the past.

Now, scientists from the Department of Energy's Lawrence Berkeley National Laboratory and the California Institute of Technology think they've found a better way. If their experiments bear fruit, the results could revolutionize the renewable energy landscape. [...] Photoanodes are key to this procedure.

"The job of the photoanode is to absorb sunlight and then use that energy to oxidize water — essentially splitting apart the H2O molecule and rearranging the atoms to form a fuel. And because this photoanode material needs to have the right sunlight absorption and catalytic properties, they're very rare," explained Gregoire.

In fact, photoanodes are so rare that in the last 40 years, scientists have only been able to find 16 of them.

[...] Gregoire and his colleagues have come up with a new way to hunt for the catalysts, however, and it's much more effective. In two years, the scientists have already pinpointed 12 new photoanodes.

The technique used to identify the photoanodes uses a combination of theory and practice — the scientists worked with a supercomputer and a database of around 60,000 materials, and used quantum mechanics to predict the properties of each material. They then selected the ones that seemed most promising as photoanodes and used experiments to determine whether their calculations were right.

"What's special about what we have been doing is that it's a fully integrated approach," said Jeffrey Neaton, a physics professor with the University of California, Berkeley, and director of the Molecular Foundry. "We come up with candidates based on first-principle calculations, then measure the properties of the candidates to understand whether the criteria we used to select them are valid. The supercomputer comes in because the whole database we're starting with has about 60,000 compounds — we don't want to end up doing calculations on all 60,000."

This technique gives scientists a road map for finding catalysts and eventually using them to create solar fuel. The final product, Gregoire said, would look something like a solar panel and involve three components: the photoanode, a photocathode, which forms the fuel, and a membrane that separates the two.


Original Submission

posted by martyb on Wednesday March 15 2017, @01:39PM   Printer-friendly
from the It's-the-end^W-beginning-of-the-world-as-we-know-it? dept.

Researchers have demonstrated that an enzyme-free metabolic pathway using sulfate radicals can mirror the Krebs cycle:

A set of biochemical processes crucial to cellular life on Earth could have originated in chemical reactions taking place on the early Earth four billion years ago, believes a group of scientists from the Francis Crick Institute and the University of Cambridge. The researchers have demonstrated a network of chemical reactions in the lab which mimic the important Krebs cycle present in living organisms today. In a study published in the journal Nature Ecology and Evolution, they say it could explain an important step in how life developed on Earth.

[...] One central metabolic pathway learned by every A-level biology student is the Krebs cycle. But how did this essential set of chemical reactions, each step catalyzed by an enzyme, first arise? Each step in the cycle is not enough by itself. Life needs a sequence of these reactions, and it would have needed it before biological enzymes were around: Amino acids, the molecular components of enzymes, are made from products of the Krebs cycle.

The research group from the Francis Crick Institute and the University of Cambridge say their demonstration offers an answer. They have shown an enzyme-free metabolic pathway that mirrors the Krebs cycle. It is sparked by particles called sulphate radicals under conditions similar to those on Earth four billion years ago. Senior author Dr Markus Ralser of the Francis Crick Institute and University of Cambridge explains: "This non-enzymatic precursor of the Krebs cycle that we have demonstrated forms spontaneously, is biologically sensible and efficient. It could have helped ignite life four billion years ago."

Found at ScienceDaily.

Sulfate radicals enable a non-enzymatic Krebs cycle precursor (open, DOI: 10.1038/s41559-017-0083) (DX)


Original Submission

posted by martyb on Wednesday March 15 2017, @12:11PM   Printer-friendly
from the follow-the-money dept.

In a 53-14 vote that took place days ago, South Dakota's legislative House passed legislation that makes arrest booking photos public records. The measure, which cleared the state's Senate in January, will be signed by Governor Dennis Daugaard.

With that signature on Senate Bill 25, (PDF) South Dakota becomes the 49th state requiring mug shots to be public records. The only other state in the union where they're not public records is Louisiana.

The South Dakota measure is certain to provide fresh material for the online mug shot business racket. These questionable sites post mug shots, often in a bid to embarrass people in hopes of getting them to pay hundreds of dollars to have their photos removed. The exposé I did on this for Wired found that some mug shot site operators had a symbiotic relationship with reputation management firms that charge for mug shot removals.

[...] The law allows for the release of mug shots, even including those of minors, for those arrested for various felonies. The law also allows agencies to refuse to hand over booking photos that are more than six months old. Agencies are entitled to recover costs "to provide or reproduce" mug shots.


Original Submission

posted by martyb on Wednesday March 15 2017, @10:37AM   Printer-friendly
from the with-a-90dB-horn? dept.

I have been getting calls that immediately start with "Thank you for choosing Marriott Hotels!" for a couple of years now. The message goes on to say how I am getting this great offer because I am a valued customer. On a couple of occasions, I stayed on the line to get a human; they ask yes/no questions (are you over 28? do you have a valid credit card?). I just replied with questions of my own, and they immediately hung up. I can continue to ignore the calls, but they are always from a random local number, and I get nearly twice as many of these calls as I get legitimate calls.

I did a search and found this has been around for a while and Marriott is aware:
http://news.marriott.com/2015/05/marriott-international-responds-to-continued-phone-scam-updated-oct-20-2015/

I have deliberated about posting, but I don't see the FCC [US Federal Communications Commission] as being able to act unless I can provide them something more than the spoofed phone number. Providing the number(s) probably won't help as they are spoofing the caller ID. I know that this is a long shot, but is there anything anyone can suggest beyond creating a spreadsheet of phone numbers, dates, and times to log these calls? Would that even be useful?

It seems that something is fundamentally broken in the current phone system if this kind of spoofing is even possible. But that is a side topic; the real question is, what can I do, if anything, to get the data the FCC would need to shut this down?


Original Submission

posted by martyb on Wednesday March 15 2017, @09:03AM   Printer-friendly
from the Topping-Off-the-Mop-Tops dept.

Wired recently published an article about the Beatles' one live album:

The Beatles' remarkable catalog includes just one official live album, and the group's immense popularity made it unlistenable. The Beatles at the Hollywood Bowl, recorded in 1964 and 1965 but not released until 1977, was always a frustrating listen. Try as you might, you simply cannot hear much music above the fan-belt squeal of 10,000 Beatlemaniacs.

You can't blame the Fab Four, nor their legendary producer George Martin. Martin did what he could with the three-track tapes, but the limitations of 1970s technology did little to elevate the music above the din. Boosting the high frequencies—the snap of Ringo Starr's hi-hat, the shimmer and chime of George Harrison's guitar—only made the racket made by all those fans even louder.

All of which makes the remastered version of Live at the Hollywood Bowl especially impressive. The do-over, which coincided with the August release of Ron Howard's documentary film Eight Days a Week, squeezes astonishing clarity out of the source tapes. You can finally hear an exceptionally tight band grinding out infectious blues-based rock propelled by a driving beat, wailing guitars, and raspy vocals. This album never sounded so lucid, present, or weighty.

What makes the article interesting to geeks is how the sound engineers were able to eliminate all that spectator noise:

To get a sense of what the team at Abbey Road Studios did, imagine deconstructing a smoothie so you're left with whole strawberries, peeled bananas, and ice cubes, then mixing them again from scratch.

The process is a bit more complicated than that, but the article's detailed description is an interesting read.


Original Submission

posted by martyb on Wednesday March 15 2017, @07:29AM   Printer-friendly
from the getting-around dept.

Astronomers have observed a probable white dwarf orbiting a black hole at a distance of around a million kilometers:

The close-in stellar couple -- known as a binary -- is located in the globular cluster 47 Tucanae, a dense cluster of stars in our galaxy about 14,800 light years away from Earth. While astronomers have observed this binary for many years, it wasn't until 2015 that radio observations revealed the pair likely contains a black hole pulling material from a companion star called a white dwarf, a low-mass star that has exhausted most or all of its nuclear fuel.

New Chandra data of this system, known as X9, show that it changes in X-ray brightness in the same manner every 28 minutes, which is likely the length of time it takes the companion star to make one complete orbit around the black hole. Chandra data also shows evidence for large amounts of oxygen in the system, a characteristic of white dwarfs. A strong case can therefore be made that the companion star is a white dwarf, which would then be orbiting the black hole at only about 2.5 times the separation between Earth and the moon.

"This white dwarf is so close to the black hole that material is being pulled away from the star and dumped onto a disk of matter around the black hole before falling in," said Arash Bahramian, lead author with the University of Alberta (Canada) and MSU. "Luckily for this star, we don't think it will follow this path into oblivion, but instead will stay in orbit."

Found at Michigan State University.

The ultracompact nature of the black hole candidate X-ray binary 47 Tuc X9 (open, DOI: 10.1093/mnras/stx166) (DX)

Considering the measured orbital period (with other evidence of a white dwarf donor), and the lack of transitional millisecond pulsar features in the X-ray light curve, we suggest that this could be the first ultracompact black hole X-ray binary identified in our Galaxy.


Original Submission

posted by martyb on Wednesday March 15 2017, @05:55AM   Printer-friendly
from the tt0240900 dept.

Japanese scientists show that lazy ant workers step in to replace fatigued workers, improving colony long-term persistence.

A quick glance at an ant foraging trail or beehive shows throngs of tireless workers feeding and protecting their colonies. A closer look reveals otherwise. In fact, many ant, bee and termite workers are slackers. In some cases, four-fifths of workers appear to just rest, eat, clean themselves or walk about. The remaining workers toil hard.

Scientists have spotted lazy workers in social insects since the 1980s. Yet insect societies, similar to humans, compete on efficiency and productivity. So what explains the existence of lazy workers?

One possible explanation is that lazy workers slack to ensure the colony's survival against a wipeout of active workers, says a study published in Scientific Reports. In this study, a group of scientists at Hokkaido University and Shizuoka University in Japan found that when active ants are disabled by a rare catastrophe, the inactive ants, rested and energetic, step in to keep the colony running.

They have so much to teach us, these humble ants.


Original Submission

posted by martyb on Wednesday March 15 2017, @04:21AM   Printer-friendly
from the we-all-live-in-a-yellow-submarine dept.

The yellow submarine named Boaty McBoatface is set to leave for Antarctica this week on its first science expedition.

The robot is going to map the movement of deep waters that play a critical role in regulating Earth's climate.

Boaty carries the name that a public poll had suggested be given to the UK's future £200m polar research vessel.

The government felt this would be inappropriate and directed that the humorous moniker go on a submersible instead.

But what many people may not realise is that there is actually more than one Boaty. The name covers a trio of vehicles in the new Autosub Long Range class of underwater robots developed at Southampton's National Oceanography Centre (NOC).

These machines can all be configured slightly differently depending on the science tasks they are given.

The one that will initiate the "adventures of Boaty" will head out of Punta Arenas, Chile, on Friday aboard Britain's current polar ship, the RRS James Clark Ross.

The JCR will drop the sub into a narrow, jagged, 3,500m-deep gap in an underwater ridge that extends northeast of the Antarctic Peninsula. Referred to as the Orkney Passage, this is the gateway into the Atlantic for much of the "bottom-water" that is created as sea-ice grows on the margins of the White Continent.

[...] The Dynamics of the Orkney Passage Outflow (DynOPO) expedition is a collaboration between BAS, the University of Southampton and NOC.

[Ed note: Emphasis copied verbatim from the original source.]


Original Submission

posted by Fnord666 on Wednesday March 15 2017, @02:47AM   Printer-friendly
from the beats-sanding dept.

A new method for 3D printed surface smoothing wastes less material while achieving better accuracy:

Waseda University researchers have developed a process to dramatically improve the quality of 3D printed resin products. The process combines greatly improved surface texture and higher structural rigidity with lower cost, less complexity, safer use of solvent chemicals and elimination of troublesome waste dust.

[...] The Waseda researchers developed and tested a method called 3D Chemical Melting Finishing (3D-CMF), which uses a tool like a felt-tip pen to selectively apply solvent to particular parts of the printed piece which require smoothing. The new 3D-CMF method has major advantages over previous methods: removing less material to create less waste and achieve more accurate shaping; and using less solvent for better safety and lower cost. In addition, pen tips can be changed to further increase surface shaping accuracy. These improvements promise to move 3D printing into a much more attractive commercial position, as a realistic possibility for in-home consumer use.

Development of the Improving Process for the 3D Printed Structure (open, DOI: 10.1038/srep39852) (DX)


Original Submission

posted by on Wednesday March 15 2017, @01:06AM   Printer-friendly
from the able-to-solve-the-travelling-salesman-problem-in-just-6-years dept.

Google, NASA, and Universities Space Research Association (USRA) run a joint research lab called the Quantum Artificial Intelligence Laboratory (QuAIL). That partnership has used a 512-qubit D-Wave Two quantum annealer, upgraded to the 1,152-qubit D-Wave 2x, and is now upgrading again to the company's latest D-Wave 2000Q system (2048 qubits):

Google, NASA, and the USRA are now buying the latest generation D-Wave quantum computer, as well, to further explore its potential. The new D-Wave 2000Q is not just up to 1,000 times faster than the previous generation, but it also has better controls, allowing QuAIL to tweak it for its algorithms. QuAIL is now looking at developing machine learning algorithms that can take advantage of D-Wave's latest quantum annealing computer.

[...] D-Wave also announced that it will help the Virginia Polytechnic Institute and State University (Virginia Tech) establish a quantum computing research center for defense and intelligence purposes. D-Wave's role will be to aid the Virginia Tech staff in developing applications and software tools for its quantum annealing computers. [...] Because D-Wave is not a universal quantum computer, like what Google and IBM plan to build over the next few years, it is not expected to be useful in cracking encryption. Virginia Tech plans to also focus on developing machine learning algorithms for the D-Wave computers.
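For context, quantum annealers like D-Wave's machines are built to minimize the energy of QUBO (quadratic unconstrained binary optimization) problems. The snippet below is a tiny classical brute-force illustration of that problem formulation, with made-up coefficients; it is not related to D-Wave's actual programming interface:

from itertools import product

# Toy QUBO: Q[(i, j)] are coefficients; diagonal entries are linear biases.
Q = {(0, 0): -1.0, (1, 1): -1.0, (2, 2): 2.0,
     (0, 1): 2.0, (1, 2): -1.5}

def energy(bits):
    return sum(coeff * bits[i] * bits[j] for (i, j), coeff in Q.items())

# Exhaustively check all 2^3 assignments; an annealer searches this kind of
# energy landscape for much larger problems.
best = min(product((0, 1), repeat=3), key=energy)
print("lowest-energy assignment:", best, "energy:", energy(best))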

Previously: Trees Are the New Cats: D-Wave Used for Machine Vision


Original Submission