from the psychedelic dept.
The Guardian is reporting that Google is trying to understand how its image-recognition neural network works by feeding in random noise, telling the network to look for certain features, and then feeding the resulting image back in. Apart from anything else, some of the images generated are astounding.
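The loop described above — start from noise, nudge the image so a chosen feature responds more strongly, then repeat on the result — is the core of Deep Dream's gradient-ascent idea. Here is a minimal toy sketch of that feedback loop in NumPy; the linear "activation" and stripe "feature" are illustrative stand-ins, not Google's actual network:

```python
import numpy as np

def feature_activation(image, feature_filter):
    # Toy stand-in for a network layer's activation:
    # the image's dot-product response to a feature filter.
    return float(np.sum(image * feature_filter))

def deep_dream_step(image, feature_filter, step_size=0.01):
    # Gradient ascent on the activation. For this linear toy
    # activation the gradient w.r.t. the image is the filter
    # itself, so each step nudges the image toward the pattern
    # the filter responds to.
    grad = feature_filter
    image = image + step_size * grad / (np.abs(grad).max() + 1e-8)
    return np.clip(image, 0.0, 1.0)

rng = np.random.default_rng(0)
image = rng.random((8, 8))      # start from random noise
target = np.zeros((8, 8))
target[::2, :] = 1.0            # the "feature": horizontal stripes

before = feature_activation(image, target)
for _ in range(50):             # feed the result back in, repeatedly
    image = deep_dream_step(image, target)
after = feature_activation(image, target)
# The activation rises as the noise drifts toward the stripe pattern.
```

In the real system the activation comes from a deep layer of a trained convolutional network and the gradient is computed by backpropagation, but the amplify-and-feed-back structure is the same.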
Link to original Google research article.
The researchers used a modified version of Deep Dream to process a panoramic video of the university campus. Then they showed it to 12 volunteers, finding that the visual hallucinations were similar to those brought on by psilocybin, the active ingredient in magic mushrooms.
The volunteers were asked questions like whether they felt a loss of control or a loss of their sense of self, and whether they saw patterns and colours. Their answers matched up closely with the results of a 2013 study [open, DOI: 10.1523/JNEUROSCI.2063-13.2013] [DX] into the experience of taking psilocybin.
In a second experiment, 22 participants were asked whether they felt any sense of temporal distortion, or a warped sense of time. In this case the responses were similar to those recorded after watching control videos.
That would seem to suggest the researchers' machine can replicate some, but not all, of the effects of being high on psychedelic drugs. However, only a few volunteers have been tested so far, and they were a different group from those quizzed on psilocybin back in 2013.
This is just the beginning for the technology – the system is very flexible and can be tweaked in all kinds of ways. In the future, participants could even get to adjust the parameters of the experience themselves.
With better hardware, the algorithms could be run in real time and applied to an augmented reality view instead of a pre-recorded panoramic video.
Also at Newsweek.
A Deep-Dream Virtual Reality Platform for Studying Altered Perceptual Phenomenology (open, DOI: 10.1038/s41598-017-16316-2) (DX)
Google is demonstrating music created using machine learning techniques. It has previously made psychedelic art:
It's a long way to Carnegie Hall, but we bet that Google researchers are already thinking of the day when they can send a robot or AI to play an interesting, improvised piano performance in a major venue.
While that's not the stated end goal of Magenta, a new project from the Google Brain team, it's certainly a possibility. The entire premise of Magenta is built around two simple questions: Can machines make art? And can machines make music? And, dare we say it, there's also an unstated third question: Can machines make either art or music that's any good?
We'll let you judge the last one. Here's the first piece of music from Google's machine-learning system. It's only 90 seconds long, but it's at least an early demonstration of Magenta's capabilities.
As artificial intelligence (AI) allows machines to become more like humans, will they experience similar psychological quirks such as hallucinations or depression? And might this be a good thing?
Last month, New York University in New York City hosted a symposium called Canonical Computations in Brains and Machines, where neuroscientists and AI experts discussed overlaps in the way humans and machines think. Zachary Mainen, a neuroscientist at the Champalimaud Centre for the Unknown, a neuroscience and cancer research institute in Lisbon, speculated [36m video] that we might expect an intelligent machine to suffer some of the same mental problems people do.
[...] Q: Why do you think AIs might get depressed and hallucinate?
A: I'm drawing on the field of computational psychiatry, which assumes we can learn about a patient who's depressed or hallucinating from studying AI algorithms like reinforcement learning. If you reverse the arrow, why wouldn't an AI be subject to the sort of things that go wrong with patients?
Q: Might the mechanism be the same as it is in humans?
A: Depression and hallucinations appear to depend on a chemical in the brain called serotonin. It may be that serotonin is just a biological quirk. But if serotonin is helping solve a more general problem for intelligent systems, then machines might implement a similar function, and if serotonin goes wrong in humans, the equivalent in a machine could also go wrong.
Related: Do Androids Dream of Electric Sheep?