Artificial intelligence isn't just good for customer service chatbots and personal assistants on your mobile; advances in the field are also helping to revolutionise scientific research.
Scientists from the Department of Energy's SLAC National Accelerator Laboratory and Stanford University have shown that a form of AI known as neural networks can accurately analyse complex distortions in spacetime a whopping ten million times faster than traditional methods.
"Analyses that typically take weeks to months to complete, that require the input of experts and that are computationally demanding, can be done by neural nets within a fraction of a second, in a fully automated way and, in principle, on a cell phone's computer chip," said postdoctoral fellow Laurence Perreault Levasseur, a co-author of a study published in Nature.
The team at the Kavli Institute for Particle Astrophysics and Cosmology (KIPAC), a joint institute of SLAC and Stanford, used the neural networks to look at images of strong gravitational lensing, where a picture of a far-flung galaxy is multiplied and distorted by the gravity of a massive object that's closer to us, such as a galaxy cluster. These distortions allow scientists to figure out how mass is distributed in space and how that distribution changes over time, both of which are properties linked to the invisible dark matter that makes up 85% of the matter in our Universe, and to dark energy.
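The scale of such lensing can be illustrated with the standard point-mass lens formula, theta_E = sqrt(4GM/c^2 * D_LS / (D_L * D_S)). The sketch below is purely illustrative and is not the analysis used in the study; the example masses and distances are hypothetical round numbers, and D_LS is approximated as D_S - D_L.

```python
import math

def einstein_radius_arcsec(mass_kg, d_lens_m, d_source_m):
    """Einstein radius (in arcseconds) of a point-mass gravitational lens.

    theta_E = sqrt(4 G M / c^2 * D_LS / (D_L * D_S)); for this toy
    example we approximate D_LS ~ D_S - D_L.
    """
    G = 6.674e-11   # gravitational constant, m^3 kg^-1 s^-2
    c = 2.998e8     # speed of light, m s^-1
    d_ls = d_source_m - d_lens_m
    theta_rad = math.sqrt(4 * G * mass_kg / c**2 * d_ls / (d_lens_m * d_source_m))
    return theta_rad * (180 / math.pi) * 3600  # radians -> arcseconds

# Hypothetical example: a 10^12 solar-mass galaxy halfway to a source 2 Gpc away.
M_SUN = 1.989e30   # kg
GPC = 3.086e25     # metres
print(round(einstein_radius_arcsec(1e12 * M_SUN, 1 * GPC, 2 * GPC), 2))  # ~2 arcsec
```

A deflection of a couple of arcseconds is why strong lenses show up as distinct multiple images or arcs in high-resolution survey images.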
Previously, neural networks have been used in astrophysics for simple applications, such as determining whether a picture showed gravitational lensing or not. But this experiment went far beyond that.
"The neural networks we tested (three publicly available neural nets and one that we developed ourselves) were able to determine the properties of each lens, including how its mass was distributed and how much it magnified the image of the background galaxy," said the study's lead author Yashar Hezaveh, a NASA Hubble postdoctoral fellow at KIPAC.
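In machine-learning terms, this is image regression: a convolutional network maps a lens image to a vector of continuous lens parameters. The sketch below is a minimal, untrained stand-in for that idea, not the networks used in the study; the image size, filter count, and three output quantities are all hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)

def conv2d(img, kern):
    """Valid-mode 2-D cross-correlation, as used in CNN layers."""
    kh, kw = kern.shape
    h, w = img.shape
    out = np.empty((h - kh + 1, w - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(img[i:i + kh, j:j + kw] * kern)
    return out

def lens_param_net(img, kernels, weights, bias):
    """Tiny CNN regressor: conv -> ReLU -> global average pool ->
    linear head emitting a vector of lens-parameter estimates."""
    feats = np.array([np.maximum(conv2d(img, k), 0).mean() for k in kernels])
    return feats @ weights + bias

# Hypothetical setup: a 32x32 lens image, 8 random 5x5 filters, and a head
# regressing 3 quantities (e.g. Einstein radius, ellipticity, orientation).
# The weights are random here; in practice they would be learned from
# many simulated lens images with known parameters.
img = rng.standard_normal((32, 32))
kernels = rng.standard_normal((8, 5, 5))
weights = rng.standard_normal((8, 3))
bias = np.zeros(3)
params = lens_param_net(img, kernels, weights, bias)
print(params.shape)  # one estimate per lens parameter: (3,)
```

Once trained, a forward pass like this is just a fixed sequence of multiplications and additions, which is why inference can run in a fraction of a second even on modest hardware.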
As our ability to peer further and further across the Universe develops, so does the volume of data we acquire, and sifting through it all becomes a monumental task.
The Large Synoptic Survey Telescope (LSST), for example, whose 3.2-gigapixel camera is currently under construction at SLAC, is expected to increase the number of known strong gravitational lenses from a few hundred today to tens of thousands.
"We won't have enough people to analyse all these data in a timely manner with the traditional methods," Perreault Levasseur said. "Neural networks will help us identify interesting objects and analyse them quickly. This will give us more time to ask the right questions about the universe."
As the name suggests, neural networks are modelled on how the human brain works, where a dense network of neurons quickly processes and analyses information.
"The amazing thing is that neural networks learn by themselves what features to look for," said KIPAC staff scientist Phil Marshall, a co-author of the paper. "This is comparable to the way small children learn to recognise objects. You don't tell them exactly what a dog is; you just show them pictures of dogs."
But in this case, Hezaveh said, "It's as if they not only picked photos of dogs from a pile of photos, but also returned information about the dogs' weight, height and age."
The scientists used the Sherlock high-performance computing cluster at the Stanford Research Computing Center for this test, but one of the neural networks they tested was designed to work on iPhones, raising the possibility that these complex deductions could be done at high speed on a scientist's mobile phone.