Sunday, 27 November 2016

Neuroscientists Develop AI-Based Method for Hacking Fear

Imagine a library of fear. Lining its aisles are not photo albums of snakes and plummeting aircraft, but volumes of cross-sectional scans of human brains: the familiar fMRI images of folded grey matter lit up with red-yellow flames of neural activity. In the fear library, these flames vary from volume to volume and from fear to fear. Different fears have unique, consistent signatures in the brain, which is what allows them to be cataloged so neatly.

This is a real thing: Distinct fears have distinct signatures. They can be identified by recurring patterns of activity. This is what allows for our library, but it also implies something else: that we might be able to use these patterns to manipulate fears.

This appears to indeed be the case, according to research published Monday in the journal Nature Human Behaviour courtesy of UCLA cognitive neuroscientist Hakwan Lau and colleagues at Columbia University and the Nara Institute of Science and Technology in Japan. Fears can be erased, much as they are in aversion therapy—where phobias are conditioned away through exposure to the subject of the phobia, an often unpleasant process—but without ever having to conjure the fear itself.

Image: Lau et al

The process behind the technique is called "decoded neurofeedback." In the group's setup, fears are first created by giving study participants electric shocks while at the same time presenting them with an image of a colored vertical line. Over time, the specific color of line corresponding to the shocks becomes "scary," and the associated brain activity is recorded. This activity is then fed into an artificial intelligence visual recognition algorithm, which abstracts a pattern.
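The article doesn't spell out the decoding pipeline, so the following is only a rough sketch of the general idea: a pattern classifier trained to tell the shock-paired line from the safe one using voxel activity. The data, the array shapes, and the choice of a logistic-regression decoder (via scikit-learn) are all illustrative assumptions, not the study's actual method.

# Illustrative sketch only: train a decoder that recognizes the "feared"
# stimulus from voxel activity. Shapes and variable names are made up;
# the study's real pipeline is not described at this level in the article.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
n_trials, n_voxels = 200, 500                  # stand-in dimensions
X = rng.normal(size=(n_trials, n_voxels))      # voxel activity, one row per trial
y = rng.integers(0, 2, size=n_trials)          # 1 = shock-paired ("scary") line, 0 = other line

decoder = LogisticRegression(penalty="l2", max_iter=1000)
print("cross-validated accuracy:", cross_val_score(decoder, X, y, cv=5).mean())
decoder.fit(X, y)                              # decoder reused for later monitoring

With random data the accuracy hovers around chance, which is the point: the decoder only becomes useful once it is trained on real, fear-conditioned brain activity.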

The AI algorithm is then used to monitor further brain activity for traces of that same pattern, which appear in fragmented form even while the participant is resting and not being subjected to the fear stimulus (the colored line).

Once the algorithm has acquired its fear patterns, they can be used to deprogram the specific fears by associating positive rewards with the fear-associated colored lines. "Whenever your brain is representing or 'thinking about' the red line, one of the scary things, we [tell the subject], 'Congratulations, you won 10 cents,'" Lau explained in an interview. "So, now whenever the red thing happens, instead of being paired with the electric shock, now it's associated with a positive monetary reward."
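Purely as an illustration of that closed loop, the sketch below scores each new resting-state sample with the decoder trained above and hands out a small reward whenever the fear pattern seems to be present. The read_next_volume and give_reward functions, the probability threshold, and the 10-cent reward size are hypothetical placeholders for the scanner interface and the on-screen payout.

# Illustrative neurofeedback loop, assuming the decoder from the earlier sketch.
def run_feedback_block(decoder, read_next_volume, give_reward,
                       n_volumes=100, threshold=0.6):
    earnings = 0.0
    for _ in range(n_volumes):
        pattern = read_next_volume()                        # one resting-state activity sample
        p_scary = decoder.predict_proba(pattern.reshape(1, -1))[0, 1]
        if p_scary > threshold:                             # fragment of the fear pattern detected
            give_reward(0.10)                               # "Congratulations, you won 10 cents"
            earnings += 0.10
    return earnings

Notice that the loop never shows the participant the feared line; the reward is keyed to the decoded pattern alone.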

"With this procedure we can condition out the fear," he said.

This sounds like pretty standard psychology, but the key thing is that the subject never has to consciously think about the scary thing for it to work. The recognition algorithm just has to see a fleeting fragment of the fear memory to trigger a reconditioning stimulus (passing out money). After the subjects were "reprogrammed," they were again shown images of the once-scary lines. Their fearfulness, as measured by standard skin-conductance (sweat) fear responses, had diminished.

The obvious catch is that the original fear's neural signature has to be acquired at some point for the AI to recognize it and trigger the reprogramming reward events. This brings us back to our fear library. What Lau and colleagues have found is that these fear patterns are actually shared across many different people. There appear to be general fear fingerprints corresponding to different phobias that occur across populations. Opening the volume for arachnophobia in our library would then reveal just a single neural pattern. As it turns out, fears can be inferred from fMRI scans with up to 80 percent accuracy. This part of the group's research is so far unpublished, but Lau expects that a new paper will be out within a few months.

"Using other people, we can infer what your brain's spider pattern will be with up to 80 percent [accuracy]."

"The idea is that you see dogs, cats, oranges, and butterflies, and then I see the same thing, " Lau explains. "Oranges, dogs, cats, and butterflies. There's a way to calibrate our brains' patterns in the same space. Once our brains are calibrated, you don't have to see spiders anymore [to capture the associated brain activity]. I can go see spiders. I can watch spiders for hours. Then, I know my pattern for spiders, and I can infer your spider pattern. It sounds sci-fi but it can be done, and it can be done for more than one brain. Using other people, we can infer what your brain's spider pattern will be with up to 80 percent [accuracy]."

Lau assures us that this doesn't work the other way. We can't insert a fear or negative association into the brain through this method, only positive associations. Fears can be subtracted but not added. So, the consequences of getting it wrong are only that the patient might wind up with an artificial positive association.

There are some remaining barriers before we might see this in a clinical setting. For one thing, it's uncertain how long the reprogramming lasts. There's also a well-known phenomenon in PTSD in which fears can re-emerge as individuals encounter new contexts. If, say, someone gets rid of their fear of cars following a car accident, they may re-experience PTSD fear if they return to the location of the accident. This may happen here as well, which will only be revealed when similar experiments are run that swap simple colored lines for richer real-world stimuli.



from Neuroscientists Develop AI-Based Method for Hacking Fear
