Machine learning gives glimpse of how a dog's brain represents what it sees



Scientists have decoded visual images from a dog's brain, offering a first look at how the canine mind reconstructs what it sees. The Emory University study was published in the Journal of Visualized Experiments.

The findings suggest that dogs are more attuned to actions in their environment than to who or what is performing those actions.

The researchers recorded fMRI brain data from two awake, unrestrained dogs as they watched videos in three 30-minute sessions, for a total of 90 minutes. They then used a machine-learning algorithm to analyze the patterns in the brain data.

"We demonstrated that we can monitor the activity in a dog's brain while it is watching a video and, at least to a limited degree, reconstruct what it is looking at," says Gregory Berns, an Emory professor of psychology and the paper's corresponding author. "It's remarkable that we are able to do that."

The project was inspired by recent advances in using fMRI and machine learning to decode visual stimuli from the human brain, work that has provided fresh insights into the nature of perception. Beyond humans, the technique has been applied to only a handful of other species, including some primates.

"While our work is based on just two dogs, it shows proof of concept that these methods work on canines," says Erin Phillips, the paper's first author, who did the work as a research specialist in Berns' Canine Cognitive Neuroscience Lab. "I hope this publication paves the way for other researchers to apply these techniques to dogs, as well as to other species, so we can gather more data and bigger insights into how the minds of different animals work."

Phillips, a native of Scotland, came to Emory as a Bobby Jones Scholar, part of an exchange program between Emory and the University of St. Andrews. She is now a graduate student in ecology and evolutionary biology at Princeton University.

Berns and colleagues pioneered training methods for getting dogs to walk into an fMRI scanner and hold completely still and unrestrained while their neural activity is measured. Ten years ago, his team published the first fMRI brain scans of an awake, unrestrained dog. That opened the door to what Berns calls the Dog Project, a series of experiments exploring the mind of the oldest domesticated species.

Over the years, his lab has published studies on how the canine brain processes sights, sounds, odors, and rewards such as praise or food.

In the meantime, the technology behind machine-learning algorithms kept advancing, allowing scientists to decode some patterns of human brain activity. The technology "reads minds" by detecting, within the brain-data patterns, the different objects or actions that a person is seeing while watching a video.

"I started to consider whether we could use comparable methods on dogs." Berns cites.

The first challenge was to create video content that a dog might find engaging enough to watch for an extended period. The Emory research team mounted a video recorder on a gimbal and selfie stick so they could shoot steady footage from a dog's-eye view, at about waist height to a human or a little lower.

They used the rig to record a half-hour video of scenes from the everyday lives of most dogs. Activities included dogs being petted by people and receiving treats from them; dogs sniffing, playing, eating, and walking on a leash; and people sitting, hugging or kissing, offering a rubber bone or a ball to the camera, and eating, as well as cars, bikes, and a scooter passing by on a road.

The video data was segmented by time stamps into various classifiers, including object-based classifiers (such as dog, car, human, and cat) and action-based classifiers (such as sniffing, playing, or eating).
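To make the idea concrete, here is a minimal, hypothetical Python sketch of this kind of time-stamped labeling; the scene boundaries, label names, and one-second granularity are invented for illustration, not taken from the study.

```python
# Hypothetical sketch: turning time-stamped video annotations into
# per-second object and action labels. All names, time ranges, and the
# one-second granularity are invented for illustration.
from dataclasses import dataclass

@dataclass
class Annotation:
    start: float    # scene start, seconds into the video
    end: float      # scene end, seconds into the video
    objects: list   # object-based labels, e.g. ["dog", "human"]
    actions: list   # action-based labels, e.g. ["playing"]

annotations = [
    Annotation(0.0, 12.5, ["dog", "human"], ["playing"]),
    Annotation(12.5, 30.0, ["dog"], ["sniffing"]),
    Annotation(30.0, 47.0, ["car"], []),
]

def labels_at(t: float) -> tuple:
    """Return the (objects, actions) labels active at time t."""
    for a in annotations:
        if a.start <= t < a.end:
            return a.objects, a.actions
    return [], []

# One labeled row per second of video, ready to align with brain data
# that carries matching time stamps.
label_table = [(t, *labels_at(t)) for t in range(47)]
print(label_table[:3])
```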

Only two of the dogs that had been trained for fMRI experiments had the attention span and temperament to lie perfectly still and watch the 30-minute video without a break, across three sessions totaling 90 minutes. These two "super star" canines were Daisy, a mixed breed who may be part Boston terrier, and Bhubo, a mixed breed who may be part boxer.

Phillips, who monitored the dogs during the fMRI sessions and watched their eyes tracking the video, notes that they didn't even need treats. She found it amusing that so much serious science, time, and effort came down to dogs watching videos of other dogs and people acting a bit silly.

The same experiment was performed on two humans, who watched the same 30-minute video in three separate sessions while lying in an fMRI scanner.

Using the time stamps, the brain data could then be mapped onto the video classifiers.
The data was analyzed using Ivis, a machine-learning technique based on a neural network. In machine learning of this kind, a computer learns by analyzing training examples; in this case, the neural network was trained to classify the content of the brain data.
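Ivis is an open-source Python library that learns low-dimensional embeddings of high-dimensional data with a Siamese neural network. As a rough, hypothetical sketch of how brain data could be run through such a tool, the example below embeds synthetic "fMRI" features with Ivis and then decodes action labels from the embedding; the data, shapes, hyperparameters, and nearest-neighbor step are assumptions, not the study's actual pipeline.

```python
# Loose sketch of a decoding pipeline built around the ivis library.
# The synthetic data, shapes, hyperparameters, and the kNN decoding step
# are assumptions for illustration, not the authors' actual analysis.
import numpy as np
from ivis import Ivis                        # pip install ivis
from sklearn.model_selection import train_test_split
from sklearn.neighbors import KNeighborsClassifier

# Pretend data: one flattened fMRI volume per time point, plus the
# action label active at that time stamp (from the video classifiers).
rng = np.random.default_rng(0)
X = rng.normal(size=(1800, 5000)).astype("float32")  # time points x voxels
y = rng.integers(0, 3, size=1800)                    # e.g. sniff/play/eat

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, random_state=0)

# Ivis learns a low-dimensional embedding with a Siamese neural network.
model = Ivis(embedding_dims=2, k=15)
Z_train = model.fit_transform(X_train)
Z_test = model.transform(X_test)

# A simple classifier on the embedding stands in for the decoding step.
clf = KNeighborsClassifier(n_neighbors=5).fit(Z_train, y_train)
print("decoding accuracy:", clf.score(Z_test, y_test))
```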

For the two human subjects, the results showed that the neural-network model mapped the brain data onto both the object- and action-based classifiers with 99% accuracy.

When decoding video content from the dogs, the model did not work for the object-based classifiers. It was, however, 75% to 88% accurate at decoding the action classifiers.

These findings suggest major differences in how the human and canine brains work.

According to Berns, "We humans are tremendously object oriented." "The English language has ten times as many nouns as verbs because we are particularly obsessed with naming things. Dogs seem to be more focused on the action than the person or thing that they are witnessing."

Berns notes that dogs and humans also have significant differences in their visual systems. Dogs see only in shades of blue and yellow, but they have a somewhat higher density of motion-detecting vision receptors than humans do.

It "makes perfect sense," Berns says, that dogs' brains would be highly attuned to actions first and foremost. "To avoid being eaten, or to keep an eye on potential prey, animals must pay close attention to what is going on in their environment. Movement and action are crucial."

For Phillips, understanding how different animals perceive the world is important to her current field research on how predator reintroduction in Mozambique may affect ecosystems. She notes that there has historically been little overlap between ecology and computer science, but that machine learning is a growing field that is beginning to find broader applications, including in ecology.

Additional authors on the paper are Daniel Dilks, an associate professor of psychology at Emory, and Kirsten Gillette, who worked on the project as an Emory undergraduate majoring in neuroscience and behavioral biology. Since graduating, Gillette has enrolled in a postbaccalaureate program at the University of North Carolina.

Emory University
