Predicting the Behavior and Health of Individuals: Why Do Brain Models Fail?



With the help of machine learning, scientists have come to better understand how the brain gives rise to complex human characteristics by identifying patterns of brain activity linked to traits such as impulsivity, abilities such as working memory, and illnesses such as depression. These techniques let scientists build models of those links, which can then be used to predict human behavior and health.

This works, however, only if the models apply to everyone, and prior research has demonstrated that they do not: some people simply do not fit the models.

In a study recently published in the journal Nature, researchers from Yale University examined whom these models tend to fail, why that happens, and what can be done to correct it.

Abigail Greene, an M.D.-Ph.D. student at Yale School of Medicine and the study's principal investigator, says that to be most useful, models must apply to any given individual.

If this kind of work is to be applied in a clinical setting, for instance, she explained, the model must be applicable to the patient sitting in front of the clinician.

Greene and her colleagues are considering two strategies that they believe could make models more precise in psychiatric characterization. The first is to classify patient populations more precisely. Schizophrenia, for instance, can be diagnosed on the basis of a wide range of symptoms that differ substantially from person to person. A better understanding of the neural underpinnings of schizophrenia, including its symptoms and subtypes, would let researchers group patients in more accurate ways.

Second, some traits, such as impulsivity, occur across a range of conditions. Understanding the neural basis of impulsivity could help clinicians treat that symptom more effectively, regardless of the underlying diagnosis.

Greene added that these advances would affect how patients respond to treatment: the better treatments can be tailored to subsets of individuals, whether or not they share a diagnosis, the more effective those treatments will be.

To investigate model failure, Greene and her colleagues first built models that use patterns of brain activity to predict how well a person will perform on a range of cognitive tests. When tested, the models correctly predicted how most individuals would score. For some people, however, they were wrong, incorrectly predicting that individuals would score poorly when they actually scored well, and vice versa.
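
As a rough, hypothetical sketch of this kind of analysis, not the study's actual code or data, the snippet below predicts simulated cognitive test scores from simulated functional-connectivity features with ridge regression, then flags the individuals whose predicted high/low standing is wrong:

```python
# Minimal sketch: predict cognitive scores from connectivity features,
# then flag misclassified individuals. All data here are simulated
# stand-ins; the study's methods and measures differ in detail.
import numpy as np
from sklearn.linear_model import Ridge
from sklearn.model_selection import cross_val_predict

rng = np.random.default_rng(0)
n_subjects, n_edges = 300, 1000
X = rng.standard_normal((n_subjects, n_edges))  # stand-in connectomes
scores = X[:, :10].sum(axis=1) + rng.standard_normal(n_subjects)  # stand-in scores

# Cross-validation so each subject's prediction is made out of sample
predicted = cross_val_predict(Ridge(alpha=10.0), X, scores, cv=10)

# Split subjects into high/low scorers around the median; a mismatch
# between actual and predicted standing marks a misclassification
actual_high = scores > np.median(scores)
predicted_high = predicted > np.median(predicted)
misclassified = actual_high != predicted_high
print(f"Misclassified {misclassified.sum()} of {n_subjects} subjects")
```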

The research team next looked at which individuals the models misclassified.

The same individuals were consistently misclassified across tasks and across studies, Greene found. And the misclassified individuals in one dataset shared characteristics with the misclassified individuals in another. Being misclassified, in other words, means something.

They next investigated whether there was something different about those people's brains that would explain these shared misclassifications. There were no discernible differences. Instead, they found that misclassification was linked to clinical characteristics, such as symptom severity, and to sociodemographic factors, such as age and education.
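
Continuing the toy sketch above, one simple way to ask that question is to test whether the misclassification flag is associated with such variables. The covariates below are simulated stand-ins, and the study's actual statistical analysis may differ:

```python
# Sketch: is being misclassified associated with clinical or
# sociodemographic variables? (Reuses `rng`, `n_subjects`, and
# `misclassified` from the snippet above; covariates are simulated.)
import numpy as np
import statsmodels.api as sm

age = rng.normal(40, 12, n_subjects)
education_years = rng.normal(14, 3, n_subjects)
symptom_severity = rng.normal(0, 1, n_subjects)

exog = sm.add_constant(np.column_stack([age, education_years, symptom_severity]))
result = sm.Logit(misclassified.astype(int), exog).fit(disp=0)
print(result.summary(xname=["const", "age", "education", "severity"]))
```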

Ultimately, they concluded that the models were not reflecting cognitive ability alone. According to Greene, they were actually reflecting more complex "profiles" that combined cognitive ability with a range of sociodemographic and clinical variables.

And the models failed anyone who did not fit that stereotypical profile, she added.

For instance, one of the models used in the study associated more education with higher scores on cognitive tests. Less educated people who nevertheless performed well did not fit the model's profile, so it incorrectly predicted them to be low scorers.

The problem was complicated by the fact that the models had no direct access to sociodemographic information.

The sociodemographic variables, Greene explained, are embedded in the cognitive test score itself. In essence, biases in how cognitive tests are designed, administered, scored, and interpreted can shape the results. And bias is a problem in other fields as well: research has shown, for instance, how bias in input data affects models used in criminal justice and health care.

The test scores, then, are composites of a person's cognitive ability and these other factors, and the model predicts the composite, Greene explained. That means researchers need to think more carefully about what a given test actually measures, and therefore what a model built on it actually predicts.
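
A toy example with assumed numbers makes the composite problem concrete: if the observed score blends cognition with education, even a model that predicts the score perfectly will misjudge someone whose cognition and education point in opposite directions:

```python
# Toy illustration (weights and values assumed): the observed test score
# blends true cognitive ability with education, so the prediction target
# is a composite rather than cognition alone.
cognition = 1.0        # strong cognitive ability
education = -1.5       # well below-average schooling
observed_score = cognition + 0.8 * education
print(observed_score)  # -0.2: ranked as a "low scorer" despite strong cognition
```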

The study's authors offer several recommendations for mitigating the problem. During study design, they advise, scientists should use strategies that minimize bias and should choose the most reliable measures available. And after data collection, researchers should apply statistical approaches that correct for the stereotypical profiles that remain.
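
As one illustration of such a post-collection correction, the sketch below continues the earlier simulated example (the study's own statistical approach may differ): it regresses the measured covariates out of the test scores, so that the model is trained on a covariate-adjusted target.

```python
# Sketch of confound regression: remove the covariates' linear
# contribution from the scores before model training. (Reuses `X`,
# `scores`, and the covariates from the snippets above.)
from sklearn.linear_model import LinearRegression

confounds = np.column_stack([age, education_years, symptom_severity])
adjusted = scores - LinearRegression().fit(confounds, scores).predict(confounds)

# Note: for a rigorous analysis, fit the confound model within each
# training fold to avoid leaking information from held-out subjects.
predicted_adjusted = cross_val_predict(Ridge(alpha=10.0), X, adjusted, cv=10)
```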

Taking these steps, the researchers say, will yield models that better capture the cognitive construct under study. But they note that completely eliminating bias is unlikely, and that remaining bias should be taken into account when interpreting model results. Furthermore, for some measures, it may prove necessary to use multiple models.

Todd Constable, professor of radiology and biomedical imaging at Yale School of Medicine and the study's senior author, predicted that different models may one day simply be required for different populations. "One model won't work for everyone."

By YALE UNIVERSITY 
