A new study reveals that an artificial intelligence tool can detect tiny, imperceptible facial movements in patients with severe brain injuries, suggesting some may be conscious days or even weeks before it can be confirmed through standard bedside examinations. The research, published in Nature Communications Medicine, presents a novel computer vision method that could help clinicians better identify which patients are aware, potentially leading to earlier intervention and improved care.
Researchers have long faced a profound challenge in assessing patients after a severe brain injury. In a hospital’s intensive care unit, a doctor might ask a patient who appears unconscious to perform a simple task, such as squeezing their hand or wiggling their toes. A successful response confirms a level of awareness. However, a significant portion of patients, estimated to be between 15 and 25 percent, may be conscious but physically unable to produce these larger, visible movements. This condition, known as covert consciousness or cognitive-motor dissociation, creates a diagnostic gray area where a patient who can hear and understand is mistakenly classified as unresponsive. This misclassification can have serious consequences for treatment decisions and rehabilitation efforts.
Existing advanced techniques to detect covert consciousness, like specialized brain imaging or electroencephalography, are not always practical for continuous bedside monitoring. The scientists behind this new work hypothesized that the earliest signs of returning consciousness might appear as very small, low-amplitude movements in the face. Because the face has a complex network of small muscles and a large representation in the brain’s cortex, it presented a promising area to search for these subtle signs of volition.
To test this idea, a team of neurosurgery researchers at Stony Brook University developed and evaluated a computer vision tool they named SeeMe. The study involved 37 adult patients who were in a coma following an acute brain injury from causes like trauma or hemorrhage, as well as a comparison group of 16 healthy individuals.
For the patients, the research team conducted daily sessions when medically appropriate. They would first pause any sedating medications for a short period to allow for the most accurate assessment. A camera was positioned at the foot of the patient’s bed, focused on their face. The patient wore disposable headphones through which three recorded commands were played: “Open your eyes,” “Stick out your tongue,” and “Show me a smile.” Each command was repeated ten times during a session. The SeeMe algorithm then analyzed the video recordings.
The system works by identifying and tracking thousands of microscopic points on the face, such as individual pores, creating a detailed vector map of facial movement. By comparing facial movements during the quiet baseline period to the period immediately following a command, the software could quantify a response even if it was completely invisible to the human eye.
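The baseline-versus-command comparison can be illustrated with a minimal sketch. This is a hypothetical simplification, not the SeeMe pipeline: the actual system tracks thousands of facial points in video, whereas here we assume per-frame movement magnitudes have already been extracted (e.g., the mean length of tracked-point motion vectors in each frame) and flag a response when the post-command average rises significantly above the quiet baseline:

```python
import numpy as np

def quantify_response(baseline_mags, post_mags, z_threshold=3.0):
    """Flag a response when the mean movement after a command exceeds the
    baseline mean by more than `z_threshold` standard errors.
    Inputs are 1-D arrays of per-frame displacement magnitudes."""
    mu, sigma = baseline_mags.mean(), baseline_mags.std()
    sem = max(sigma / np.sqrt(len(post_mags)), 1e-9)
    z = (post_mags.mean() - mu) / sem
    return z > z_threshold, z

# Synthetic illustration: a tiny but consistent increase after the command,
# far too small to be obvious frame-by-frame (arbitrary units).
rng = np.random.default_rng(0)
baseline = rng.normal(0.10, 0.02, size=300)  # quiet pre-command period
post = rng.normal(0.13, 0.02, size=300)      # subtle post-command increase
responded, z = quantify_response(baseline, post)
print(responded)
```

The point of the sketch is that aggregating many frames makes a sub-visible shift statistically detectable, which is the same logic that lets the software "see" movements invisible to a human observer.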
To determine how SeeMe’s performance compared to human observation, the researchers had two trained medical students, who were unaware of the patients’ clinical status or SeeMe’s findings, review the same video recordings. These blinded raters were tasked with identifying any visible responses to the commands. The team also compared SeeMe’s results to the patients’ official clinical assessments recorded by the hospital staff, specifically the Glasgow Coma Scale for eye-opening and the Coma Recovery Scale-Revised for command following involving the mouth.
The findings showed that SeeMe was substantially more sensitive than human observers. Across all sessions, the algorithm identified an average of 5.4 responsive movements out of 10 commands, while the blinded human raters detected only 2.8. More importantly, the tool detected signs of consciousness significantly earlier than standard clinical examinations. Among the 30 patients in whom SeeMe detected any movement, it found a response before clinicians did in more than half of the cases. Specifically for eye movements, SeeMe detected a response to the command “Open your eyes” an average of 4.1 days earlier than doctors documented eye-opening in response to voice.
For mouth movements, which signal a higher level of cognitive function known as command following, SeeMe detected a response an average of 8.3 days before it was observed by clinicians. For instance, one patient who was in a coma after a car accident showed stimulus-evoked mouth movements on SeeMe on day 18 after his injury, but he was not documented as following motor commands clinically until day 37.
The researchers also found a strong link between the movements detected by SeeMe and patients’ eventual outcomes. The magnitude and frequency of the tiny facial movements measured by the system correlated positively with a patient’s functional status at the time of hospital discharge. Patients with larger and more consistent facial responses on SeeMe were more likely to have regained consciousness and to have a better overall outcome.
To confirm that these subtle movements were genuine attempts to follow commands rather than random spasms or noise, the team employed a second artificial intelligence classifier. This deep neural network was trained on the facial movement data and was able to predict, with 65 percent overall accuracy, which of the three commands had been given based solely on the pattern of the patient’s facial response. This suggests that the detected movements were specific and intentional.
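To see why above-chance decoding implies command-specific movement, consider a much simpler stand-in for the paper's deep network: a nearest-centroid classifier on hypothetical per-trial features (movement energy in the eye, mouth, and cheek regions; all names and data here are invented for illustration). If each command excites a slightly different region, even this toy decoder beats the 33 percent chance level:

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical class means: each command excites one facial region more
# (rows: "open eyes", "tongue", "smile"; cols: eye, mouth, cheek energy).
centers = np.array([[1.0, 0.2, 0.2],
                    [0.2, 1.0, 0.2],
                    [0.2, 0.2, 1.0]])

def make_trials(n_per_class, noise=0.6):
    """Simulate noisy feature vectors for each of the three commands."""
    X = np.vstack([centers[k] + rng.normal(0, noise, (n_per_class, 3))
                   for k in range(3)])
    y = np.repeat(np.arange(3), n_per_class)
    return X, y

X_train, y_train = make_trials(100)
X_test, y_test = make_trials(50)

# Nearest-centroid decoding: assign each trial to the closest class mean.
fitted = np.vstack([X_train[y_train == k].mean(axis=0) for k in range(3)])
pred = np.argmin(((X_test[:, None] - fitted[None]) ** 2).sum(-1), axis=1)
accuracy = (pred == y_test).mean()
print(f"decoding accuracy: {accuracy:.2f}  (chance = 0.33)")
```

If the responses were random twitches, no decoder could beat chance; accuracy meaningfully above one-in-three is what licenses the study's inference that the movements were command-specific.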
The study does have some limitations. The researchers note that in a few cases, clinicians detected a response before SeeMe did, which could be explained by natural fluctuations in a patient’s arousal and alertness. The study also included patients with different types of brain injuries, which could affect their recovery paths. In addition, medical equipment like ventilators sometimes obscured the view of the patient’s mouth, making analysis difficult for those commands. The use of sedation, while necessary for patient care, also complicates the detection of motor responses. The team suggests that their tool is meant to complement, not replace, existing clinical assessments and long-term observation.
For future research, the scientists plan to conduct larger clinical trials to further validate their findings. They hope to incorporate objective measures of muscle activity to confirm the movements detected by the software. The long-term vision is to integrate SeeMe with other monitoring technologies, such as electroencephalography, to create a more comprehensive platform for tracking consciousness in the intensive care unit. Such a system could provide clinicians with a clearer, more objective picture of a patient’s inner state, helping to guide treatment plans and ensure that patients with covert consciousness are identified earlier, giving them a better opportunity for rehabilitation and recovery.
The study, “Computer vision detects covert voluntary facial movements in unresponsive brain injury patients,” was authored by Xi Cheng, Sujith Swarna, Jermaine Robertson, Nathaniel A. Cleri, Jordan R. Saadon, Chiemeka Uwakwe, Yindong Hua, Seyed Morsal Mosallami Aghili, Cassie Wang, Robert S. Kleyner, Xuwen Zheng, Ariana Forohar, John Servider, Kurt Butler, Chao Chen, Jordane Dimidschstein, Petar M. Djurić, Charles B. Mikell, and Sima Mofakham.