Speech signals play a major role in human communication. They convey not only the words a person says but also their emotions, combining verbal and non-verbal traits that shape how a person communicates.
As humans, we spend our lives surrounded by other humans learning these subtle cues. Many of us can tell the difference between someone who is feeling fine and someone who is just saying it to cover feelings of anxiety or stress.
In healthcare, there are many situations where picking up on more subtle speech signals is vital to assess a person’s wellbeing and, in some cases, language and speech signals can even play a role in early diagnosis of certain neurological diseases such as Alzheimer’s disease or Parkinson’s disease.
This is what the University of Sheffield’s Dr Heidi Christensen is most interested in. A senior lecturer in computer science, she researches the application of AI-based voice technologies to healthcare. She was recently named a winner in the 2021 FDM everywoman in Technology Awards.
“The speech signal is hugely interesting to me. It carries so much information, not just about what a person is wanting to communicate but also about the current state this person is in,” she said.
“Transient things like emotions they may be feeling as well as more long-term things such as if they’re feeling depressed or anxious, or whether they’re at the early stages of neurodegenerative conditions like Parkinson’s or dementia.”
‘We need to make sure that the data we use is representative of the diversity we see in the population’
– DR HEIDI CHRISTENSEN
Christensen wants to explore how computers can be used to detect these subtle speech cues, which could help doctors with early detection, improved treatment plans and could even provide better training for psychotherapists or teachers.
“At the core of it is programming algorithms to process the speech signal and learn the patterns in there using machine learning,” she said.
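The pattern-learning idea Christensen describes can be sketched in miniature. The following is an illustrative, hypothetical Python example and does not come from her actual systems: it extracts two crude acoustic features, zero-crossing rate (a rough proxy for pitch and frication) and RMS energy (loudness), then labels a signal with a simple nearest-centroid rule. Real clinical systems use far richer features and learned models.

```python
import math

def zero_crossing_rate(signal):
    # Fraction of adjacent sample pairs where the waveform changes sign
    crossings = sum(1 for a, b in zip(signal, signal[1:]) if (a >= 0) != (b >= 0))
    return crossings / (len(signal) - 1)

def rms_energy(signal):
    # Root-mean-square amplitude: a rough loudness measure
    return math.sqrt(sum(x * x for x in signal) / len(signal))

def extract_features(signal):
    return (zero_crossing_rate(signal), rms_energy(signal))

def nearest_centroid(features, centroids):
    # centroids: {label: feature tuple}; pick the label whose
    # centroid is closest in Euclidean distance
    return min(centroids, key=lambda label: math.dist(features, centroids[label]))

# Synthetic "recordings": a loud low-frequency tone and a quiet high-frequency one
slow = [math.sin(2 * math.pi * 5 * t / 1000) for t in range(1000)]
fast = [0.3 * math.sin(2 * math.pi * 50 * t / 1000) for t in range(1000)]
centroids = {"low-pitched": extract_features(slow),
             "high-pitched": extract_features(fast)}

# A new signal similar to the high-frequency one is classified accordingly
probe = [0.28 * math.sin(2 * math.pi * 48 * t / 1000) for t in range(1000)]
label = nearest_centroid(extract_features(probe), centroids)
```

In practice, hand-built rules like this give way to classifiers trained on labelled speech data, but the pipeline shape is the same: signal in, features out, decision from learned patterns.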
“But, if we are to develop systems and tools with real impact, it is really important that we also involve our stakeholders right from the start: we need to understand their concerns and experiences and, as engineers, design solutions that address their needs. For my research, this means understanding the needs of patients and their families as well as doctors and healthcare professionals.”
A recent research paper Christensen worked on involved a fully automated cognitive screening tool based on assessment of speech and language. The team recruited participants with Alzheimer’s disease (AD), mild cognitive impairment (MCI), functional memory disorder (FMD) and a control group.
Participants responded to 12 questions posed by a computer-presented talking head. The team found that the automated tool could distinguish between participants in the AD or MCI groups and those in the FMD or control groups with a sensitivity of more than 86pc.
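For context, the sensitivity figure reported above is the proportion of true cases (here, the AD/MCI group) that the tool correctly flags. A minimal sketch of how sensitivity and its counterpart, specificity, are computed from binary predictions; the labels and numbers below are made up for illustration and are not from the study:

```python
def sensitivity_specificity(y_true, y_pred, positive=1):
    # sensitivity = TP / (TP + FN): share of actual positives caught
    # specificity = TN / (TN + FP): share of actual negatives cleared
    tp = sum(t == positive and p == positive for t, p in zip(y_true, y_pred))
    fn = sum(t == positive and p != positive for t, p in zip(y_true, y_pred))
    tn = sum(t != positive and p != positive for t, p in zip(y_true, y_pred))
    fp = sum(t != positive and p == positive for t, p in zip(y_true, y_pred))
    return tp / (tp + fn), tn / (tn + fp)

# Hypothetical screening results: 1 = AD/MCI group, 0 = FMD/control group
truth = [1, 1, 1, 1, 1, 1, 1, 0, 0, 0, 0, 0]
preds = [1, 1, 1, 1, 1, 1, 0, 0, 0, 0, 0, 1]
sens, spec = sensitivity_specificity(truth, preds)  # sens = 6/7, spec = 4/5
```

High sensitivity matters for a screening tool because the cost of missing an early-stage case is typically higher than the cost of a follow-up assessment for a false positive.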
In the broader area of using technology in communication, last week saw a massive breakthrough when researchers at Stanford University successfully implanted a brain-computer interface that converted a person’s thoughts about writing words into text.
But while ground-breaking research such as this is impressive, it’s not without its challenges. “Current successful state-of-the-art algorithms depend on having access to large amounts of data,” said Christensen.
“In the healthcare domain, it can be difficult for researchers to collaborate and share data, and it will take a long time to build up the sorts of sizes of datasets you see in more mainstream speech technology areas.”
Finding the balance between using healthcare data for research while protecting patient privacy is an ongoing dilemma for both researchers and clinicians alike.
Bioethicist Marielle Gross previously spoke to Siliconrepublic.com about the intentional barriers set up between clinical research and clinical practice.
“The ideal is that everything I learn, I share with my colleagues and everything they learn, they share with me and we all have that synergistic benefit. But that means, in a sense, treating all patients like research subjects, at least in some regard,” she said.
Christensen also highlighted another regularly cited problem when it comes to bringing AI systems into healthcare – the concerns around structural bias.
“We need to make sure that the data we use is representative of the diversity we see in the population as a whole, not just those signing up to take part in study trials,” she said.
Women in engineering
Outside of her interest in voice tech within healthcare, Christensen is also a passionate advocate for bringing more women into the engineering sector.
She said one of the most important elements of encouraging more women into STEM is to ensure they have role models. “Everyone needs to be able to imagine themselves in a particular career before choosing it: if you don’t see anyone that looks like you, then we know that that’s a major barrier,” she said.
“We have a moral responsibility to make sure that people in STEM working on solving the world’s problems are themselves representative of the world as a whole.”
Christensen has been actively involved with trying to make the university in which she works a more inclusive and equal place for staff and students.
“We try and make sure that everyone feels supported and respected and that everyone feels a sense of belonging and that they can be themselves. We have made great progress, but there is still a lot to do.”
The post How speech signal technology could be used to advance healthcare appeared first on Silicon Republic.