Hi, we’re Synaptax. As you may have learned from our previous blog posts, we’re looking for a way to detect neurodegenerative disorders before they become problematic, using language as both an indicator and potentially even a tool for delaying or preventing these diseases’ onset. Most of our feedback indicated that our area of focus is indeed of high potential impact, so we’re going to stick with the general concept we initially decided to pursue. A lot of our feedback also suggested that we narrow down the diseases we research and the methods through which we tackle the issue, so we’ve spent the past week thinking about specific focuses for our project and refining our idea. We decided to narrow our huge problem down to using language to specifically detect Alzheimer’s disease, and potentially to use language as a tool for delaying or preventing this disease’s onset.
While brainstorming a more specific focus, one common question we’ve thought about is how we plan to gather language data to guide our pre-diagnoses. One early idea was to collect data from the phone calls or text messages of people at an age that puts them at elevated risk for neurodegenerative disorders. However, this seemed too intrusive and also susceptible to the power of suggestion: if patients knew they were being monitored for signs of decline, the data from these monitored sessions might not be representative of their normal patterns. Another idea we considered was yearly “interviews” with elderly people to check for signs of decline. This seemed more promising in that it wouldn’t be as intrusive as phone-call monitoring, but we realized that a half hour of conversation data every year might not be enough to reliably capture the early predictors we’re hoping to detect, especially with day-to-day variation in people’s alertness due to sleep and other factors. In the same vein, we considered giving frequent assessments to those at elevated risk, such as assessments that require using language to describe pictures. This idea seems promising, as it would likely provide data relevant to cognitive decline with minimal intrusion, so we hope to explore it further in the coming weeks.
Aside from collecting data from conscious patients, we also considered collecting data from subjects who are asleep: specifically, any speech they produce while sleeping. However, we realized that sleep speech might not occur often enough to serve as a reliable predictor.
We also considered some ideas that relate more directly to treatment than to diagnosis. One interesting possibility was to psychologically reinforce proper use of language in order to prevent (and, to an extent, also detect) cognitive decline. This could take place through a noise automatically administered to a patient whenever they make a language error. However, like several other ideas we considered, we determined that this one was also too intrusive and potentially annoying or infuriating for patients. We then considered similar exercises without the immediate correction, instead pointing out mistakes at the end of each exercise. While we thought this was a step in the right direction, we determined that such exercises might not be specific enough to the individual patient, and might not be effective enough to be useful. This is when we arrived at the second idea we would like to explore further. Instead of giving every patient the same generic set of exercises, we would give each patient exercises tailored to their specific condition. We would do this by training machine learning software to take in the results of a patient’s exercises and use them to determine which exercises that patient should do during their next session.
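To give a very rough sense of the tailoring loop we have in mind, here is a minimal Python sketch. Everything in it is hypothetical (the exercise categories, the error rates, and the weighting rule are our own illustrative stand-ins, not a real trained model): categories where the patient made more errors are weighted more heavily when sampling the next session’s exercises.

```python
import random

# Hypothetical exercise categories; these names are illustrative only.
CATEGORIES = ["picture_description", "word_recall", "sentence_completion"]

def select_next_exercises(error_rates, n=3, seed=None):
    """Pick the next session's exercises, weighting categories where the
    patient made more errors so practice targets their weaker areas.

    error_rates: dict mapping category -> error rate in [0, 1]
    """
    rng = random.Random(seed)
    # A small floor keeps low-error categories from disappearing entirely.
    weights = [error_rates.get(c, 0.0) + 0.1 for c in CATEGORIES]
    return rng.choices(CATEGORIES, weights=weights, k=n)

# Example: a patient who struggles most with word recall will, on average,
# be assigned word-recall exercises more often in their next session.
session = select_next_exercises(
    {"picture_description": 0.1, "word_recall": 0.6, "sentence_completion": 0.3},
    n=3,
    seed=0,
)
```

A real version would replace the simple weighting rule with a model trained on exercise results, but the feedback structure (results in, personalized exercise plan out) would be the same.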