A quick refresher: We are trying to find a new solution for language acquisition disorders in young individuals using a musical approach. We propose using machine learning to find the areas of the brain affected by language disorders and using different aspects of music to help target those areas.
Our Original Big Question: How can we tackle language acquisition disorders among young individuals (infants, toddlers) with a musical approach?
Receiving feedback from the class was extremely useful. It helped us reevaluate and readjust our proposed moonshot.
Below is a summary of the feedback we received:
- Many people suggested narrowing down our focus to just one language disorder. This will allow us to dive deeply into that disorder.
- Some suggested different approaches to the machine learning aspect and were a little concerned about the technology component and how it would work.
- Most people felt our question was moonshot material
- Many people thought it was really interesting that there was a connection between music and language learning
- Interested in our focus being specifically on children
- Impressed by computational and technological aspects
- A few people felt we should think a little bigger
Some of the common threads in those criticisms included:
- Suggested looking into child language development lab(?) at Penn and see if they are doing/have ever done anything with music
- Narrowing down to one specific language disorder
The criticisms and feedback made us rethink our question. They led us to discuss and decide on narrowing our focus to one particular language disorder. We feel this will allow us to build a more in-depth understanding of how one language condition works, with the hope of mastering it and then applying that knowledge to additional language disorders and delays, if possible. Narrowing our focus to one language disorder will also give us some common ground among patients, which can serve as a control of sorts.
Our Revised Big Question: Based on this feedback, we revised our big question and our approach to answering it.
How can we tackle Specific Language Impairment (SLI) among young individuals (infants, toddlers) using a musical approach?
We want to find a new solution for SLI in young individuals using a musical approach. We will take NIRS data on young individuals in both participation-based and listening-based music classes and compare that data to machine learning data based on what specific musical elements target which areas of the brain.
Failing Forward: As a group, we brainstormed many ideas about possible solutions to our big questions/problem. Below is a list detailing 6 of our failed ideas.
1. Music class: listening based (with ERP)
- Didn’t pick because: ERP works best in experiments that examine the response to one specific stimulus, whereas in our experiment we want to see how young individuals respond to many different stimuli.
- ERP recording requires subjects to stay still and quiet, so the study could only be listening based; however, we would expect young individuals to sing along or respond verbally to the music, and it is simply very hard to keep kids from making any sound.
- ERP also makes it hard to tell where exactly a signal comes from (poor spatial localization).
2. Early exposure for higher risk babies (genetic)
- Compare with predisposed control subjects and see who develops SLI.
- Didn’t pick because it is very hard to know beforehand whether a baby will develop SLI, even if they are genetically predisposed - and it would be expensive.
3. Octave-based music? Thirteen-note-based music? Based on language and cultural background?
- Didn’t pick because we don’t know which cultures listen to which types of music, or how to design different curricula for the various types of scales; there end up being too many factors to know what is going on.
4. Music treatment based on the “type” (i.e., where the brain is affected) of SLI, if there are subsets of the disorder?
- Didn’t pick because we do not know that there are specific, commonly irregular areas of the brain across individuals with SLI. This could be problematic: if we define subsets of SLI and then treatments for each, there could be individuals who fall outside the established subsets, on whom the corresponding treatment does not work.
5. Compare 4 year olds with SLI and 4 year olds without SLI
- Didn’t pick because it is just a minor idea that is not big enough to constitute our solution. Rather, we may just use this as one aspect of our solution to gather the best and most accurate data on the disorder to then create the most encompassing solution.
6. Treatment once data is gathered → from home, or in a lab/office?
- App with children data embedded
- Similar to a few of the other non-selected ideas, this one is not large enough to stand on its own. It is not essential to solving the problem, but could be added once our solution has been tried and perfected. We would first need to find out whether home treatment is even feasible given the equipment/technology needed.
Our Two Best Ideas:
1. Music class with NIRS - participation & listening based (singing & instruments)
- It is financially wise, considering that our budget is limited and we want to test a large enough sample to get sufficient data for our computational model.
- NIRS doesn’t make any noise, so we can be sure that the young individuals in our experiment are actually responding to the music rather than to random noise.
- Subjects can move around, which is great since our test subjects are toddlers
- It can be used with babies
Why music class:
- The setting of a music class makes it easier for young individuals to relax and provide brain data similar to real-life situations.
- It enables us to see how they respond behaviorally and neurobiologically to different musical elements.
- It makes it easier for us to build connections between music pieces and the different musical elements involved in each piece.
2. Computational Machine Learning Model:
How it works:
a) Input:
- brain data for each individual (the specific regions with impairment)
- corresponding “music” data for each individual (for each musical element, whether or not the individual responds to it)
b) Goals:
- To find the specific correspondence between brain activity and musical elements/types.
- To find out which musical elements each music piece used in our experiment is composed of.
- To find a specific therapy for each individual with SLI by deriving the best combination of musical elements they should be exposed to, according to our model.
- For any particular individual with SLI, to provide the musical therapy suggested by our model.
Why computational machine learning model:
- It uses all the data we gather in an optimal way.
- It helps reveal connections in the data that would otherwise be very hard to find.
- It gives us the best music therapy we can currently provide to any young individual in our experiment. Once we start applying that therapy and gathering more data, we can feed the new data back into the model and obtain more accurate predictions along the way.
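To make the input-to-goals mapping above concrete, here is a minimal sketch in Python. Everything in it is a hypothetical assumption for illustration - the region and element names, the data format, and the simple co-occurrence scoring - and not our actual model, which would be trained on real NIRS and music-class data.

```python
# Hypothetical sketch of the model's shape: learn which musical elements
# individuals with a given impaired brain region respond to, then recommend
# the highest-scoring elements as a therapy mix. All names are placeholders.

REGIONS = ["broca", "wernicke", "auditory_cortex"]
ELEMENTS = ["rhythm", "melody", "harmony", "lyrics"]

def train(brain_data, music_data):
    """Learn, per region, how often impaired individuals respond to each element.

    brain_data: list of dicts, region -> impairment (0 = none, 1 = impaired)
    music_data: list of dicts, element -> response (0 = none, 1 = responded)
    Returns weights[region][element] = average response among individuals
    impaired in that region (a simple co-occurrence statistic).
    """
    weights = {r: {e: 0.0 for e in ELEMENTS} for r in REGIONS}
    counts = {r: 0 for r in REGIONS}
    for brain, music in zip(brain_data, music_data):
        for r in REGIONS:
            if brain[r] > 0.5:              # this individual is impaired in r
                counts[r] += 1
                for e in ELEMENTS:
                    weights[r][e] += music[e]
    for r in REGIONS:                        # normalize sums into averages
        if counts[r]:
            for e in ELEMENTS:
                weights[r][e] /= counts[r]
    return weights

def recommend(weights, brain, top_k=2):
    """Score each element over the individual's impaired regions and return
    the top_k elements as the suggested therapy combination."""
    scores = {e: sum(weights[r][e] for r in REGIONS if brain[r] > 0.5)
              for e in ELEMENTS}
    return sorted(scores, key=scores.get, reverse=True)[:top_k]
```

As new individuals go through therapy, their response data can be appended to `brain_data`/`music_data` and the weights retrained - the feedback loop described above.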
In addition to these two ideas, we are also interested in looking into whether exposure to different music types can ameliorate SLI for children of a different native language. Perhaps listening to music in other (non-native or non-fluent) languages can reach speech/language areas of the brain that the individual is not otherwise exposed to, and ultimately help reduce or remove SLI symptoms. This idea reminded us a bit of the Jusczyk head-turn experiment, which demonstrated that babies can distinguish phonemes in non-native languages that become indistinguishable once they reach 12 months of age. Perhaps exposure to non-native music types can help children with SLI through a similar paradigm.