After considering the experts' feedback, we created a revised solution for our moonshot. With this revised solution, we then created a prototype (multiple prototypes for the different components); the details of this work can be found below.

Our Question:

How can we tackle Specific Language Impairment (SLI) among young individuals (infants, toddlers) using a musical approach?

Final solution we will pursue:

  1. Part 1: Experiment 1
  2. Part 2: Experiment 2
  3. Part 3: Machine learning model
  4. Part 4: Testing & Improving our solution

After hearing the feedback from our two experts, we decided to make a few changes. One of the biggest was to change our original idea of an active, participation-based language task to one that is passive and listening-based. This lets us compare that task directly to our second task, in which kids are exposed to music. Because the two activities are now set up similarly, other factors that could contribute to brain activation are taken away. The two experts we reached out to were extremely helpful in making this change.

We also want a control group for each of these tests. In our music listening stage, we will test different types of music with each of these groups to find a type of music that affects SLI patients in dramatically different ways than it affects the control group. While we have made these changes, we will still use NIRS to record our data and keep many other elements the same.

With our NIRS data, we will use machine learning, but we have adapted how we use it based on our expert feedback. At each of several checkpoints (1 week, 1 month, 3 months, and 6 months), we will input our data into the machine learning model to make it smarter and better. This way, we will build connections over time between brain patterns and both linguistic and musical behavior, and we can also generate personalized solutions for each child.

To conclude, we kept a similar solution but made a few key changes, with the help of our experts, that we think will make a big difference in our results. We especially like how our machine learning model will now improve over time, and how we can be more confident that the results in our experiment are due to the music rather than to a difference in how the two experiments are conducted. Thanks to our experts, we are confident that we now have a strong experiment.

Part 1: Experiment 1 --> Passive Listening Task

  1. The prototype is an explanation and overview of the process and steps for the initial language task. First, the subjects' (children aged 2-5) information will be entered into our database. Then, head measurements will be taken, the cap fitted to the head, and the probes properly placed. Next, the passive language task will occur; details of the task can be seen on the prototype. The data from the task will be automatically uploaded to and stored in the database, and this information will then be input into the machine learning model.
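The database step above could be sketched as follows. This is a minimal illustration, not a finalized schema: the record fields, IDs, and session layout are all hypothetical placeholders for whatever the real database ends up storing.

```python
from dataclasses import dataclass, field

# Hypothetical record layout for the subject database described above.
# Field names are illustrative, not a finalized schema.
@dataclass
class SubjectRecord:
    subject_id: str
    age_months: int               # children aged 2-5 -> roughly 24-71 months
    head_circumference_cm: float  # from the head-measurement step
    nirs_sessions: list = field(default_factory=list)

    def add_session(self, task_name, channel_data):
        """Store one NIRS recording: a task label plus per-channel time series."""
        self.nirs_sessions.append({"task": task_name, "channels": channel_data})

# Toy usage: enter one subject, then attach a (made-up) passive-task recording.
db = {}
rec = SubjectRecord("S001", age_months=36, head_circumference_cm=50.5)
rec.add_session("passive_language", [[0.1, 0.2, 0.15]])
db[rec.subject_id] = rec
```

The stored sessions are what would later be handed to the machine learning model.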

Part 2: Experiment 2 --> Music Listening Task

a. Figure out what music works best

Different options for music:

  1. Classical music (which songs?) (more repetitive?)
  2. Kids' songs with words and stories (rhyming!) ("Itsy Bitsy Spider," "Five Little Monkeys," "Baa Baa Black Sheep," "I'm a Little Teapot")
  3. Songs in other languages: Latin music?

Testing which music kids respond to most:

For the control group and the SLI patients, use NIRS to find which music creates the most brain activity, and whether there is a type of music that produces different results between the control group and the patients.
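The selection step described above can be sketched as a simple comparison: for each music type, compare mean activation between the two groups and pick the type with the largest gap. This is a toy sketch with made-up numbers; the function name and the data layout are our own illustrative choices, not part of the actual analysis pipeline.

```python
# Toy sketch: for each music type, compare mean NIRS activation between the
# control group and SLI patients, and pick the type with the largest gap.
def pick_most_discriminative(activations):
    # activations: {music_type: {"control": [values...], "sli": [values...]}}
    def mean(xs):
        return sum(xs) / len(xs)
    gaps = {music: abs(mean(groups["control"]) - mean(groups["sli"]))
            for music, groups in activations.items()}
    return max(gaps, key=gaps.get)

# Made-up activation values, purely for illustration.
toy = {
    "classical":  {"control": [0.5, 0.6], "sli": [0.45, 0.55]},
    "kids_songs": {"control": [0.7, 0.8], "sli": [0.30, 0.40]},
    "latin":      {"control": [0.6, 0.6], "sli": [0.50, 0.60]},
}
print(pick_most_discriminative(toy))  # "kids_songs" in this toy example
```

A real analysis would of course use proper statistics rather than a raw mean difference, but the idea of the comparison is the same.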

b. Use this music

1. Why?

Ultimately, we want to make improvements in the SLI patients’ speech abilities. Listening to this music routinely, combined with language tasks, may help these kids communicate better.

2. What?

We will figure out whether these kids become better, more confident speakers through regularly listening to music and through specific language tasks that target the parts of their brain that need improvement.

3. Who?

SLI patients aged 2-5

4. When?

Over the course of many months, with the kids doing the activities/listening to the music at regular weekly or biweekly intervals.

5. How?

In a listening-based music class, using NIRS to collect data.

Part 3: Machine learning model

1. Why?

To get a personalized solution for an SLI patient based on the NIRS data from the passive language task and the music listening task.

2. What?

Input: NIRS data from passive language task and music listening task

Output: a set of music that the patient should listen to in order to activate the parts of the brain that are currently not being activated during language tasks.

3. Who? (no subjects involved; just us developing the model)

4. When?

After we conduct experiments 1 and 2 for the first time

After subjects have used our solution for 1 week, 1 month, 3 months, and 6 months

5. How?

Build a first solution model (we are still deciding which modeling approach to use)

Refine the model by plugging in new data during the revising stage of our solution.
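The checkpoint schedule from our plan (1 week, 1 month, 3 months, 6 months) can be sketched as a simple retraining loop. Since the modeling approach is still undecided, `fit` below is just a placeholder that records how much data the model has seen; everything here is illustrative.

```python
# Sketch of the checkpoint schedule: at each milestone, fold the newly
# collected NIRS data back into the training set and refit the model.
CHECKPOINTS = ["1 week", "1 month", "3 months", "6 months"]

def fit(data):
    # Placeholder for whatever modeling approach we settle on;
    # here it just reports the size of the training set.
    return {"n_samples": len(data)}

def refine_over_time(initial_data, new_data_per_checkpoint):
    data, history = list(initial_data), []
    for checkpoint in CHECKPOINTS:
        data += new_data_per_checkpoint.get(checkpoint, [])
        history.append((checkpoint, fit(data)))
    return history
```

This captures the idea that the model gets "smarter" over time simply because each checkpoint adds more subject data before refitting.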