For the first draft of our moonshot, we decided to focus on the language skill deficits common in individuals with Down syndrome. After some research, we found that individuals with Down syndrome often receive less speech input from their environments, which results in delayed language learning. We also found that many individuals with Down syndrome have poor verbal short-term memory, which likewise delays language learning. Thus, our questions were, “How can we increase speech input from the environment?” and “How can we functionally improve verbal short-term memory?”

In terms of feedback, we received mixed reviews on our moonshot pitch. The majority agreed that our problem was too broad and should be more clearly defined and narrowed down. According to our peers, we did a great job of explaining what Down syndrome is and why our moonshot project matters; however, they felt we presented many problems associated with Down syndrome without focusing on a particular one to solve. More specifically, some peers thought we should focus on increasing environmental speech input rather than improving verbal short-term memory, because a solution to that problem could be useful in other situations and communities. Additionally, some peers asked how technology and artificial intelligence might fit into our moonshot idea.

Based on the feedback, we decided to go with our first question, “How can we increase speech input from the environment?” Most of the criticism we received asked us to narrow our scope to one specific question, and while some groups preferred question 1, others preferred question 2. Thus, rather than modifying either question, we chose the first one simply because we all immediately had more ideas for solutions to it than to the other.

We identified 8 solutions to our problem:

  1. Build an AI robot that converses with the child with Down syndrome.
  2. Build a wearable device that translates the thoughts of the child with Down syndrome into speech, so that the device says aloud what they are thinking. This could increase communication between the child and their immediate environment.
  3. Build an AI that tracks the child’s language development and speech intake via microphones and informs parents of the most effective ways to speak to their child to increase speech input.
  4. Build a wearable device that interprets non-verbal communication from others and repeats the information verbally to the child. This would increase speech input.
  5. Design a weekday camp/program specialized for children with Down syndrome that is based on conversation practice and hands-on activities.
  6. Design an online program that parents/guardians of children with Down syndrome can engage in to work on speech input and conversation practice at home.
  7. Build an assistive device that periodically reinforces new vocabulary or unfamiliar words the child has encountered by speaking them aloud to the child.
  8. Build a wearable device that translates text to speech (e.g., at school, the writing on the board would be read aloud to the child).

Our two best solutions:

  1. Build a wearable device that translates the thoughts of the child with Down syndrome into speech, so that the device says aloud what they are thinking. This could increase communication between the child and their immediate environment. How the device would translate the child’s thoughts is unknown to us so far, but we believe that voicing one’s thoughts can improve communication by giving the child a model of what communication looks like. The main result of this solution would be that, because the child is “speaking” more, more conversation would ensue, resulting in increased speech input. Hopefully, the child would eventually outgrow the device, as it could be problematic if the child learns to rely on it too heavily.
  2. Build an AI that tracks the child’s language development and speech intake via microphones and informs parents of the most effective ways to speak to their child to build language skills (a rough sketch of how this tracking might work appears after this list). We think this is a great solution because it allows for increased communication between the child and the parents. It seems like the most authentic way of increasing speech input, since no wearable assistive device is required. It would also facilitate conversation practice between child and parent, even at home.

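To make the tracking idea in solution 2 a little more concrete, here is a minimal sketch, assuming the microphone audio has already been transcribed by an off-the-shelf speech-to-text service into speaker-labeled, timestamped utterances. The `Utterance` record, the hourly word-count target, and the `summarize_for_parents` helper are hypothetical placeholders we made up for illustration, not part of any existing system.

```python
from dataclasses import dataclass
from collections import defaultdict

# Hypothetical transcript record: who spoke, the hour of day, and what was said.
@dataclass
class Utterance:
    speaker: str   # e.g. "parent", "sibling", "child"
    hour: int      # hour of day the utterance started (0-23)
    text: str

# Illustrative target only; real guidance would come from speech-language research.
TARGET_ADULT_WORDS_PER_HOUR = 1000

def count_adult_words_by_hour(utterances):
    """Count the words the child hears from others, grouped by hour of day."""
    words_per_hour = defaultdict(int)
    for u in utterances:
        if u.speaker != "child":
            words_per_hour[u.hour] += len(u.text.split())
    return words_per_hour

def summarize_for_parents(utterances):
    """Flag the hours where speech input falls below the illustrative target."""
    counts = count_adult_words_by_hour(utterances)
    quiet_hours = [h for h, n in sorted(counts.items()) if n < TARGET_ADULT_WORDS_PER_HOUR]
    return {
        "total_words_heard": sum(counts.values()),
        "quiet_hours": quiet_hours,  # candidate times of day to add conversation
    }

# Example usage with made-up data.
sample = [
    Utterance("parent", 8, "good morning, do you want toast or cereal today"),
    Utterance("child", 8, "toast"),
    Utterance("parent", 17, "how was school"),
]
print(summarize_for_parents(sample))
```

Even this toy version captures the feedback loop we imagine: measure how much speech the child actually hears, then point parents toward specific times of day where adding conversation would help most.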
We also identified 6 failed solutions. The reasons for failure are in caps:

  1. A brain-implant device that speaks with the child: COULD RESULT IN PSYCHOSIS
  2. A toy that immediately repeats everything that it heard (via microphone): PROBABLY WOULDN’T HELP THAT MUCH BECAUSE IT’S JUST REPEATING; NOT ADDING NEW SPEECH
  3. Providing each person with Down syndrome with another person to speak to them all day, every day: INFEASIBLE; A ROBOT CAN PROBABLY DO THIS MORE EFFICIENTLY; FAMILIES MIGHT NOT CONSENT TO THIS
  4. An assistive device that translates real-time conversations at a slower speed to ensure full comprehension: IT’S TRANSLATING SPEECH THAT’S ALREADY BEEN SAID, RATHER THAN INCREASING SPEECH INPUT
  5. An assistive device that instantaneously corrects any speech errors on the spot, so that communication with others isn’t impeded and the child can correct and learn from their mistakes in real-time conversations: IT MIGHT IMPEDE CONVERSATION FLOW AND LEAD TO DECREASED SPEECH INPUT, EVEN IF UNCONSCIOUSLY
  6. An assistive device that fills in errors or predicts the next words that the person with Down syndrome might say, to prevent conversation pauses: MIGHT END UP RELYING ON THIS TOO MUCH, LEADING TO DECREASED SPEECH INPUT OVER TIME