As a reminder, our question is “How can we increase speech input from the environment in order to aid language skills development in people with Down syndrome?” The solution we chose to move forward with is to build an AI that tracks a child’s language development and speech intake via a microphone on a wearable device. The product would inform caregivers of the most effective ways to speak to their child in order to build language skills. We chose this solution because it increases speech input and facilitates conversation practice between child and caregiver, even at home, and because it is completely non-invasive, relying only on a small wearable device.

We chose to make a paper prototype that is a hybrid of a storyboard and a product blueprint. The prototype outlines our process for making the wearable device ready for sale, including what the physical ear cuff will look like and a diagram of the screens of the companion app, which gives caregivers the data, information, and suggestions they need to increase communication between child and caregiver. It also includes beginner’s instructions for navigating the device’s basic functions, along with more advanced instructions for users who have mastered the basics.

The steps we will take are outlined below.

  1. Contact a hardware company that specializes in wearable technology and ask them to design a wearable ear cuff that has a microphone and can transmit data over a cellular connection.
  2. Apply for FCC (Federal Communications Commission) approval of the device.
  3. Contact researchers from MIT's Computer Science and Artificial Intelligence Laboratory (CSAIL) and Institute of Medical Engineering and Science (IMES) who have been working with an AI wearable system that can predict a conversation’s tone based on one’s speaking patterns.
  4. Contact linguists and psychologists who specialize in language skill development to ask what aspects of the speech environment are important for strong language skill development and what aspects are usually missing from one’s speech environment.
  5. Contact a software company that specializes in language analysis (e.g., Google or Facebook). The company would provide AI software that analyzes speech input and flags aspects of the speech environment that could be changed to improve language skill acquisition, focusing on the features of speech recommended by the linguists, psychologists, and MIT researchers. The company would also create an online application where users can log in to see their speech environment data and the suggestions made by the AI.
  6. Have a “training” phase in which the model receives speech input from many environments. This would entail having a cohort of people with Down syndrome use a prototype of the device to provide such data. This phase is necessary to “teach” the model how to make effective suggestions for increasing speech input.
  7. Conduct a research study in which the principal investigator and company test the efficacy of the device. The study will consist of an experimental group (kids with Down syndrome who use the device every day) and a control group (kids with Down syndrome who do NOT have access to the device). The study will ideally last 5-6 months to assess both short- and long-term effects on language skill development. We can then begin distributing the device to the general public, as users will feel comfortable and confident using something that has been thoroughly tested.
  8. Have the respective companies refine the software and hardware components and reach out to a marketing consulting firm to help us market the device.
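To make step 5 more concrete, here is a minimal sketch of the kind of speech-environment analysis the software might run on transcribed audio. Everything here is an assumption: the `speech_environment_metrics` function, the `(speaker, text)` transcript format, and the specific metrics (adult word count, lexical diversity, conversational turns) are illustrative stand-ins for whatever measures the linguists and psychologists actually recommend.

```python
def speech_environment_metrics(utterances):
    """Compute simple speech-environment metrics from a transcript.

    `utterances` is a list of (speaker, text) pairs, where speaker is
    "adult" or "child". The metrics below are hypothetical examples of
    what the AI could report to caregivers.
    """
    adult_words = []
    child_utterances = 0
    turns = 0
    prev_speaker = None
    for speaker, text in utterances:
        words = text.lower().split()
        if speaker == "adult":
            adult_words.extend(words)
        else:
            child_utterances += 1
        # Count a conversational turn each time the speaker changes.
        if prev_speaker is not None and speaker != prev_speaker:
            turns += 1
        prev_speaker = speaker
    unique = len(set(adult_words))
    return {
        "adult_word_count": len(adult_words),
        "adult_unique_words": unique,
        "adult_lexical_diversity": unique / len(adult_words) if adult_words else 0.0,
        "child_utterances": child_utterances,
        "conversational_turns": turns,
    }

# Tiny made-up transcript for illustration.
sample = [
    ("adult", "Look at the big red ball"),
    ("child", "ball"),
    ("adult", "Yes a ball Can you roll the ball"),
    ("child", "roll ball"),
]
print(speech_environment_metrics(sample))
```

A real system would of course work from automatic speech recognition output rather than hand-typed transcripts, but the reporting layer could stay this simple.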
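For step 7, the efficacy analysis could be as simple as comparing mean language-skill gains between the two groups. This sketch uses Welch's t-test (which does not assume equal variances) on hypothetical assessment scores; the function name and the example numbers are ours, and the real study would use a standardized language assessment chosen with the researchers.

```python
import math
from statistics import mean, variance

def welch_t(sample_a, sample_b):
    """Welch's t-statistic and degrees of freedom for two independent
    samples of language-skill gain scores (device group vs. control)."""
    na, nb = len(sample_a), len(sample_b)
    va, vb = variance(sample_a), variance(sample_b)  # sample variances
    se2 = va / na + vb / nb
    t = (mean(sample_a) - mean(sample_b)) / math.sqrt(se2)
    # Welch-Satterthwaite approximation for degrees of freedom.
    df = se2 ** 2 / ((va / na) ** 2 / (na - 1) + (vb / nb) ** 2 / (nb - 1))
    return t, df

# Hypothetical gain scores after the 5-6 month study period.
device_gains = [5, 7, 6, 8]
control_gains = [2, 3, 2, 3]
t, df = welch_t(device_gains, control_gains)
```

A large positive t with these (made-up) numbers would suggest the device group improved more than the control group; the real analysis would also need a pre-registered significance threshold and a sample-size calculation.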