A brief reminder of the two solutions we put forward to our experts:

1. Using a specific musical frequency to target specific “bilingual” neurons

2. A brain-type pacemaker: physically rebooting the affected area

This is the feedback we received from one of our experts, Roy Hamilton:

1) “While it may be true that neurons have certain optimal firing frequencies (for example, neurons in different regions of the brain have different baseline cortical oscillation rates), do you have evidence that sounds played at those frequencies would actually initiate firing at the level of the cortex at that frequency. After all, sounds have to be encoded at the Organ of Corti, transduced through the auditory nerve, and make several stops at different brainstem nuclei before reaching their destination in the auditory cortex. Do you have any evidence that frequency information is reliably conserved across all of those transductions and transmissions?”

2) “Can you show me what evidence there is (I'm truly curious) that there is a cortical frequency associated with L1 or of the neurons of the language network that mediate L1.”

3) “Your brain pacemaker idea for language is broadly aligned with what my lab works on (i.e. brain stimulation), but there are so many differences between what a pacemaker does and what brain stimulation does that I'm not sure the metaphor is very apt. Is your idea to create some kind of closed-loop-system that detects impending language errors and makes corrective stimulations? This would require a ton of BCI and machine learning that seems a little sci-fi at this point in time (but perhaps not impossible).”

The feedback pushed us to research the feasibility of our solutions further, encouraging us to consider neuronal complexities we hadn’t previously thought of. Here’s how our thought process went:

1) It is difficult to prove that sounds played at those frequencies would initiate firing at the cortical level we wish to reach. This question may only be answerable through experiments, though we will continue to look for research supporting our hypothesis that it is possible. As for the conservation of frequency information across transduction and transmission, we believe the 1992 journal article by Yang et al. is evidence in favor of conservation: it examines auditory representations of acoustic signals and argues that natural speech sounds can be reconstructed from those representations with minimal information loss along the auditory pathway. An article by Hansen and O’Shea (2015) also examines the limits on information transduction through amplitude and frequency regulation of transcription factor activity, showing that frequency-encoded signals can carry information through a noisy biological pathway, though only up to a point. That article operates at the level of gene regulation, which is more micro than we hope to get, but it is helpful nonetheless for understanding how frequency-based signals can carry information in biological systems.
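To make the expert’s question about frequency conservation concrete, here is a minimal, purely illustrative Python sketch. It does not model the actual auditory pathway; each relay is just a hypothetical band-pass filter plus noise, and the sampling rate, passbands, and target frequency are made-up values. The point is only to show how one could check whether power at a target frequency survives a chain of noisy transformations.

```python
import numpy as np
from scipy.signal import butter, sosfilt

# Toy model: does a target frequency survive a chain of filtering stages?
# This is only an illustration of the "frequency conservation" question,
# not a model of the auditory pathway (cochlea, brainstem nuclei, cortex).

fs = 10_000          # sampling rate in Hz (arbitrary choice)
target_hz = 40.0     # hypothetical "target" frequency we care about
t = np.arange(0, 2.0, 1 / fs)
signal = np.sin(2 * np.pi * target_hz * t) + 0.5 * np.random.randn(t.size)

def relay(x, low_hz, high_hz, noise_sd=0.2):
    """One 'relay': a band-pass filter plus noise, a crude stand-in for a
    transduction or transmission stage along the pathway."""
    sos = butter(4, [low_hz, high_hz], btype="band", fs=fs, output="sos")
    return sosfilt(sos, x) + noise_sd * np.random.randn(x.size)

stage1 = relay(signal, 20, 2000)   # peripheral stage (hypothetical passband)
stage2 = relay(stage1, 20, 1000)   # brainstem stage (hypothetical passband)
stage3 = relay(stage2, 20, 500)    # "cortical" stage (hypothetical passband)

def power_at(x, hz):
    """Spectral power at the frequency bin closest to `hz`."""
    spectrum = np.abs(np.fft.rfft(x)) ** 2
    freqs = np.fft.rfftfreq(x.size, 1 / fs)
    return spectrum[np.argmin(np.abs(freqs - hz))]

print("power at target, input :", power_at(signal, target_hz))
print("power at target, output:", power_at(stage3, target_hz))
```

In this toy setup the 40 Hz component survives because every hypothetical passband includes it; the real question our expert raised is whether the cochlea, brainstem nuclei, and cortex behave anything like this, which only experiments or further literature can answer.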

2) https://www.ncbi.nlm.nih.gov/pmc/articles/PMC3169662/

While we still need to conduct more research on cortical frequencies specific to L1 before we can dive deeper into our solution, the article above found that L1 and L2 differ in which regions of the brain process them. In addition, Kim et al. (1997) provide evidence that the cortical areas associated with native and second languages differ with acquisition: L2 is represented next to L1, but not within it (whereas different L1s tend to overlap). This information may allow us to target the area of the superior temporal gyrus (STG) where L1 resides, rather than the L1 neurons themselves, using a specific cortical frequency.

One of the major findings of the study in the article above is that L2 words are processed more like nonword auditory stimuli: they elicited lower activation than L1 words in the superior temporal and inferior parietal regions. Using this information, we could turn our focus to the superior temporal and inferior parietal regions, with the understanding that L2 is likely processed largely within the auditory cortex; further research is needed to determine whether our frequency-based solution would actually benefit L2 speakers.

The study also touches on conduction aphasia, a condition characterized by the inability to repeat words or phrases. It found that linguistic and nonlinguistic processing, including the phonological store, can be executed in parallel, and that repetition of unfamiliar or low-frequency words depends more heavily on nonlinguistic processes. This may suggest a link between the inability to repeat words in aphasia and nonlinguistic processing.

We will conduct further research to narrow our focus specifically to L1, since it activates different parts of the brain than L2. After that, we can investigate how L1 is perceived and processed in the brain and whether there is a cortical frequency associated with it.

3) We understand that our idea to use a sort of “pacemaker” to stimulate the brain and reboot the damaged area may not be the clearest solution, since a pacemaker operates differently from brain stimulation. However, our solution isn’t about correcting impending language errors--our primary thought was to use brain stimulation to rewire this portion of the brain, with the pacemaker as a way to send signals that would ease an aphasic patient’s production of language. In short, we don’t want to correct language errors; we want to help the brain rebuild its ability to produce language.
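To clarify the distinction for ourselves, here is a purely illustrative Python sketch contrasting the closed-loop system the expert described with the open-loop, pacemaker-style stimulation we have in mind. Every class, function, and parameter is hypothetical (there is no real device or decoder behind it); the sketch only shows that our idea needs a pre-scheduled stimulation pattern rather than error detection.

```python
import time
from dataclasses import dataclass

# Purely illustrative sketch; all classes, parameters, and timings are
# hypothetical. A real device would involve hardware drivers, safety limits,
# signal processing, and clinical protocols far outside this scope.

@dataclass
class StimPulse:
    frequency_hz: float   # stimulation frequency (made-up value)
    duration_s: float     # length of each stimulation burst

def open_loop_session(pulse: StimPulse, interval_s: float, n_bursts: int) -> None:
    """Our 'pacemaker' idea: deliver a fixed, pre-scheduled stimulation
    pattern intended to support rewiring, with no error detection at all."""
    for i in range(n_bursts):
        print(f"burst {i + 1}: stimulate at {pulse.frequency_hz} Hz "
              f"for {pulse.duration_s} s")
        time.sleep(interval_s)  # wait for the next scheduled burst

def closed_loop_session(pulse: StimPulse, detect_error, n_polls: int) -> None:
    """The closed-loop system the expert described: stimulate only when an
    impending language error is predicted. `detect_error` stands in for the
    BCI / machine-learning decoder we do not have."""
    for _ in range(n_polls):
        if detect_error():
            print(f"error predicted -> corrective burst at {pulse.frequency_hz} Hz")
        time.sleep(0.1)  # poll the (hypothetical) decoder

if __name__ == "__main__":
    # Run a short open-loop session with made-up parameters.
    open_loop_session(StimPulse(frequency_hz=40.0, duration_s=1.0),
                      interval_s=0.5, n_bursts=3)
```

The design difference matters for feasibility: the open-loop version needs no BCI or machine-learning component, whereas the closed-loop version hinges entirely on the error-predicting decoder the expert described as still somewhat sci-fi.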

Overall, the feedback we received from experts has helped us delve deeper into the research we should be doing. It has helped us tune our solution and better understand what is plausible and what might not be. Taking all of the feedback into consideration, we still plan to move forward with our solution of using a brain pacemaker to reboot the affected language areas using frequency. With the help of the experts we contacted, we can see the gaps in our thinking and, with further research, address them directly.

Sources:

Kim, K. H., Relkin, N. R., Lee, K. M., & Hirsch, J. (1997). Distinct cortical areas associated with native and second languages. Nature, 388(6638), 171-174.

Hansen, A. S., & O'Shea, E. K. (2015). Limits on information transduction through amplitude and frequency regulation of transcription factor activity. eLife, 4, e06559.

Sugiura, L., Ojima, S., Matsuba-Kurita, H., Dan, I., Tsuzuki, D., Katura, T., & Hagiwara, H. (2011). Sound to language: Different cortical processing for first and second languages in elementary school children as revealed by a large-scale study using fNIRS. Cerebral Cortex, 21(10), 2374-2393.

Yang, X., Wang, K., & Shamma, S. A. (1992). Auditory representations of acoustic signals. IEEE Transactions on Information Theory, 38(2), 824-839.