Bimodal bilinguals know both a signed and a spoken language (hence two "modes" of language). It has been found that bimodal bilinguals have words in both modalities activated when watching only sign language or reading text from a spoken language. The study reviewed here expands on this body of work by testing whether sign language words are also activated when bilinguals hear words from a spoken language.

Previous studies show that unimodal bilinguals (i.e. individuals who know two spoken languages or two signed languages) have words in both languages co-activated even when only one language is in use. [1] showed that when Russian-English bilinguals saw four pictures (a marker, a postage stamp, and two others) and were instructed to click on the "marker", they looked at the pictures of the marker and the stamp more than at the other two pictures on the screen. The Russian word for stamp is "marka", which shares sound features with the English word "marker". Strikingly, even though all instructions were given in English, participants still looked at the stamp more, meaning that the Russian word for stamp was activated by the shared sound features despite the study being conducted entirely in English. Thus, it is argued that both languages are active in the brain, regardless of the language in which input is presented.

The current study [2] seeks to expand on this finding by looking at bimodal bilinguals. The authors aim to replicate findings like those of [1] by testing whether use of a spoken language can also activate the speaker's knowledge of their signed language.

The authors presented subjects with multiple trials in which four pictures were shown (figure 1). The four pictures represented English words that share no sound structure (e.g. "cheese", "paper", "watch", "stamp"). However, two of these words share sign structure in ASL, defined by the features of a sign: handshape, location, movement, and orientation. For example, the signs for cheese and paper share 3 of these 4 features. These features are sign languages' equivalent of sounds in spoken languages.

On each trial, the four images were presented and all participants were asked, in spoken English, to click on the "target" word, which shared ASL sign features with a "competitor" word. For example (figure 1), "cheese" would be the target and "paper" the competitor. These were presented along with two distractor words that shared neither sound structure in spoken English nor sign structure in ASL with the target (e.g. "stamp" and "watch", figure 1). The authors then tracked the eye gaze of ASL-English bilinguals and English monolinguals as they viewed these stimuli.
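To make the trial structure concrete, here is a minimal Python sketch of how one trial's four pictures and their ASL sign parameters could be represented. The specific feature values are invented for illustration and are not the phonological coding used in [2]; only the overall pattern (competitor overlaps with the target on 3 of 4 parameters, distractors on none) follows the design described above.

```python
# Illustrative sketch of one trial's stimuli. The parameter values below are
# simplified stand-ins, NOT the actual phonological coding from [2].

ASL_PARAMETERS = ("handshape", "location", "movement", "orientation")

# Hypothetical feature descriptions for each picture's ASL sign.
signs = {
    "cheese": {"handshape": "flat-B", "location": "neutral space",
               "movement": "twist", "orientation": "palms together"},
    "paper":  {"handshape": "flat-B", "location": "neutral space",
               "movement": "brush", "orientation": "palms together"},
    "stamp":  {"handshape": "H", "location": "non-dominant palm",
               "movement": "tap", "orientation": "palm down"},
    "watch":  {"handshape": "F", "location": "wrist",
               "movement": "contact", "orientation": "palm down"},
}

def shared_parameters(word_a: str, word_b: str) -> int:
    """Count how many of the four ASL parameters two signs share."""
    return sum(signs[word_a][p] == signs[word_b][p] for p in ASL_PARAMETERS)

# The competitor overlaps with the target on 3 of 4 parameters,
# while the distractors overlap on none.
print(shared_parameters("cheese", "paper"))   # 3 -> competitor
print(shared_parameters("cheese", "stamp"))   # 0 -> distractor
print(shared_parameters("cheese", "watch"))   # 0 -> distractor
```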

They reasoned as follows: the English sounds of the spoken word activate the target word and its meaning in the brain. If bimodal bilinguals look at the competitor picture more than at the two unrelated pictures, then the competitor word must have been activated too. The only way the competitor could be activated more than the distractors is if the target word (or its meaning) activated its ASL counterpart, and this sign in turn activated the competitor's sign (since the two signs share features in ASL, just as "marker" and "marka" share sounds in [1]). See the black arrows in figure 2 for details. English monolinguals, by contrast, should look at the competitor picture no more than at the two distractor pictures, since the words are unrelated in English sound.
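As a rough sketch of this prediction, one could compare the proportion of looks to the competitor against the average proportion for the two distractors in each group. The numbers below are invented placeholders, not data from [2]; only the direction of the predicted difference is taken from the text above.

```python
# Illustrative sketch of the competitor-vs-distractor comparison.
# The fixation proportions are made-up placeholder numbers, NOT results
# from [2]; they only show the form the predicted pattern would take.

def competitor_advantage(fixations: dict) -> float:
    """Proportion of looks to the competitor minus the mean proportion
    of looks to the two distractors."""
    distractor_mean = (fixations["distractor_1"] + fixations["distractor_2"]) / 2
    return fixations["competitor"] - distractor_mean

# Hypothetical group-level fixation proportions (looks to the target excluded).
bimodal_bilinguals = {"competitor": 0.20, "distractor_1": 0.10, "distractor_2": 0.12}
english_monolinguals = {"competitor": 0.11, "distractor_1": 0.10, "distractor_2": 0.12}

print(competitor_advantage(bimodal_bilinguals))    # > 0: competitor draws extra looks
print(competitor_advantage(english_monolinguals))  # ~ 0: no competitor effect
```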

The authors found, as predicted, that bimodal bilinguals looked at the competitor significantly more than at the distractors, whereas monolinguals looked at the competitor about as often as at the distractors. This difference appeared not only in the number of looks but also in the time spent looking at each picture. The implication is that the sounds of an English word activated its ASL translation, which in turn activated ASL signs with similar sign structure. This extends the findings of [1] to bimodal bilinguals.

One remaining ambiguity is what exactly activates the ASL word. Either the English word activates it directly (figure 2, blue line), or the English word first activates its meaning, and this mental representation of the meaning then activates the ASL word (figure 2, red line).
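The difficulty is that both routes end in the same observable behavior. The toy sketch below spells this out; it is a schematic illustration of the ambiguity, not a model proposed in [2], and the node labels are invented for illustration.

```python
# Schematic sketch of the two candidate activation routes in figure 2.
# A toy illustration of the ambiguity, not a model from [2].

links_direct = [                      # blue line: word-to-word route
    ("English 'cheese'", "ASL CHEESE"),
    ("ASL CHEESE", "ASL PAPER"),      # via shared sign features
]

links_via_meaning = [                 # red line: word-to-meaning-to-word route
    ("English 'cheese'", "meaning <cheese>"),
    ("meaning <cheese>", "ASL CHEESE"),
    ("ASL CHEESE", "ASL PAPER"),
]

def endpoint(links):
    """Follow a chain of activation links from its first node to its last."""
    node = links[0][0]
    for src, dst in links:
        assert src == node
        node = dst
    return node

# Both routes end with the competitor sign activated, so eye movements
# alone cannot tell them apart.
print(endpoint(links_direct))       # ASL PAPER
print(endpoint(links_via_meaning))  # ASL PAPER
```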

This study could lead to interesting findings in neuroscience. Since sign languages rely on visual input whereas spoken languages rely on auditory input, the two must use different input pathways in the brain. However, this co-activation of words and meanings across both languages may indicate that words and meanings are stored in the same place in the brain. If so, comparing the brain activation patterns of these bimodal bilinguals with those of spoken-language and sign-language monolinguals might identify an area of the brain responsible for words and meaning. However, this assumes that sign language word features and spoken language word sounds activate separate locations in the brain, which may well not be the case. The question also remains whether it is the meaning of the spoken target word, or the spoken target word itself, that activates the signed word.

Co-activation of words across modalities could not have been easily discovered with current neuroscientific methods. This study therefore underscores the importance of behavioral research for understanding how language is processed in the brain. Further research in this area, specifically work that disambiguates whether this co-activation occurs through the words themselves or through their meanings, would help inform and guide future neuroscientific research into how these processes occur.

Figure 1: What the participant sees on the screen
Figure 2: The participant hears sounds (green). This activates the word "cheese" in the brain, which then activates the mental representation of the meaning of "cheese" (picture of the cheese). Either the word "cheese" or the mental meaning of cheese (or both) then activates the sign language word for cheese. The sign language word for cheese activates the features of how the hands form this sign. These features then activate the signed word for paper, since paper and cheese share many sign features. This in turn activates the mental meaning of paper.

[1] Marian, V., & Spivey, M. (2003). Bilingual and monolingual processing of competing lexical items. Applied Psycholinguistics, 24(2), 172-193. https://doi.org/10.1017/S0142716403000092

[2] Shook, A., & Marian, V. (2012). Bimodal bilinguals co-activate both languages during spoken comprehension. Cognition, 124(3), 314-324. https://doi.org/10.1016/j.cognition.2012.05.014