Allophones function in ASL as they do in spoken languages: variants of a phoneme can occur in free variation or in complementary distribution, while a contrastive distribution distinguishes separate phonemes.
These are subdivided into parameters: a handshape held in a particular orientation, possibly performing some type of movement, at a particular location on the body or in the "signing space", together with non-manual signals.[5][6] Other models treat movement as redundant, since it is predictable from the locations, hand orientations, and handshape features at the start and end of a sign.
The Symmetry Condition requires both hands in a symmetric two-handed sign to have the same or a mirrored configuration, orientation, and movement.[12]
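The parameter decomposition and the Symmetry Condition can be pictured with a small sketch. The Python snippet below is a deliberately simplified, hypothetical model: the field names, orientation labels, and the mirroring rule are illustrative choices rather than any standard phonological notation. It encodes one hand's parameters and checks a reduced form of the Symmetry Condition.

```python
from dataclasses import dataclass

# Hypothetical, simplified encoding of one hand's parameters in a sign.
# Field names and example values are illustrative, not a standard notation.
@dataclass(frozen=True)
class HandParameters:
    handshape: str    # e.g. "B", "5"
    orientation: str  # palm orientation, e.g. "palm-left", "palm-down"
    movement: str     # movement type, e.g. "circular", "straight"
    location: str     # place of articulation, e.g. "chin", "neutral space"

def mirrored(orientation: str) -> str:
    """Toy mirroring of a palm-orientation label (illustrative only)."""
    flips = {"palm-left": "palm-right", "palm-right": "palm-left"}
    return flips.get(orientation, orientation)

def satisfies_symmetry_condition(dominant: HandParameters,
                                 non_dominant: HandParameters) -> bool:
    """Reduced Symmetry Condition check: in a symmetric two-handed sign,
    both hands must share the same (or a mirrored) configuration,
    orientation, and movement."""
    return (dominant.handshape == non_dominant.handshape
            and (dominant.orientation == non_dominant.orientation
                 or dominant.orientation == mirrored(non_dominant.orientation))
            and dominant.movement == non_dominant.movement)

# A symmetric two-handed sign: same handshape and movement, mirrored palms.
right = HandParameters("B", "palm-left", "circular", "neutral space")
left = HandParameters("B", "palm-right", "circular", "neutral space")
print(satisfies_symmetry_condition(right, left))  # True
```

Location is carried in the structure but not tested here, since the condition as stated above constrains configuration, orientation, and movement.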
The brain processes language phonologically by first identifying the smallest units in an utterance, then combining them to make meaning.
This phonological processing can be described as segmentation and categorization: the brain recognizes the individual parts within a sign and then combines them to form meaning.
Even though the modalities of these languages differ (spoken vs. signed), the brain still processes them similarly through segmentation and categorization.
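As a rough illustration of segmentation and categorization, the sketch below segments a sign into parameter values, checks each value against known categories, and combines them to retrieve a meaning. The category sets and lexicon entries are hypothetical placeholders chosen only for illustration, not real ASL data.

```python
# Toy illustration of segmentation and categorization. The category sets and
# lexicon entries below are hypothetical placeholders, not real ASL data.
HANDSHAPES = {"5", "B", "A"}
LOCATIONS = {"chin", "forehead", "neutral space"}
MOVEMENTS = {"tap", "straight", "circular"}

LEXICON = {
    ("5", "chin", "tap"): "SIGN_1",      # placeholder gloss
    ("5", "forehead", "tap"): "SIGN_2",  # placeholder gloss
}

def recognize(handshape: str, location: str, movement: str) -> str:
    # Categorization: each segmented unit must fall into a known category.
    if (handshape not in HANDSHAPES
            or location not in LOCATIONS
            or movement not in MOVEMENTS):
        return "unrecognized unit"
    # Combination: the categorized units jointly identify a lexical sign.
    return LEXICON.get((handshape, location, movement), "unknown sign")

print(recognize("5", "chin", "tap"))      # SIGN_1
print(recognize("5", "forehead", "tap"))  # SIGN_2: location alone changes meaning
```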
For example, during awake brain surgery on a deaf patient, the patient's neural activity was observed and analyzed while they were shown videos in American Sign Language.
Spoken language produces sounds, which activate the auditory cortices in the superior temporal lobes.[15]
For example, the left superior temporal gyrus is stimulated by language in both spoken and signed forms, even though it was once assumed to be affected only by auditory stimuli.