Communication
is the life-principle for all living organisms - be they plants, insects, animals
or human beings (Sovak, 1979). The fact that plants communicate with animals has
been proved by the so-called herbo-galvanic reflex. Insects are said to utilize chemical
releasers (pheromones) which are probably the first signal put to service in the
evolution of animal communication (Wilson, 1969 : 75). Various kinds of visual
and acoustic communication signals are used in varying degrees by various animals.
But it is with man that we get a qualitatively distinct model of language which
is both specific and species-consistent. The evolution of this model is hidden
in the series of transformations, structural and functional, that took place in
the course of the formation of modern man. It is tied to the history of physiological
adaptations, of cognitive specializations, of sensory specifications (Lenneberg,
1968 : 610 and 1967). Chomsky has shown that the basic form of this model (Universal
Grammar) is innate in character; in fact, it is the reflex of the human mind accountable
in terms of human biology (Chomsky, 1965; 1968 and 1975). Thus, according to him,
'Some intellectual achievements, such as language learning, fall strictly within
biologically determined cognitive capacity' (Chomsky, 1975 : 27).
Certain
facts regarding the nature of cognitive capacity, as conceived by Chomsky, should
be mentioned before we discuss the communicative potentials a deaf person shows through
his behaviour. It is emphasised that one of the basic faculties of human mind
which is common to the species is the faculty of language. Secondly, the intrinsic
nature of the language faculty is free of stimulus control. Thirdly, given appropriate
stimulation, it is this faculty which constructs a grammar. Fourthly, Universal
Grammar - the system of principles represented in the genetically determined initial
state, and Grammar - the system of knowledge represented in the mind and the brain
in the eventual steady state - are real objects, part of the physical world (Chomsky,
1983). It is true that a typical human language has the vocal-auditory channel
of communication. But the exploitation of the vocal-auditory channel belongs to the
area of performance. A channel of communication has nothing to do directly with
the cognitive faculty or linguistic competence. If a child has a normal mind (a
normally functioning central nervous system), it is hypothesized that he can be trained to
exploit linguistic signs for communication purposes. There exist a number of interesting
circumstances wherein spoken language is either totally or partly in abeyance.
Thus arise various sign-languages, as they are called (Critchley, 1939 : 32).
Indian
language philosophers, like Bhartrhari, make a distinction between covert speech
(implicit speech element) and overt speech (external manifestation in the form
of spoken language). According to him there cannot be any cognitive state of human
mind without the imprint of implicit speech element (inner speech). His familiar
metaphor for revealing the intrinsic relationship between consciousness and language
is fire and illumination: Just as fire and illumination cannot exist without each
other, neither can consciousness and language. Like a child, a deaf person may not use language
overtly, but he is obviously endowed with implicit speech. The external world
in fact awakens the 'word seed' in the light of which the child cognizes the external
world through the process of vikalpa (fictional construction). Every person -
normal speaking being or a deaf - does have the ability to construct the fictional
world. It is a different matter whether he is physiologically competent to externalize
the fictional world through overt speech or not. One may argue here that in adverse
circumstances such as impaired hearing, one may adopt another sensory channel
such as visual and tactile.
The
most important fact to be remembered is that sign languages are produced by human
agents through the visual-gestural channel of communication. These sign languages
are also the result of the cognitive faculty of human mind. There are many kinds
of evidence to prove that speech and gesture have developed side by side demanding
a common type of communicative competence. It is a commonplace observation that
our discourse is often accompanied by nonverbal but parallel system of communication:
intonation, mimicry, gesture, etc. Our talk is often accompanied on the one hand
by nonverbal sounds like interjections, onomatopoeia, voice modulation and the
like, and on the other hand by shrugging of the shoulders, raising of the eyebrow,
nodding or shaking of the head, etc.1 These non-verbal codes of communication
are often integrated with the verbal code of sign system in such an inextricably
inter-woven way that without a composite conceptual framework it is not possible
to reveal the significance of either the vocal symbol or visual sign system. As
referred to by Critchley, Abraham has mentioned that the spoken language of the
Brazilian Puris is so meagre that it has only one word - 'day' for the concepts
'yesterday', 'today' and 'tomorrow'.
For
the sake of comprehensibility, the Brazilian Puri has to supplement the vocal
symbol with gestures; he indicates 'yesterday' by saying 'day' along with a
concurrent visual symbol, i.e., pointing behind him; for 'today' he points to
the sky and for 'tomorrow' he indicates forwards (Critchley, 1939 : 24).
When
two normal human beings communicate with each other by way of talking they generally
employ an admixture of verbal and non-verbal codes controlled by a common cognitive
plan of action. This is possible because for a normal man both the channels of
communication, vocal-auditory and visual-gestural, are available for sign production.
It is interesting to note what happens when one of the channels for communication
is completely blocked because of the interlocutors' physical impairment. For example,
in the case of the blind we find that the channel related to sight is completely
blocked while in case of the deaf it is the vocal-auditory channel which becomes
dysfunctional. But, while the blind can acquire a language in a normal way and
converse among themselves as well as with other normal human beings by using the
vocal-auditory channel of communication, deaf people are left only with the
visual-gestural channel of communication. Because man is basically a sign producing
animal, a social being that cannot exist without communicative function, he is
forced to exploit the system of visual sign to the maximum. He invents a sign
language with a system of signs parallel to the vocal code. The underlying principle
behind the system of signs is the same as we find in the verbal code because both
kinds of sign manifestations - non-verbal as well as verbal - flow from the same
cognitive faculty (i.e., the central nervous system).
We
will call this perspective of looking at the system of signs employed by deaf
people during their act of communicative functioning Semio-linguistic. It has
been called semio-linguistic because, firstly, it involves a study in semiotic
theory which relates the two aspects of signs - signans, which is immediately
perceptible, and signatum, which is inferable and apprehensible - and secondly,
it proposes that all man-created sign-systems have the same deep seated principles
or schemata which language as a verbal system displays. This semio-linguistic
perspective directs us to study the sign language employed by deaf persons not
as an ad hoc use of gesture but as a system of gestural signs based broadly on
linguistic principles. What is being emphasised here is that the semio-linguistic
orientation has a breadth of perspective which is on the one hand wider than what
linguistic theories at present are able to offer because it includes in its orbit
those non-verbal components of communicative behaviour which get manifested and,
on the other hand, narrower than what a general theory of semiotics would like
to include in its scope because it excludes from its orbit all those sign delineations
which are not species-specific and species-consistent.
This
semiolinguistic perspective to the study of sign language can dispel some of the
major misunderstandings about the semiotic nature of the true sign language, say
American Sign Language (ASL). Some of such misunderstandings are as follows:
1.
Sign Languages are Infra-human
Such
a misunderstanding is based on the belief that the sign language of the deaf (mutes)
is based on instinctive gestures which are shared commonly by lower animals. The
reason for this misunderstanding lies in our failure to differentiate between non-verbal
signs which are exclusively human (anthropo-semiotic) and those gestures which
are shared commonly by human beings and other animals of lower order (Zoosemiotic).
According to Sebeok the two are often confused but it is important to distinguish
the purely anthroposemiotic systems, found solely in man, from the zoosemiotic
systems which man shares with at least some ancestral species (Sebeok, 1968).
As pointed out by Diebold there are some recurrent gestural responses which are
part of the shared bodily and behavioural characteristics of phylogenetically
closely related species, but there also exist gestural responses which are highly
socio-culturally conditionable categories which can easily be labeled as 'ethnic
gesture' (Diebold, 1968). The existence of such vehicles proves that sign languages
are not sub-human.
2.
Sign Languages are not Language-like
It
has been reiterated by many scholars that as sign languages are basically mimetic
(iconic and indexical) in nature, they are in their manifestation also universal
and pancultural. According to them, gestural signs employed by the deaf (mutes)
during their act of communication involve instinctive gestures which belong to
the category of 'natural signs'. Contrary to this, it has been stressed by many
structuralists that the symbolism involved in verbal communication is not iconic
but arbitrary. According to Hockett, it is this principle of arbitrariness which
puts a human language in an advantageous position because then there is no limit
to what can be communicated about (Hockett, 1960 : 4).
Recent
researches in the field of linguistics and semiotics disprove the above mentioned
points of view. On the question of whether there exists a simple and constant relation
between the gestural signifiers and their denotative signata, or on the issue
of whether sign languages employed by deaf-mutes are universally and panculturally
understood, there is no ambiguity left. It has been shown by Stokoe (1972)
that the relations between gestural vehicles and what they denote are anything
but simple and iconic - at least his semiotic study of ASL dispels such a naive
assumption. Similarly, research conducted by Battison and Jordan (1976) and
Jordan and Battison (1976) on seventeen different national sign languages categorically
state that a sign language is neither universally understood nor is pancultural
in orientation. They have also convincingly shown that like different verbal languages,
different sign languages are also not mutually intelligible. In addition to this
while studying historical changes which ASL has undergone, Frishberg has pointed
out a universal tendency which sign languages exhibit - a change in the direction
of arbitrariness away from iconicity (Frishberg, 1975).
Linguists,
on the other hand, are now becoming convinced that the principle of arbitrariness
is a legacy of earlier intellectual trends (Lehmann, 1968). According to Friedrich,
'the multiple fallaciousness of the principle of arbitrariness suggests that it
is time to make a 180° turn and state the antithetical position: the lexical symbol
is nonarbitrary' (Friedrich, 1978). According to him, non-arbitrariness is the unmarked
condition. In fact, it is a matter of degree describable in terms of trends and
frequencies. In the light of the above facts, it becomes obvious that to say that
sign languages are not language-like because their signs are iconic while verbal
signs are arbitrary, holds no validity.
3.
Sign Languages Cannot be Studied Linguistically
Though
some attempts have been made in the direction of studying sign languages used
by deaf-mutes on the basis of linguistic theories, it is held by a section of
scholars working in the field of non-verbal behaviour that linguistics alone cannot
provide a conceptual framework for the multi-dimensional aspects of a sign system
in general and inherently specific nature of manual alphabets in particular. Amongst
those who have worked on deaf-sign systems linguistically, the names of McCall
(1965), Stokoe (1966 and 1972), Schlesinger (1969), Cicourel and Boese (1971),
Battison (1974), Frishberg (1975), Fischer and Gough (1978) deserve mention. It
is true that gesture symbols or manual alphabets used by deaf persons are tertiary
symbol sets (for a normal man who uses vocal symbols) and are manifested in three
dimensional space and time simultaneously. But what is more important to remember
is that these are the signs of a sign language. They are not isolated instances
of gesticulation. It is the syntactic orientation which differentiates signs from
gestures. Signs occur in phrases and sentences. Similar or identical gestural
phenomena which are not signs signify messages which need no parsing because the
vehicles have no syntactic structure: each one means what it means by virtue of
being what it is. Signs in a sign language, however, mean what they mean by virtue
of relation to other signs used with them as much as by being signs (Stokoe, 1972
: 13). This relationship between signs of ASL has been dealt with in detail within
the transformational generative model in some of the papers included in a recent
book edited by Klima and Bellugi (1979); however, it is commonly upheld by scholars
that 'linguistics as the science of the internal structure of spoken languages
may not be the best discipline for analysis of human sign languages' (Stokoe,
1980 : 895). Unless all the three aspects of sign act - what acts, what it does
and where (configuration, action and location of sign) - are taken into account
and unless a general theoretical conspectus is evolved that includes in its scope
essentials and strategies of non-verbal behaviour, no linguistic theory could
provide a model for explicating the true nature of sign languages. A semiolinguistic
perspective, as defined in the earlier part of the paper, is thus inevitable for
studying sign languages used by deaf-mutes.
It
is true that the visual-manual sign languages employed by deaf persons, which
have been developed as a system of communication, do perform almost all the functions
of a language which operates on vocal-auditory channel. It is equally true that
these sign languages satisfy the inner urge of deaf persons to communicate with
their fellow-beings. However, to whatever extent such a sign language may possess
the basic properties of the usual verbal system, it is also equally true that
its communicative potential is basically restricted to the intra-group of the
deaf. Such an intra-group communication has resulted in the social segregation
of the deaf (Blanton, 1968). Deaf persons are generally well adjusted in their
own community (Baroff, 1963). The result is that they are left with no motivation
to communicate with the normal-hearing section of society. An attempt has therefore
been made to develop communicative potential in them which could enable them to
become a member of the broader speech community in which they live. This is achieved
by compensating for their sensory deficit of auditory perception by providing
them hearing aids. We would like to call this dimension of the study of deafness
'Audio-lingual'.
The
audio-lingual perspective raises many theoretical and empirical questions related
to verbal behaviour. However, it dispels first the belief that a deaf child cannot
learn a verbal language because his audition is defective. It also refutes the
view that the cognitive development of the deaf cannot follow the same learning
process as that of hearing children. It also asserts that there is never a case
of total deafness and further points out that howsoever small the potential of
hearing a deaf child has, it can be utilised in the development of speech. The
Nuffield Hearing and Speech Centre of London has convincingly shown that with
proper audio-metric criteria for testing and with linguistically correct method
of observation and classification of acoustic cues, children and adults with impaired
hearing can be trained to develop responses to sounds. As a result, the clinic
has produced many instances of children with very severe hearing losses (70, 80
and 90 dB over the whole audible range in the better ear) who have developed excellent
speech, sometimes indistinguishable from normal, and who have consequently been
educated successfully among normally hearing children in an ordinary school (Fry,
1966 : 200).
The
audio-lingual perspective of language development in the normal and the deaf child
emphasizes the following aspects of linguistic theory:
1.
A linguistic theory is a theory of biological endowment capturable in terms of
Universal Grammar (UG). UG thus is the theory of the human faculty of language
and hence, underlies the acquisition and use of language. There is no a priori
reason for believing that the deaf child's cognitive faculty for learning a language
is in quality different from that of the hearing child's. In fact, scholars have
convincingly shown that the cognitive development of deaf children follows the
same path as that of hearing children (Fry, 1966).
2. While UG appears
to be unique in significant measure to the faculty of mind, within the framework
of UG language specific grammars differ from language to language. Any particular
grammar conforms to the principles of UG, but is further articulated; it presents
as well accidental facts that distinguish the particular language in question
(Chomsky, 1977 : 2). This leads one to accept that the accidental facts of a language
(which differentiate the grammar of one language from that of another) are also
a component of child's language faculty. They are learnt by the child through
the linguistic data to which he gets exposed. It is to be emphasised that the
peripheral hearing mechanism relays information to the brain in the form of nerve
impulses, but the organisation of this information is solely the work of the central
mechanism (Fry, 1966 : 199). In the deaf children it is the hearing mechanism
for receiving the acoustic cues which is defective (and not the central mechanism
which organises the information in the form of linguistic competence).
Because of this defect in the hearing mechanism, a deaf child is unable to combine
phonation with articulation during the babbling stage. A deaf child also tries
out his vocalisation mechanism by way of coos and chuckles but as he is unable
to establish a feedback loop, he is also unable to establish a link between auditory
cues and kinesthetic impressions. As the deaf child is unable to set up connection
between the auditory and motor aspects of babbling like a normal child, babbling
soon disappears from his verbal behaviour. The above discussion goes to suggest
that the onset of language is not dependent upon either acoustic impression or
motor control, and hence as shown by Lenneberg (1962), it is quite independent
of motor skills.
3. As the crucial component of the learning process lies
in the brain in the form of speech-sound processor and organizer of acoustic sensation,
the audiolingual perspective asserts that it is possible to bring the functional
units of sound (like phoneme, phonological syllable, etc.) to the psychological
reality in the linguistic competence of even a deaf child. Three facts in this
respect have to be noted - (a) the functional categories (like phonemes) are functionally
discriminated from one another and are units of linguistic competence; (b) categorical
perception of functional units is the consequence of the operation of speech-sound
processor (which is located in the brain) and (c) the physical characteristics
of speech sounds are not in one to one relationship with the phonologically relevant
units of sound systems.
Those scholars who are working with deaf children within the audiolingual model
of speech training, try to provide the child with a set of acoustic cues (which
a deaf child is capable of perceiving after suitable amplification) so that with
suitable exposure to speech, he is able to develop the phonological system. Because
of the fact that the amount of speech a child develops depends not so much on
the amount of hearing per se as upon the use he is able to make of his hearing
for language learning (Fry, 1966 : 201), an attempt was first made to measure
the speech perception potential a hearing impaired child or adult had. It was
this hearing potential which was utilised for training these children. Attempts
have also been made to evolve a technique for measuring the speech perception
of hearing-impaired children. A conventional audiometric clinical testing generally
includes pure tone detection threshold, a speech reception threshold and a sound
discrimination power for each ear. According to Asp, the optimal field of hearing
usually produces better speech discrimination scores than wide-band amplification
or a frequency response that amplifies those frequencies that have the greatest
hearing loss (Asp, 1975 : 215).
Taking into consideration the time of onset of the hearing impairment and the
speech development potential, Kostic (1980) has proposed the following classification
of hearing-impaired children:
Type | Nature of Hearing Loss | Speech Potential
Zero Group | Total deafness. No contact with sounds. Loss at birth. | Children can acquire elements of speech through kinesthetic feedback or through visual symbols.
First Group | Severe hard-of-hearing. Contact with sound only through a narrow band (500 to 600 Hz). Loss between 2 1/2 and 9 months (period of early babbling). | Children can perceive only supra-signals - i.e., tonal patterns within the frequencies of their remnants.
Second Group | Hard-of-hearing; contact with sound at frequencies not less than 1000 Hz and not more than 1500 Hz. Loss between 9 and 24 months (period of babbling). | Children can perceive with supra-signals voice qualities like voicing, features associated with the first formant of vowels and certain cues for consonants.
Third Group | Uneven pattern of hearing loss. Cannot develop speech without selective amplification. Loss between 2 and 4 years of age. | Children can acquire normal speech with appropriate selective amplification.
Fourth Group | Slight to lineal loss. Loss between 5th and 7th year. | Children can acquire normal speech after speech training.
Under the general supervision of Kostic, the Rudi Čajavec electronics factory,
Yugoslavia, is now producing instruments (SAFA - Selective Auditory Filter
Amplifier). The instrument has 27 independent units which process the frequency
spectrum from 105 Hz to 9600 Hz. The main purpose of the instrument is to bring
the hard-of-hearing child into contact with speech sound, after a diagnosis that
leads to a quantitative and qualitative evaluation of the child's sensation and
perception of speech and environmental sound stimuli (Gerdes, 1982).
The Audio-lingual perspective
on deaf education also takes into account the etiology of the hearing loss,
the time of onset of deafness and the extent and degree of hearing impairment.
The common etiologies are heredity, rubella, meningitis, fever and premature
birth (Asp, 1975). The time of onset divides deaf patients basically into two
groups - prelingual and postlingual (i.e., deafness before the acquisition of
language or after). If it is a postlingual hearing loss, the task becomes relatively
easy for any well experienced speech pathologist. Here the problem is restricted
to the area of perception and articulation (i.e., performance). In case of prelingual
hearing loss, the problem is also to develop in the child the linguistic competence.
It should be remembered 'that onset of speech consists of a gradual unfolding
of capacities' (Lenneberg, 1966 : 221), and thus, while enabling a child to develop
language competence, attention has to be paid to potentialities of behaviour -
the underlying matrix for behaving - instead of to a particular speech act.
This new Audio-lingual
perspective on deaf education is quite different from the old method practised
in traditional schools for teaching the deaf child to speak. It emphasises the
need for early detection of deafness and its diagnosis, preferably before the
age of one year, extensive exposure to language in early age, fitting of two hearing
aids, one in each ear, developing in the child the language competence rather
than restricting the teaching to the area of performance (skill oriented) etc.
This method minimises the role of formal instruction and gives low priority to
the manual language which deprives the children of the experience of intrinsic
properties and function of language.
The two perspectives - Semio-linguistic
and Audio-lingual for deaf-education, are built on linguistic principles. While
the former perspective makes the deaf persons use visual signs in a system to
communicate with members of their own intragroup, the latter perspective builds
in the child an ability to talk with persons outside of his group also. It is
generally presumed that the second line of educational action has greater potential
for the rehabilitation of deaf children because it provides them an opportunity
to achieve the normal educational and vocational levels. But the situation is
not as simple as it is generally thought to be. Gorman points out that in Britain
deaf children educated in oral schools often leave the school at the age of fifteen
because they do not find their group identity, social stability and economic viability.
They generally make contact with the non-oral deaf social group and prefer to
enter that group after learning the sign language (Gorman, 1960). This simply
proves that deafness is not merely a problem of semantics and cognition or of
speech perception and production; it is also a severe social handicap.
NOTES
1. It should be noted that semiology was considered by Saussure
as the science which studies the life of signs in society (Saussure, 1916). Linguistics,
according to him, is only a part of the general science of semiology. Scholars
now talk about the necessity of inverting Saussure's declaration; for example,
according to Barthes, 'Linguistics is not a part of general science of signs,
even a privileged part, it is semiology which is part of linguistics...' (Barthes,
1967 : 11) and according to Julia Kristeva, all sign systems are articulated like
a language (Kristeva, 1970).
2.
ASL is the language used by most deaf Americans. It is closely related to French
Sign Language, first described in Epée (1776), having been imported by a native
signer of the latter, Laurent Clerc, in 1817 (Stokoe, 1972 : 119).