Linguistics is the scientific study of language.
Theoretical (or general) linguistics encompasses a number of sub-fields, such as the study of language structure (grammar) and meaning (semantics). The study of grammar encompasses morphology (formation and alteration of words) and syntax (the rules that determine the way words combine into phrases and sentences). Also a part of this field are phonology, the study of sound systems and abstract sound units, and phonetics, which is concerned with the actual properties of speech sounds (phones), non-speech sounds, and how they are produced and perceived.
Linguistics compares languages (comparative linguistics) and explores their histories, in order to find universal properties of language and to account for its development and origins (historical linguistics).
Linguistic inquiry is pursued by a wide variety of specialists, who may not all be in harmonious agreement; as journalist Russ Rymer put it: “Linguistics is arguably the most hotly contested property in the academic realm. It is soaked with the blood of poets, theologians, philosophers, philologists, psychologists, biologists, anthropologists, and neurologists, along with whatever blood can be got out of grammarians.”
Divisions, specialties, and subfields
The central concern of theoretical linguistics is to characterize the nature of human language ability, or competence: to explain what it is that an individual knows when they are said to know a language, and to explain how individuals come to know languages.
All humans (setting aside extremely pathological cases) achieve competence in whatever language is spoken (or signed, in the case of signed languages) around them when they are growing up, with apparently little need for conscious instruction. Non-humans do not. Therefore, there is some basic innate property of humans that causes them to be able to use language. There is no discernible genetic process responsible for differences between languages: an individual will acquire whatever language(s) they are exposed to as a child, regardless of their parentage or ethnic origin.
Linguistic structures are pairings of meaning and sound (or other externalization). Linguists may specialize in some subpart of the linguistic structure, which can be arranged in the following terms, from sound to meaning:
- Phonetics, the study of the physical aspects of sounds of human language
- Phonology, the study of patterns of a language’s sounds
- Morphology, the study of the internal structure of words
- Syntax, the study of how words combine to form grammatical sentences
- Semantics, the study of the meaning of words (lexical semantics) and fixed word combinations (phraseology), and how these combine to form the meanings of sentences
- Pragmatics, the study of how utterances are used (literally, figuratively, or otherwise) in communicative acts
- Discourse analysis, the analysis of language use in texts (spoken, written or signed)
Many linguists would agree that the divisions overlap considerably, but the independent significance of each of these areas is not universally acknowledged. Regardless of any particular linguist’s position, each area has core concepts that foster significant scholarly inquiry and research.
Intersecting with these domains are fields arranged around the kinds of external factors considered. For example:
- Stylistics, the study of linguistic factors that place a discourse in context
- Developmental linguistics, the study of the development of linguistic ability in an individual, particularly the acquisition of language in childhood
- Historical (or diachronic) linguistics, the study of language change
- Evolutionary linguistics, the study of the origin and subsequent development of language
- Psycholinguistics, the study of the cognitive processes and representations underlying language use
- Sociolinguistics, the study of social patterns of linguistic variability
- Clinical linguistics, the application of linguistic theory to speech-language pathology
- Neurolinguistics, the study of the brain networks that underlie grammar and communication
- Biolinguistics, the study of natural and human-taught communication systems in animals, compared with human language
- Computational linguistics, the study of computational implementations of linguistic structures
- Applied linguistics, the study of language-related issues in everyday life, notably language policy, planning, and education
A substantial part of linguistic investigation is into the nature of the differences among the languages of the world. The nature of variation is very important to an understanding of human linguistic ability in general: if human linguistic ability is very narrowly constrained by biological properties of the species, then languages must be very similar. If human linguistic ability is unconstrained, then languages might vary greatly.
But there are different ways to interpret similarities among languages. For example, the Latin language spoken by the Romans developed into Spanish in Spain and Italian in Italy. Similarities between Spanish and Italian are in many cases due to both being descended from Latin. So in principle, if two languages share some property, this property might either be due to common inheritance or due to some property of the human language faculty. Of course, there is always the possibility of random chance being at the root of the similarity, such as with Spanish ‘mucho’ and English ‘much’, which are not related historically in any way, though they mean essentially the same thing and sound similar.
Often, the possibility of common inheritance can be essentially ruled out. Given the fact that learning language comes quite easily to humans, it can be assumed that languages have been spoken at least as long as there have been biologically modern humans, probably at least fifty thousand years. Independent measures of language change (for example, comparing the language of ancient texts to the daughter languages spoken today) suggest that change is rapid enough to make it impossible to reconstruct a language that was spoken so long ago; as a consequence of this, common features of languages spoken in different parts of the world are not normally taken as evidence for common ancestry.
Even more striking, there are documented cases of sign languages being developed in communities of congenitally deaf people who could not have been exposed to spoken language. The properties of these sign languages have been shown to conform generally to many of the properties of spoken languages, strengthening the hypothesis that those properties are not due to common ancestry but to more general characteristics of the way languages are learned.
Loosely speaking, the collection of properties that all languages share can be referred to as “universal grammar” (or UG), the characteristics of which are a much-debated topic. Linguists and non-linguists also use the term in several different ways.
Universal properties of language may be partly due to universal aspects of human experience; for example, all humans experience water, and all human languages have a word for water. Nonetheless, UG seeks to define those structures which are necessarily a part of all human language because of the de facto structure of the human mind; so similarities among human languages that can be attributed to similarity of experience do not help answer the more difficult questions about UG. Clearly, experience is part of the process by which individuals learn languages; but experience by itself is not enough, since animals raised around people learn extremely little human language, if any at all.
A more interesting example is this: suppose that all human languages distinguish nouns from verbs (this is generally believed to be true). This would require a more sophisticated explanation, since nouns and verbs do not exist in the world, apart from languages that make use of them.
In general, a property of UG could be due to general properties of human cognition, or due to some property of human cognition that is specific to language. Too little is understood about human cognition in general to allow a meaningful distinction to be made. As a result, generalizations are often stated in theoretical linguistics without a stand being taken on whether the generalization could have some bearing on other aspects of cognition.
Noam Chomsky, recognized as the father of modern linguistics
It has been understood since the time of the ancient Greeks that languages tend to be organized around grammatical categories such as noun and verb, nominative and accusative, or present and past, though, importantly, not exclusively so. The grammar of a language is organized around such fundamental categories, though many languages express the relationships between words and syntax in other discrete ways (cf. some Bantu languages for noun/verb relations, ergative/absolutive systems for case relations, several Native American languages for tense/aspect relations).
In addition to making substantial use of discrete categories, language has the important property that it organizes elements into recursive structures; this allows, for example, a noun phrase to contain another noun phrase (as in the chimpanzee’s lips) or a clause to contain a clause (as in I think that it’s raining). Though recursion in grammar was implicitly recognized much earlier (for example by Jespersen), the importance of this aspect of language was only fully realized after the 1957 publication of Noam Chomsky’s book Syntactic Structures, which presented a formal grammar of a fragment of English. Prior to this, the most detailed descriptions of linguistic systems were of phonological or morphological systems, which tend to be closed and admit little creativity.
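Recursion of this kind is easy to illustrate with a toy generative grammar. The following sketch (a hypothetical miniature fragment, not a description of English) has a recursive NP rule, so a noun phrase may contain another noun phrase, as in the chimpanzee’s lips:

```python
import random

# A toy recursive grammar (a hypothetical fragment, for illustration only).
# The second NP rule is recursive: an NP may contain another NP.
GRAMMAR = {
    "S":  [["NP", "VP"]],
    "NP": [["the", "N"], ["NP", "'s", "N"]],   # second rule is recursive
    "VP": [["V", "NP"]],
    "N":  [["chimpanzee"], ["lips"], ["banana"]],
    "V":  [["sees"]],
}

def expand(symbol, depth=0, max_depth=3):
    """Recursively expand a grammar symbol into a flat list of words."""
    if symbol not in GRAMMAR:                  # terminal: an actual word
        return [symbol]
    rules = GRAMMAR[symbol]
    # Past max_depth, always take the first (non-recursive) rule so that
    # expansion terminates; otherwise pick a rule at random.
    rule = rules[0] if depth >= max_depth else random.choice(rules)
    words = []
    for sym in rule:
        words.extend(expand(sym, depth + 1, max_depth))
    return words

random.seed(1)
print(expand("S"))
```

Because the NP rule mentions NP on its right-hand side, the grammar generates unboundedly many sentences from a handful of rules, which is exactly the creative property the closed phonological and morphological systems lack.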
Chomsky used a context-free grammar augmented with transformations. Since then, context-free grammars have been written for substantial fragments of various languages (for example GPSG, for English), but it has been demonstrated that human languages include cross-serial dependencies, which cannot be handled adequately by context-free grammars; handling them requires additional generative power, for example transformations.
An example of a natural-language clause involving a cross-serial dependency is the following Dutch sentence:
Ik denk dat Jan Piet de kinderen zag helpen zwemmen
I think that Jan Piet the children saw help swim
‘I think that Jan saw Piet help the children swim’
The important point is that the noun phrases before the verb cluster (Jan, Piet, de kinderen) are identified with the verbs in the verb cluster (zag, helpen, zwemmen) in left-right order.
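This left-to-right identification can be sketched in a few lines of code (an illustrative toy, not a parser; the word lists are taken from the glossed example above):

```python
def cross_serial_pairs(noun_phrases, verb_cluster):
    """Pair the i-th noun phrase with the i-th verb, in left-to-right order."""
    assert len(noun_phrases) == len(verb_cluster)
    return list(zip(noun_phrases, verb_cluster))

nps = ["Jan", "Piet", "de kinderen"]
verbs = ["zag", "helpen", "zwemmen"]

# Jan-zag ("Jan saw"), Piet-helpen ("Piet help"), de kinderen-zwemmen
# ("the children swim"). When the two sequences are concatenated, these
# dependency links (0-0, 1-1, 2-2) cross one another, unlike the nested
# mirror-image links (0-2, 1-1, 2-0) that centre-embedding context-free
# rules produce.
print(cross_serial_pairs(nps, verbs))
```

The crossing pattern is what matters: context-free grammars can generate nested (mirror-image) dependencies, but not dependencies that cross in this way, which is why a more powerful formalism is needed.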
This means that natural-language formalisms must be relatively powerful in terms of generative capacity. The models currently used (LFG, HPSG, Minimalism) are very powerful, in general too powerful to be computationally tractable in principle; practical implementations of them are therefore scaled down.
Contextual linguistics may include the study of linguistics in interaction with other academic disciplines. While in core theoretical linguistics language is studied independently, the interdisciplinary areas of linguistics consider how language interacts with the rest of the world.
Theoretical linguistics is concerned with finding and describing generalities both within particular languages and among all languages. Applied linguistics takes the results of those findings and applies them to other areas. Often applied linguistics refers to the use of linguistic research in language teaching, but results of linguistic research are used in many other areas, as well.
Many areas of applied linguistics today involve the explicit use of computers. Speech synthesis and speech recognition use phonetic and phonemic knowledge to provide voice interfaces to computers. Applications of computational linguistics in machine translation, computer-assisted translation, and natural language processing are extremely fruitful areas of applied linguistics which have come to the forefront in recent years with increasing computing power. Their influence has had a great effect on theories of syntax and semantics, as modelling syntactic and semantic theories on computers constrains the theories to computable operations and provides a more rigorous mathematical basis.
Whereas the core of theoretical linguistics is concerned with studying languages at a particular point in time (usually the present), diachronic linguistics examines how language changes through time, sometimes over centuries. Historical linguistics enjoys both a rich history (the study of linguistics grew out of historical linguistics) and a strong theoretical foundation for the study of language change.
In universities in the United States, the non-historical (synchronic) perspective seems to have the upper hand. Many introductory linguistics classes, for example, cover historical linguistics only cursorily. The shift in focus to a synchronic perspective started with Saussure and became predominant with Noam Chomsky.
Prescription and description
Main article: Prescription and description
Research currently performed under the name “linguistics” is purely descriptive; linguists seek to clarify the nature of language without passing value judgements or trying to chart future language directions. Nonetheless, there are many professionals and amateurs who also prescribe rules of language, holding a particular standard out for all to follow.
Prescriptivists tend to be found among the ranks of language educators and journalists, and not in the actual academic discipline of linguistics. They hold clear notions of what is right and wrong, and may assign themselves the responsibility of ensuring that the next generation use the variety of language that is most likely to lead to “success,” often the acrolect of a particular language. The reasons for their intolerance of “incorrect usage” may include distrust of neologisms, connections to socially-disapproved dialects (i.e., basilects), or simple conflicts with pet theories. An extreme version of prescriptivism can be found among censors, whose personal mission is to eradicate words and structures which they consider to be destructive to society.
Descriptivists, on the other hand, do not accept the prescriptivists’ notion of “incorrect usage.” They might describe the usages the other has in mind simply as “idiosyncratic,” or they may discover a regularity (a rule) that the usage in question follows (in contrast to the common prescriptive assumption that “bad” usage is unsystematic). Within the context of fieldwork, descriptive linguistics refers to the study of language using a descriptivist approach. Descriptivist methodology more closely resembles scientific methodology in other disciplines.
Speech versus writing

Most contemporary linguists work under the assumption that spoken (and signed) language is more fundamental, and thus more important to study, than written language, for several reasons:
- Speech appears to be a human universal, whereas there have been many cultures and speech communities that lack written communication;
- People learn to speak and process spoken languages more easily and much earlier than writing;
- A number of cognitive scientists argue that the brain has an innate “language module”, knowledge of which is thought to come more from studying speech than writing, particularly since language as speech is held to be an evolutionary adaptation, whereas writing is a comparatively recent invention.
Of course, linguists agree that the study of written language can be worthwhile and valuable. For linguistic research that uses the methods of corpus linguistics and computational linguistics, written language is often much more convenient for processing large amounts of linguistic data. Large corpora of spoken language are difficult to create and hard to find, and are typically transcribed and written. Additionally, linguists have turned to text-based discourse occurring in various formats of computer-mediated communication as a viable site for linguistic inquiry.
The study of writing systems themselves is in any case considered a branch of linguistics.
History
Main article: History of linguistics
Linguistics, at least as practiced today, has its origins in Iron Age India with the analysis of Sanskrit. The Pratishakhyas (from ca. the 8th century BC) constitute, as it were, a proto-linguistic, ad hoc collection of observations about mutations to a given corpus particular to a given Vedic school. Systematic study of these texts gave rise to the Vedanga discipline of Vyakarana, the earliest surviving account of which is the work of Pāṇini (c. 520 – 460 BC), who, however, looks back on what are probably several generations of grammarians whose opinions he occasionally refers to. Pāṇini formulated close to 4,000 rules which together form a complete and extremely compact generative grammar of Sanskrit. Inherent in his analytic approach are the concepts of the phoneme, the morpheme and the root. A consequence of his grammar’s focus on brevity is its highly unintuitive structure, reminiscent of contemporary “machine language” (as opposed to “human readable” programming languages). His sophisticated logical rules and technique have been widely influential in ancient and modern linguistics.
Indian linguistics maintained a high level for several centuries; Patanjali, in the 2nd century BC, still actively criticized Pāṇini. In the early centuries BC, however, Pāṇini’s grammar came to be seen as prescriptive, and later commentators became fully dependent on it. Bhartrihari (c. 450 – 510) theorized the act of speech as made up of four stages: first, the conceptualization of an idea; second, its verbalization and sequencing; third, the delivery of the speech into the air (all of these by the speaker); and last, the comprehension of the speech by the listener, its interpreter.
In the Middle East, the Persian linguist Sibawayh produced a detailed and systematic description of Arabic in 760, in his monumental work Al-kitab fi al-nahw (الكتاب في النحو, The Book on Grammar), bringing many linguistic aspects of language to light. In it he distinguished phonetics from phonology.
Western linguistics began in Classical Antiquity with grammatical speculation such as Plato’s Cratylus, but it remained far behind the achievements of the ancient Indian grammarians until the 19th century, when Indian literature began to become available in Europe.
Notable early 19th-century linguists include Jakob Grimm, who devised the principle of consonantal shifts in pronunciation known as Grimm’s Law in 1822; Karl Verner, who discovered Verner’s Law; August Schleicher, who created the “Stammbaumtheorie” (“family tree model”); and Johannes Schmidt, who developed the “Wellentheorie” (“wave model”) in 1872.
Ferdinand de Saussure was the founder of modern structural linguistics. Edward Sapir, a leader in American structural linguistics, was one of the first who explored the relations between language studies and anthropology. His methodology had strong influence on all his successors. Noam Chomsky’s formal model of language, transformational-generative grammar, developed under the influence of his teacher Zellig Harris, who was in turn strongly influenced by Leonard Bloomfield, has been the dominant model since the 1960s.
Chomsky remains by far the most influential linguist in the world today. Linguists working in frameworks such as Head-Driven Phrase Structure Grammar (HPSG) or Lexical Functional Grammar (LFG) stress the importance of formalization and formal rigor in linguistic description, and may distance themselves somewhat from Chomsky’s more recent work (the “Minimalist” program for Transformational grammar), connecting more closely to earlier work of Chomsky’s. Linguists working in Optimality Theory state generalizations in terms of violable rules, which is a greater departure from mainstream linguistics, and linguists working in various kinds of functional grammar and Cognitive Linguistics tend to stress the non-autonomy of linguistic knowledge and the non-universality of linguistic structures, thus departing importantly from the Chomskyan paradigm.
Extract by Mr. KHIM Vicheka