Reference essay for a BA linguistics course in Germany (written in English): Serial Verb Constructions in Mandarin Chinese
Details of the requested linguistics course essay:
Topic: Serial verb constructions in Mandarin Chinese
Language: English
Price:
Subject: English
Word count: approx. 4,000
Country of study: Germany
Data processing required: No
School:
Purpose: BA term paper
Deadline:
Additional requirements and notes:
The essay draws mainly on the following literature:
Primary reference
The essay is mainly a summary of this article: present your own view with examples, and test whether the author's theory is correct and whether the data given in the article are accurate; if there are problems, identify where they lie. Alternatively, consider whether the author's examples are too simple, and whether more complex examples still conform to the theory. The main text should run to about 12-15 pages.
Primary reference 2 is by the same author; I will scan it and send it to you by email.
Other references
Other literature may also be consulted, provided it is listed in the literature section at the end of the essay. The number of references is not fixed, but do not use too many; about ten is appropriate. Every source quoted in the writing must be cited.
Formatting: left margin 2.5 cm, right margin 4 cm, line spacing 1.5, body text 12 pt, block quotations 11 pt, font Times New Roman. Whenever another author's views or words are quoted, cite the source; this may be done directly in the text in the format (Author Year: page), e.g. (Fox 1987: 106). The full bibliographic information should appear in the literature list at the end. Use a fair number of footnotes, about 5 to 10.
Syntactic and semantic parsing has become one of the hot research topics of modern linguistics. There are two main tasks for syntactic parsing: 1. To determine the structure of the input sentence and to recognize the components of a sentence and the syntactic relationships among these components. 2. The normalization of sentence structures. (C. Y. Shi, C. N. Huang and J. Q. Wang, 1993, p.333) The task of semantic parsing refers to the derivation of a formal representation which is able to reflect the sentence meaning, according to the syntactic structure of the input sentence and the lexical meaning of each lexical entry in the sentence. (C. Y. Shi, C. N. Huang and J. Q. Wang, 1993, p.423)
As for the relationship between syntactic parsing and semantic parsing, one type of relationship is "syntactic parsing prior to semantic parsing" (Campbell, 2004), namely first deriving the syntactic representation of the input sentence, and then deriving its semantic representation through independent semantic parsing. The other type is "simultaneous processing of syntax and semantics", namely carrying out syntactic parsing and semantic parsing in parallel. (Z. W. Feng, 2001, p.176)
As for the methods of syntactic parsing and semantic parsing, it has been pointed out that, due to the distance between linguistic theories and the actual application of natural languages, parsing should comprise both grammars and algorithms. (T. J. Zhao, 2000, p.156) It has also been pointed out that grammars are only the declarative representation of the knowledge structure of languages. (R. Q. Lu, 1996, p.955) When grammars function as the tool for the parsing of languages, they are referred to as language recognizers. Here the specific recognition processes are not included in the grammars overtly. Therefore, in order to construct the actual recognition processes, it is necessary to have another representation method, which is referred to as automata, such as pushdown automata. (Lewis & Papadimitriou, 1998)
Head-driven phrase structure grammar (HPSG) is an integrated theory of natural language syntax and semantics. (Pollard & Sag, 1994)
In HPSG, words, phrases and sentences are all signs. Signs have phonological, syntactic, semantic and discourse attributes. One way to represent an attribute structure is through an attribute-value matrix (AVM). (Pollard & Sag, 1994)
HPSG consists of rule schemata and lexical entries. The attribute structure of a lexical entry is the total sum of its attributes and the values of these attributes. The attribute structure of a schema is the sum of the attribute structures of the lexical entries in the schema. The attribute structure of a sentence is the sum of the attribute structures of the schemata in the sentence. (Pollard & Sag, 1994)
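As a rough illustration of an attribute-value matrix, a sign can be modelled as a nested dictionary. The feature names PHON, SYNSEM, HEAD and SUBCAT follow the text above, but the Python encoding itself is my own expository assumption, not Pollard & Sag's formalism.

```python
# A minimal sketch: an HPSG-style sign for the verb "walks",
# modelled as a nested dictionary (an attribute-value matrix).
walks = {
    "PHON": "walks",
    "SYNSEM": {
        "HEAD": {"CAT": "verb", "VFORM": "fin"},
        # SUBCAT: the verb still requires one nominative 3sg NP.
        "SUBCAT": [{"CAT": "noun", "CASE": "nom", "AGR": "3sg"}],
    },
}

def head_value(sign):
    """Return the HEAD attribute inside a sign's SYNSEM value."""
    return sign["SYNSEM"]["HEAD"]

print(head_value(walks))  # {'CAT': 'verb', 'VFORM': 'fin'}
```

Nesting dictionaries in this way mirrors how an AVM embeds smaller matrices as the values of attributes.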
The mother node in a schema is the head. The reason is as follows:
HPSG has two characteristics: 1. It inherits the principles of GPSG and is a grammar based on constraints. 2. In the meantime, it assimilates the strong points of LFG and emphasizes the importance of the lexicon in language construction. The main feature of this grammar is that it emphasizes the role that heads play in parsing, and the whole grammatical system is head-driven. (Pollard & Sag, 1994)
HPSG embodies the process of imparting head information through the SUBCAT feature of lexical items. In HPSG, the SUBCAT feature consists of a list of constituents, which describes in detail the complements required by the lexical item that serves as head. (Pollard & Sag, 1994) Heads in phrase structure grammar refer to constituents that determine the main functions of a sentence or phrase when that sentence or phrase is constructed. For example, in a verb phrase, the verb is the head of the phrase. (T. J. Zhao, 2000)
Since HPSG particularly highlights the role which the head plays, according to the SUBCAT feature of the head it is possible to associate the grammatical information of the head with that of the other constituents very conveniently, such that the information of the whole sentence is organized around the head as its kernel. Complex attributes can be used to represent the information of sentences, and this facilitates processing. (T. J. Zhao, 2000)
The attribute passing mechanism of HPSG is unification and structure sharing. (This is not required in general attribute grammar.) Unification refers to the existence of a maximum unification. For two attribute structures X1 and X2, if another attribute structure X3 is subsumed in both X1 and X2, then X3 is called a unification of X1 and X2. And if, for any unification X of X1 and X2, X is subsumed in X3, then X3 is the maximum unification of X1 and X2. (R. Q. Lu, 1996, p.969) Structure sharing means that the attribute value of the mother node of the lower schema is passed to the daughter node of the upper schema.
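The unification just defined can be sketched in code. This is a deliberately minimal illustration, assuming attribute structures are nested dictionaries with string-valued atoms; re-entrancy (true structure sharing) is not modelled here.

```python
def unify(x1, x2):
    """Return the most general attribute structure subsumed by both
    x1 and x2, or None if they carry contradictory atomic values.
    A minimal sketch: structures are nested dicts, atoms are strings."""
    if isinstance(x1, dict) and isinstance(x2, dict):
        result = dict(x1)
        for key, val in x2.items():
            if key in result:
                sub = unify(result[key], val)
                if sub is None:
                    return None        # feature clash: no unification exists
                result[key] = sub
            else:
                result[key] = val      # carry over information from x2
        return result
    return x1 if x1 == x2 else None    # atomic values must match exactly

a = {"CAT": "noun", "AGR": {"NUM": "sing"}}
b = {"AGR": {"NUM": "sing", "PER": "3rd"}, "CASE": "nom"}
print(unify(a, b))
# {'CAT': 'noun', 'AGR': {'NUM': 'sing', 'PER': '3rd'}, 'CASE': 'nom'}
```

The returned structure is subsumed by both inputs and contains no more information than necessary, which is exactly the "maximum unification" of the definition above.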
The attribute passing mechanism of HPSG is embodied in schemata and principles. Immediate dominance (ID) schemata in HPSG occupy a position in the theory analogous to that of X-bar schemata in GB theory: they are a small, universally available set of disjunctive constraints on the immediate constituency of phrases, from among which each language makes a selection, and thus the disjunction of the ID schemata itself constitutes a universal principle, which we call the Immediate Dominance Principle (IDP). (Pollard & Sag, 1994)
As for principles, there are some very important principles in HPSG, for example the Head Feature Principle (HFP), the Subcategorization Principle, the Trace Principle, and the Semantic Principle. We will introduce those principles which will be used in later chapters of this dissertation.
The Semantic Principle is defined as follows:
a. the RETRIEVED value is a list whose set of elements forms a subset of the union of the QSTOREs of the daughters; and the QSTORE value is the relative complement of that set;
b. (Case 1) if the semantic head's CONTENT value is of sort psoa, then the NUCLEUS value is identical with that of the semantic head, and the QUANTS value is the concatenation of the RETRIEVED value and the semantic head's QUANTS value; (Case 2) otherwise the RETRIEVED value is empty and the CONTENT value is token-identical to that of the semantic head.
The Head Feature Principle and the Subcategorization Principle are defined as follows.
1. Head Feature Principle (HFP): The HEAD value of any headed phrase is structure-shared with the HEAD value of the head daughter. (Pollard & Sag, 1994)
2. Subcategorization Principle: In a headed phrase (i.e. a phrasal sign whose DTRS value is of sort head-struc), the SUBCAT value of the head daughter is the concatenation of the phrase's SUBCAT list with the list (in order of increasing obliqueness) of SYNSEM values of the complement daughters. (Pollard & Sag, 1994)
The effect of the HFP is to guarantee that headed phrases really are projections of their head daughter. And the effect of the Subcategorization Principle is to check off the subcategorization requirements of the lexical head as they become satisfied by the complement daughters of its phrasal projections. (Pollard & Sag, 1994)
We now use the following example to explicate the HFP and the Subcategorization Principle. Consider the sentence "She walks." Here the HEAD value of "she" can be abbreviated as NP[nom][3rd, sing], which we can represent with a tag, say [1]. And the HEAD value of "walks" can be abbreviated as verb[fin], while the SUBCAT list of "walks" is <NP[nom][3rd, sing]>, which indicates that the head daughter "walks" needs a complement daughter whose HEAD value is NP[nom][3rd, sing]. Let us proceed in bottom-up fashion, beginning with the lexical head "walks". NP[nom][3rd, sing] is on the SUBCAT list of "walks"; therefore, some kind of minus operation is necessary, which means NP[nom][3rd, sing] has to be "checked off" or "cancelled". In other words, a complement daughter like "she", whose HEAD value is NP[nom][3rd, sing], is needed to check off NP[nom][3rd, sing] on the SUBCAT list of "walks"; "she" is then combined with "walks", and thus we obtain "She walks".
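The check-off operation on "She walks" can be sketched in code. This is an illustrative simplification, not Pollard & Sag's formalism: feature names follow the text above, signs are plain dictionaries, and only one SUBCAT requirement is handled.

```python
# Signs for "she" and "walks"; "walks" still requires one nominative
# 3sg NP on its SUBCAT list.
she = {"HEAD": {"CAT": "noun", "CASE": "nom", "AGR": "3sg"}, "SUBCAT": []}
walks = {"HEAD": {"CAT": "verb", "VFORM": "fin"},
         "SUBCAT": [{"CAT": "noun", "CASE": "nom", "AGR": "3sg"}]}

def combine(head_dtr, comp_dtr):
    """Build the mother sign: HEAD is shared with the head daughter
    (Head Feature Principle); the complement's HEAD value is checked
    off the head daughter's SUBCAT list (Subcategorization Principle)."""
    wanted = head_dtr["SUBCAT"][0]            # the sole remaining requirement
    assert comp_dtr["HEAD"] == wanted, "complement does not match SUBCAT"
    return {"HEAD": head_dtr["HEAD"],         # HFP: HEAD value passed up
            "SUBCAT": head_dtr["SUBCAT"][1:]} # requirement cancelled

sentence = combine(walks, she)
print(sentence)  # {'HEAD': {'CAT': 'verb', 'VFORM': 'fin'}, 'SUBCAT': []}
```

The empty SUBCAT list of the result signals a saturated phrase: all subcategorization requirements of the head have been satisfied.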
The Trace Principle is defined as follows: every trace must be subcategorized by a substantive head. (Pollard & Sag, 1994)
As noted above, the task of semantic parsing is to derive a formal representation that reflects the sentence meaning, according to the syntactic structure of the input sentence and the lexical meaning of each lexical entry in the sentence. Theoretically and practically, it is a very important research topic to study what forms can be used to express sentence meaning. (C. Y. Shi, C. N. Huang and J. Q. Wang, 1993, p.423)
As for the prospects of semantic parsing, the development of formal semantics provides its basis. (T. J. Zhao, 2000, pp. 242-245) Originally, the term "formal semantics" meant "the semantic analysis of formal languages". The term "formal linguistic semantics" was introduced to refer to the formal part of the linguistic semantic structure of human languages. (Lyons, 1995) Recently, simple type theory and linear logic within formal semantics have developed into new automatic semantic parsing technology.
In semantic parsing, the most important issue is to have grammars which are able to describe natural language semantics. Montague Grammar lays the foundation for the solution to this problem.
Around 1970, the American logician Richard Montague established Montague Grammar, which offered a new approach to studying natural language syntactic structures and semantic relationships. (T. J. Zhao, 2000, pp. 242-245) Montague described the truth-conditional semantics of human language in detail, claiming that "there is no important theoretical difference between natural languages and the artificial languages of logicians".
To exemplify Montague Grammar (Morrill, 1994, pp. 13-20): by a sign, Montague Grammar means an association of a symbol with a meaning. In the logic, sets of signs are classified into categories (or: types) represented by formulas. The logic of signs can be understood as an enrichment of categorial grammar. A sign consists of a prosodic form and a semantic form; an assignment consists of a sign together with a category formula.
When we employ logic to do parsing, one requirement is that premises behave as resources that get used up in inference to produce the conclusion: once consumed, premises are no longer available. Linear logic captures exactly this resource sensitivity, and it has been adopted by several logical deductive systems for parsing.
Naturally, linear logic and Montague Grammar can be combined and applied to parsing as deduction. Glue Semantics for HPSG is one such logical deductive system applied to HPSG.
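The resource-sensitive behaviour of premises can be sketched with a multiset of categories. This is only an illustration of linear-logic resource management, not a parser or an implementation of Glue Semantics; the category labels are my own assumptions.

```python
from collections import Counter

def consume(premises, needed):
    """Treat each premise as a resource usable exactly once.
    Return the premises left over after consuming `needed`,
    or None if some needed premise is unavailable."""
    pool = Counter(premises)           # multiset of available resources
    for cat in needed:
        if pool[cat] == 0:
            return None                # resource missing or already used up
        pool[cat] -= 1                 # consume the premise
    return list(pool.elements())

# "she walks": the verb consumes exactly one NP premise.
print(consume(["NP", "V"], ["NP"]))  # ['V']
print(consume(["V"], ["NP"]))        # None: no NP resource available
```

Unlike classical logic, where a premise may be reused freely, here each premise disappears from the pool after a single use, which is the behaviour the paragraph above describes.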
The CYK algorithm is a bottom-up parsing algorithm (Aho & Ullman, 1972). CYK is short for Cocke-Younger-Kasami. It is a parallel-processing parsing algorithm which adopts the idea of dynamic programming: it starts from small parse trees and gradually expands what has been derived. The same parse tree is never computed a second time, and no backtracking is required. All rules are in Chomsky normal form, and the computation can be finished within cubic time. (Z. W. Feng, 2001)
The CYK algorithm is designed to construct a parse chart over the input sentence (in the original example, a sentence of length 5). The algorithm can be formulated in the following two steps:
Step One determines the POS (part of speech) of the words in the input sentence. If a word belongs to several POSs, then we record all of these POSs in the chart. Step Two constructs the syntactic structure of the sentence.
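The two steps above can be sketched as a short recognizer. The toy grammar (in Chomsky normal form) and the example sentence are my own illustrative assumptions, not taken from the cited sources.

```python
# A runnable sketch of the two CYK steps described above.
def cyk(words, lexicon, rules):
    n = len(words)
    # chart[i][j]: categories spanning words i .. i+j (span of length j+1)
    chart = [[set() for _ in range(n)] for _ in range(n)]
    # Step One: record every POS of every word in the chart.
    for i, w in enumerate(words):
        chart[i][0] = set(lexicon[w])
    # Step Two: build larger constituents from smaller parse trees,
    # never recomputing a span (dynamic programming, no backtracking).
    for length in range(2, n + 1):          # span length
        for i in range(n - length + 1):     # span start
            for k in range(1, length):      # split point inside the span
                for (lhs, b, c) in rules:   # binary CNF rules: lhs -> b c
                    if b in chart[i][k - 1] and c in chart[i + k][length - k - 1]:
                        chart[i][length - 1].add(lhs)
    return "S" in chart[0][n - 1]           # does S cover the whole input?

lexicon = {"she": ["NP"], "sees": ["V"], "him": ["NP"]}
rules = [("S", "NP", "VP"), ("VP", "V", "NP")]
print(cyk(["she", "sees", "him"], lexicon, rules))  # True
```

The three nested loops over span length, span start, and split point are what give the algorithm its cubic running time.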