
June 2006
Volume 10, Number 1


Timed versus At-home Assessment Tests: Does Time Affect the Quality of Second Language Learners' Written Compositions?

Roger Kenworthy
Ohio University, Hong Kong
<hkadvisor@ohio.edu>

Abstract

This preliminary study examines the effects that additional time and a different composing medium have upon the overall quality of English language learners' written assessment tests. Sixteen intermediate-level students (L1 Cantonese), enrolled at a satellite campus of an American university in Asia, hand-wrote a 45-minute timed placement test within the confines of an educational setting. Several weeks later, based upon the identical topics provided for this first set of writings, these same students were allotted one week to complete a word-processed essay at their personal residences. Statistical analyses (t-tests) revealed mostly insignificant differences between the frequency counts of selected lexical features found within both sets of writings. By contrast, there were statistically significant differences in the number of reported grammatical errors. On the whole, when compared to the timed writings, the at-home essays contained fewer grammatical errors and received higher holistic scores, which supports the idea that participants used the additional time efficiently to improve overall textual quality.

Introduction

Learning to be a proficient writer in a second language is a demanding task for most students. These writers face a complex set of challenges, including mastering numerous lexical, grammatical, and syntactic skills, that must seem daunting at times and possibly insurmountable at others. At present, even though our theoretical views and pedagogical practices of second language writing have moved far beyond the expectation that students produce error-free texts, there are still persistent demands upon writers to produce compositions that display a level of grammatical and linguistic accuracy resulting in effective communication across a diverse audience of readers.

Acquiring this wide array of requisite skills is challenging enough; the challenge is compounded by the need for students to complete timed assessment tests as they pursue their academic goals. Within the environment of timed placement tests and classroom examinations, learners must quickly become personally and cognitively engaged with a topic and produce a composition that best demonstrates their overall writing skills. This method of testing is often criticized by instructors and students alike, for it is believed that texts produced in such an artificial environment, under the pressure of time constraints, might not be a reliable indicator of one's true ability (Sanders & Littlefield, 1975; Horowitz, 1986; Kroll, 1990). Although contemporary writing instruction essentially supports the process approach, with extended composing processes (planning, drafting, revising, and editing) used to arrive at a suitable end product (Seow, 2002), the principal tenets of this method are not fully utilized within a timed-test environment. Brand (1992) posited that this condition results in a writer producing only an initial draft for assessment, which runs contrary to the common practices followed by most writing practitioners. In order to avoid this theoretical and pedagogical contradiction, assessment should be bound to conditions that mirror current practices followed within the second language writing classroom.

Literature Review

There are relatively few studies that have examined the effect of time upon the quality of language learners' written texts. Two noted experiments are Kroll (1990) and the more recent study of Polio, Fleck, and Leder (1998). Kroll examined the effect of additional time upon the reduction of syntactic and discourse errors within essays produced by 25 advanced ESL students. This experiment was unique in that the corpus of essays to be analyzed was written under different conditions. Initially, each student completed an essay under strict examination conditions within a 60-minute time limit. This same group was then required to complete an additional essay over a 10- to 14-day period within their own homes. In a comparison of syntactic and discourse features within these compositions, it was discovered that although the at-home texts displayed fewer syntactic errors and received greater holistic scores, there were no statistically significant differences between the two sets of essays. Hence, Kroll argued that an additional amount of time did not affect the overall performance of these second language writers, which is contrary to the commonly held belief of educators and students alike.

In a related study, Polio et al. (1998) investigated the relationship between time and sentence-level error reduction within the essays of 64 undergraduate and graduate ESL learners. Unlike Kroll's study, there was an experimental and a control group; the experimental group was provided with weekly grammar instruction (articles, subject-verb agreement, word forms) and editing practice over the duration of a semester. In contrast, the control group received no additional grammar or editing instruction. In order to examine the possibility that time affected essay quality, participants wrote compositions during a half-hour writing session; two days later they were allowed a 60-minute revision period to correct any grammatical and lexical errors identified within their texts. Statistical analysis of these writings indicated that the experimental group did not perform at a higher level of linguistic accuracy than the control group. The findings from these two studies indicate that even when learners were provided with additional opportunities for editing grammatical and lexical errors, the resulting differences were not statistically significant; thus it can be said that, in the end, time was not an issue.

Even though the relationship between time and essay quality has been largely overlooked, there has, by contrast, been thorough investigation of the potential effects of electronic technology upon writers and writing quality. A growing body of research appears to support the assumptions that the use of a personal computer affects essay quality by having the potential to change the writing process, to make textual revisions easier to complete, and to affect the attitudes of student writers. The composition process has been forever changed due to the very nature of electronic technology. A fundamental component of every personal computer is a keyboard, which permits mechanical reproduction of text. Currently, a writer is able to select from a variety of fonts that standardize the written word and produce legible texts that are not dependent upon personal handwriting skills. Hyland (1990) and Murray (1995) both maintain that since writers are no longer burdened by the physical reproduction of text, there is potential for higher quality student essays.

As well, a great deal of research has focused upon the interaction between a student writer and the computer screen. It is conjectured that the visual representation of text within a computer monitor's limited space affects a writer's cognitive processes differently than other modes of writing do (Pennington, 1993a, 1996b). Visually limiting the portion of a writer's manuscript in view results in a "piecemeal" approach to composing that facilitates increased focus upon the viewed work; as a result, more attention is directed towards surface structures, including syntax and grammar (Dam, Legenhausen, & Wolff, 1990; Pennington, 1996a). This idea is supported by the majority of Neu and Scarcella's (1991) ESL undergraduates, who believed they received higher essay grades due to their increased attention to local errors, such as grammar, spelling, and punctuation, as a result of viewing limited amounts of written text. In addition, Clarke (1986) argued that viewing short segments of text narrows a writer's focus, bringing about an increased awareness of cohesion and coherence, which results in greater organization and higher quality written discourse.

Fundamentally, the editing features found within personal computers affect a writer's perception of text. Murray (1995) observed that participants in her study felt a sense of liberation, for they perceived that written texts were now "infinitely malleable and changeable"; as a consequence, textual revisions were not viewed as such a risky endeavor. A generally held supposition is that changes made within electronic texts are completed more quickly and easily than with traditional writing methods (Haas, 1988). At present, any changes deemed necessary by student writers can be made without completely rewriting a sentence, a paragraph, or an entire essay. Global revisions (organization) can be handled effectively by a cut-and-paste feature, while local changes (spelling, grammar, word choices) are addressed by means of deletion, spell checker, grammar checker, and thesaurus, features inherent within most word processing programs. Overall, a number of studies (Sommers, 1985; Tella, 1992a, 1992b; Murray, 1995; Sullivan & Pratt, 1996) have found that access to editing features resulted in students carrying out an increased number of revisions that led to improved quality in their writings.

Another possible consequence of composing with a word processing program is the effect upon a writer's attitude. There appears to be a consensus amongst a number of researchers that student writers are positively influenced when composing texts with a word processing program (Hawisher, 1987; Kurth, 1987; Bernhardt, Wojhan, & Edwards, 1988; Phinney, 1991; Pennington & Brock, 1992; Strauss & Frost, 1999). Primarily, this generally positive attitude towards electronic composing stems from familiarity with the medium. It is believed that experience is gained through exposure to other forms of electronic technology (e.g., games software). These previous encounters translate into greater personal confidence for users; accordingly, they are more likely to employ the full range of a computer's capabilities in creative ways to develop language and writing skills (Pennington, 1993b). This exposure to electronic technology grows ever larger, for at present there is an entire generation that has had access to computers from a very early age, which prompted Owston and Wideman's (1997) observation that "many young writers now craft their first sentences on the word processor" (p. 202). Although electronic technology is only one of the variables suspected of improving one's attitude, it is generally believed that a positive attitude is an effective predictor of a learner's success (Krashen, 1981).

Students frequently complain that there is little, if any, opportunity to reread and make changes to written tests and examinations within a timed writing environment. Based upon this observation and the results of previous research, the present study sought to investigate the possible effects that time (timed vs. at-home writing) and technique (pen and paper vs. word processing) have upon the quality of second language learners' written texts. Asian learners completed two sets of essays under different time constraints, methods, and environmental locations, which were then holistically rated by a team of evaluators. The frequency and accuracy of selected lexical and grammatical features were then measured to determine whether a different setting and technique affected the overall quality of second language learners' written compositions. The results of this investigation are found below.

The Present Study

As a prerequisite for admittance into the Ohio University, Hong Kong Programs, 16 new applicants (L1 Cantonese) were required to sit for a timed placement test to gauge their overall English composition skills. This test was limited to a 45-minute period using traditional writing methods (pen and paper) and was conducted within a university setting under strict examination conditions. Testees were provided with three prompts, each focusing upon a different writing mode (compare/contrast, descriptive, or persuasive), and were asked to select and write an essay based upon only one of these prompts. Upon completion, these essays were collected and transcribed manually to electronic text using a word processing program. All lexical and textual errors, including spelling, punctuation, and word forms, were reproduced in electronic form exactly as found within the original pen-and-paper assessment compositions. Within the confines of this environment and method of testing, student texts averaged approximately 344 words in length; the fewest number of words written was 198 while the greatest number totaled 650 words.

Eight weeks later, at the beginning of a new school semester, these same students were required to complete a second composition (hereafter referred to as the Diagnostic Test) based upon the exact topic that they had selected in their original assessment test. However, the conditions for this second test varied substantially from the initial test in regard to method, environment, and time. Each student was called upon to submit a word-processed essay composed within their private residence. In addition, students were given a much longer period of time, one full week, to complete their diagnostic essay. After this predetermined length of time, the class submitted hard and soft copies of their texts to the researcher. These word-processed, at-home compositions averaged roughly 466 words in length; the shortest text contained only 160 words, in contrast to the longest text, at 950 words. It must be stated that within this second procedure, no class time was set aside to discuss any elements of the diagnostic essay. Additionally, the class was notified that this writing task would not be graded nor included as part of their overall term mark.

Having both sets of essays in hand, a number of selected grammatical and lexical features were examined for frequency counts as well as correct or incorrect use. Before statistical analysis, the raw frequency counts of the various features were normalized (actual occurrences of a feature divided by the number of words in the writing sample, multiplied by 100) to take into account variances in the overall textual length of the essays. Then statistical analysis (t-tests) was conducted to determine whether the frequencies and errors of selected lexical and grammatical features differed between a writer's Placement and Diagnostic Test. Next, two experienced evaluators assigned holistic grades, between 1 and 10, to the individual essays. Once a grade was assigned to an essay, the raters were asked to write a brief summary to explain their final grading decision (Appendix A). They were specifically asked to focus their comments upon both global (organization, ideas) and local features (spelling, grammar) within the students' compositions. In order to ascertain whether a suitable level of inter-rater reliability existed, a Pearson correlation was computed, and the results confirmed a high level of reliability (r = .91) between the two evaluators. Finally, a questionnaire was sent electronically to all participants in order to gather general information and to document the role of electronic technology in their editing process (Appendix B). The following section discusses the results for the particular features under examination.
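To make the normalization and significance-testing procedure concrete, the short Python sketch below illustrates the calculation on invented counts for 16 writers; the numbers, variable names, and the use of SciPy's paired t-test are assumptions of this illustration rather than the study's actual data or tooling.

    # Illustrative sketch only: the counts below are invented, not the study's data.
    from scipy import stats

    # Hypothetical raw counts of one feature (e.g., spelling errors) and total
    # words for each of the 16 writers, per test.
    placement_counts  = [6, 8, 3, 5, 9, 7, 4, 6, 10, 5, 7, 8, 6, 4, 9, 7]
    placement_words   = [344, 300, 250, 400, 198, 500, 350, 320,
                         280, 410, 360, 330, 450, 290, 650, 310]
    diagnostic_counts = [0, 1, 0, 0, 5, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0]
    diagnostic_words  = [466, 500, 160, 600, 340, 700, 450, 480,
                         400, 520, 610, 430, 950, 380, 700, 315]

    def normalize(counts, words):
        """Occurrences per 100 words: count / total words * 100."""
        return [c / w * 100 for c, w in zip(counts, words)]

    placement_norm  = normalize(placement_counts, placement_words)
    diagnostic_norm = normalize(diagnostic_counts, diagnostic_words)

    # Paired t-test, since the same 16 writers produced both essays.
    t_stat, p_value = stats.ttest_rel(placement_norm, diagnostic_norm)
    print(f"t({len(placement_norm) - 1}) = {t_stat:.2f}, p = {p_value:.3f}")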

Results

The intent of this study was to examine the effect that differing time and media have upon the quality of second language learners' written compositions. The findings support the hypothesis that time and technique did influence at least one specific area under examination, namely the number of errors committed by second language learners within two types of assessment tests.

Table 1. The mean and standard deviation for the number of words, sentences, and
spelling errors within the Placement and Diagnostic tests

Feature                         Placement Test    Diagnostic Test
Number of words                 M = 344.00        M = 466.81
                                SD = 134.17       SD = 240.29
Number of sentences             M = 24.25         M = 30.81
                                SD = 11.09        SD = 15.96
Words per sentence              M = 14.56         M = 15.40
                                SD = 2.50         SD = 3.59
Spelling errors (actual)        M = 6.50          M = 0.43*
                                SD = 4.64         SD = 1.26
Spelling errors (normalized)    M = 1.95          M = 0.16*
                                SD = 1.21         SD = 0.50

* t-test results: p < .001

The first table illustrates that two of the five features examined are statistically significant: Spelling errors (actual) [t(15) = 4.578, p < .001] and Spelling errors (normalized) [t(15) = 4.790, p < .001]. In addition, the table shows that Number of words, although not statistically significant, is trending towards significance [t(15) = -1.94, p = .07].

Table 2. The mean and standard deviation of lexical features
within the Placement and Diagnostic Tests

Feature                  Placement Test    Diagnostic Test
Cohesive devices         M = 3.14          M = 3.93
                         SD = 1.38         SD = 1.20
Articles                 M = 6.33          M = 6.16
                         SD = 2.74         SD = 1.70
Pronouns                 M = 5.87          M = 5.50
                         SD = 2.72         SD = 2.98
Result clauses           M = 0.42          M = 0.96
                         SD = 0.47         SD = 1.07
Adjective clauses        M = 3.84          M = 3.65
                         SD = 1.02         SD = 2.01
Adverb phrases           M = 0.52          M = 0.76
                         SD = 0.41         SD = 0.57
Prepositional phrases    M = 3.44          M = 3.81
                         SD = 0.92         SD = 1.53
Synonymy                 M = 1.16          M = 1.67*
                         SD = 0.40         SD = 1.03
Antonymy                 M = 0.15          M = 0.14
                         SD = 0.26         SD = 0.36
Demonstratives           M = 1.04          M = 0.97
                         SD = 0.74         SD = 0.96

* t-test results: p < .05

Table 2 indicates statistically insignificant results for the majority of the ten features examined. The single exception is Synonymy, which reached significance [t(15) = -2.13, p = .04], while one other feature, Result clauses, approached significance [t(15) = -1.98, p = .06].

Table 3. The mean and standard deviation of grammatical errors
within the Placement and Diagnostic Tests

Error category              Placement Test    Diagnostic Test
Subject-verb agreement      M = 1.17          M = 0.49**
                            SD = 0.59         SD = 0.43
Word forms                  M = 3.33          M = 2.35*
                            SD = 1.12         SD = 0.89
Word choices                M = 1.67          M = 0.89**
                            SD = 0.81         SD = 0.77
Prepositions                M = 1.11          M = 0.53*
                            SD = 0.91         SD = 0.38
Pro/noun agreement          M = 0.26          M = 0.34
                            SD = 0.41         SD = 0.21
Missing verbs               M = 0.58          M = 0.15**
                            SD = 0.33         SD = 0.18
Infinitive use              M = 0.64          M = 0.22*
                            SD = 0.59         SD = 0.30
Article misuse/omission     M = 2.98          M = 2.14*
                            SD = 0.77         SD = 0.94

* t-test results: p < .01
** t-test results: p < .001

In Table 3, it is clear that all of the listed error categories are statistically significant except for Pro/noun agreement [t(15) = 1.45, p = .16]. Within these recorded grammatical errors, almost half of the categories (Subject-verb agreement, Word choices, and Missing verbs) are highly significant (p < .001). It is notable that these particular features appear to be strongly influenced by the differences in time and method, which in turn affected the final quality of the essays.

Finally, in order to determine the level of agreement between the rating team, a Pearson correlation test was conducted, which indicated a high level of agreement between the members (r = .91). Prior to conducting this test, the entire set of essays had been holistically graded, and it was perhaps no coincidence that the grades assigned by the two experienced EFL instructors showed the three shortest Diagnostic essays receiving lower scores than the corresponding Placement essays. Furthermore, it was found that the essays that received the lowest scores also recorded the greatest number of grammatical errors.
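As a brief illustration of this inter-rater reliability check, the sketch below computes a Pearson correlation between two raters' holistic scores; the scores are invented for the example, not the study's actual ratings.

    # Illustrative sketch only: invented holistic scores (1-10) from two raters.
    from scipy import stats

    rater1 = [6, 7, 5, 8, 4, 9, 6, 7, 5, 8, 7, 6, 9, 5, 7, 8]
    rater2 = [6, 8, 5, 7, 4, 9, 7, 7, 6, 8, 7, 5, 9, 5, 8, 8]

    # Pearson product-moment correlation as an index of inter-rater agreement.
    r, p = stats.pearsonr(rater1, rater2)
    print(f"Inter-rater reliability: r = {r:.2f} (p = {p:.3f})")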

Discussion

Although these findings indicate that time and technique did not produce statistically significant differences in the reported frequency counts of assorted lexical features, it does appear that these varying conditions produced significant differences in the number of grammatical errors and in the holistic scores across the two methods of assessment testing.

Within this study, it is apparent that electronic technology played a crucial role in the reduction of spelling errors found within the second assessment test. It was discovered that students committed a total of 140 spelling errors within the Placement Test, substantially more than the 7 made within the Diagnostic Test. What is noteworthy is that 5 of the 7 recorded inaccuracies within the at-home essays (over 70%) were the sole responsibility of Student 6; however, it is not known why these mistakes were not corrected. In general, correcting orthographic errors is usually a straightforward procedure: first, a word processing program immediately draws attention to a writer's spelling error by underlining it in red, and second, a simple click rectifies the mistake. Unanimously, Student Questionnaire (Appendix B) respondents indicated that they made use of a spellchecker during the editing process. This ancillary step reduced the overall number of spelling errors, which ultimately influenced one evaluator's final grading decisions within this study: Evaluator 1 was particularly mindful of poor spelling, and accordingly, this was the main reason for assigning lower holistic scores when comparing the two sets of writings. Still, this point is not unique to the raters within this study alone; the findings of Santos (1988) and Engber (1995) corroborate that spelling errors and overall essay quality are inextricably linked.

A point must be made about certain participants' lack of diligence, which affected one category of comparison: the number of words written per essay. It would be counter-intuitive for an at-home composition to contain fewer words than an essay completed within a timed environment, and the overall findings bear this out; however, substantially fewer words did appear within several individual writing samples. For example, Student 5's Diagnostic Test was notable because it was 52% shorter than the Placement Test and contained the fewest words written within the entire corpus of essays submitted for analysis (160 vs. 340 words); Student 2 produced 35% fewer words within the second assessment test than the first (340 vs. 445 words); and Student 16 wrote 12% fewer words on the Diagnostic (315 vs. 355 words). When these same learners were questioned about the discrepancies in performance, the overwhelming response was summed up by Student 5: "I forgot about the assignment until the last minutes just before the class." This comment calls into question the level of seriousness a number of writers brought to the second assignment, which in turn affects the end results. Granted, the Placement Test results had much greater significance for a learner's future aspirations. In all probability, achieving an acceptable grade provided the opportunity for enrollment in a university degree program, while the results of the Diagnostic Test bore no real consequences for a writer. Ultimately, in these few examples, measuring the effect that time and method had upon overall composition length is not a matter of skill or the proper use of time, but rather of a lack of attentiveness and commitment to the task at hand.

The frequencies of most lexical features under examination within this study appear to be unaffected by additional time, a different technique, and a distinct environment. In fact, statistical analysis reveals that only a single feature, Synonymy, occurred significantly more often in the at-home writings. It is conjectured that the unique combination of technology and time is a likely explanation for this occurrence. Referring again to the Student Questionnaire replies, two crucial points were reported by the respondents: the majority indicated that they repeatedly made use of a thesaurus within their word processing program, and they had the luxury of extra time to edit their work that was not afforded on the first test. In general, writing assessment tests with electronic technology facilitates the identification and utilization of appropriate synonyms, which manifests in a more diverse selection of word choices. Lexical diversity is an important component of writing, and research findings have been consistent: more diverse vocabularies differentiate higher-rated from lower-rated essays (Grant & Ginther, 2000; Jarvis, 2002). As a result, implementing word processing programs for assessment testing is crucial to a writing student's final vocabulary choices.

Unlike the insignificant results reported for specific lexical features, a comparison of the grammatical errors identified within the two sets of writings resulted in highly significant differences. In particular, the categories of Subject-verb agreement, Word choices, and Missing verbs showed the highest level of significance (p < .001). Once again, it is inferred that these results are the corollary of varying time, method, and setting. In every case, respondents to the Student Questionnaire replied that they had no time whatsoever to edit their first assessment text, while they spent anywhere from 15 to 60 minutes of additional time reviewing their at-home writings. Furthermore, several participants commented about the pressure upon them to perform quickly and accurately within the timed test, which they believed to be counter-productive to the writing task. Since the second assignment was given at the initial class on the first day of a new semester, these newly enrolled students had not received formal grammar instruction or corrective feedback that could account for the significant differences reported. With little evidence that the writers could have drawn upon newly acquired grammatical knowledge to explain these editing differences, it may be concluded that these learners possessed the innate abilities and skills to edit their work correctly when not hindered by the constraints of time, technique, and environment.

Conclusion

In spite of the fact that the process approach continues to dominate the second language writing classroom, educators persist in using timed assessment tests to gauge a learner's overall composition skills. With the intention of examining this apparent contradiction, second language students completed two sets of essays under varying time, method, and environment. Statistical analysis indicated that the frequencies of selected lexical features were roughly equivalent across both sets of compositions, whereas a higher level of grammatical accuracy within the at-home writings did result in statistically significant differences. Unlike previous studies, which argued that students did not take full advantage of additional time, the participants within this study effectively utilized the extra time to produce higher quality writings with fewer grammatical errors than those composed by traditional means within the confines of an institutional setting. Perhaps it is time that educators based student assessment upon the contemporary practices found within most second language writing classrooms.

References

Bernhardt, S.A., Wojhan, P., & Edwards, P. (1988). Teaching composition with computers: A program evaluation. (ERIC Document Reproduction Service No. 295191).

Brand, A. G. (1992). Writing assessment at the college level. (ERIC Document Reproduction Service No. ED345281).

Clarke, D.F. (1986). Computer-assisted reading--what can the machine really contribute? System, 14(1), 1-13.

Dam, L., Legenhausen, L., & Wolff, D. (1990). Text production in the foreign language classroom and the word processor. System, 18(3), 325-334.

Engber, C. (1995). The relationship of lexical proficiency and the quality of ESL compositions. Journal of Second Language Writing, 4(2), 139-155.

Grant, L., & Ginther, A. (2000). Using computer-tagged linguistic features to describe L2 writing differences. Journal of Second Language Writing, 9(2), 123-145.

Haas, C. (1988). How the writing medium shapes the writing process: Effects of word processing on writing. (ERIC Document Reproduction Service No. ED309408)

Hawisher, G.E. (1987). The effects of word processing on the revision strategies of college freshmen. Research in the Teaching of English, 21(2), 145-159.

Horowitz, D. (1986). Process not product: Less than meets the eye. TESOL Quarterly, 20(1), 141-144.

Hyland, K. (1990). Literacy for a new medium: Word processing skills in EST. System, 18(3), 335-342.

Jarvis, S. (2002). Short texts, best-fitting curves and new measurement of lexical diversity. Language Testing, 19(1), 57-84.

Krashen, S. D. (1981). Second language acquisition and second language learning. New York: Pergamon Press.

Kroll, B. (1990). What does time buy? ESL student performance on home versus class compositions. In B. Kroll (Ed.), Second language writing: Research insights from the classroom (pp. 140-154). New York: Cambridge University Press.

Kurth, R.J. (1987). Using word processors to enhance revision strategies during student writing activities. Educational Technology, 27(1), 13-19.

Murray, D. (1995). Knowledge machines: Language and information in a technological society. New York: Longman.

Neu, J., & Scarcella, R. (1991). Word processing in the ESL writing classroom: A survey of student attitudes. In P. Dunkel (Ed.), Computer-Assisted language learning and testing: Research issues and practices (pp. 169-187). New York: Newbury House.

Pennington, M. C. (1993a). Exploring the potential of word processing for non-native writers. Computers and the Humanities, 27(3), 149-163.

Pennington, M.C. (1993b). Modeling the student writer's acquisition of word processing skills: The interaction of computer, writing, and language media. Computers and Composition, 9(4), 59-79.

Pennington, M. C. (1996a). The power of the computer in language education. In M.C. Pennington (Ed.), The power of CALL (pp. 1-14.). Houston, TX: Athelstan.

Pennington, M. C. (1996b). Writing the natural way: On computer. Computer Assisted Language Learning, 9(2-3), 125-142.

Pennington, M.C., & Brock, M. (1992). Process and product approaches to computer- assisted composition. In M. Pennington & V. Stevens (Eds.), Computers in applied linguistics: An international perspective (pp.79-109). Philadelphia: Multilingual Matters.

Phinney, M. (1991). Computer-assisted writing and writing apprehension in ESL students. In P. Dunkel (Ed.), Computer-Assisted language learning and testing: Research issues and practices (pp. 189-204). New York: Newbury House.

Polio, C., Fleck, C., & Leder, N. (1998). "If I only had more time": ESL learners' changes in linguistic accuracy on essay revisions. Journal of Second Language Writing, 7(1), 43-68.

Owston, R.D., & Wideman, H.H. (1997). Word processors and children's writing in a high computer access setting. Journal of Research on Technology in Education, 30(2), 202-220.

Sanders, S.E., & Littlefield, J.H. (1975). Perhaps test essays can reflect significant improvement in freshman composition: Report on a successful attempt. Research in the Teaching of English, 9(2), 145-153.

Santos, T. (1988). Professors' reactions to the academic writing of nonnative-speaking students. TESOL Quarterly, 22(1), 69-90.

Seow, A. (2002). The writing process and process writing. In J.C. Richards & W.A. Renandya (Eds.), Methodology in language teaching: An anthology of current practice (pp. 315 - 320). Cambridge: Cambridge University Press.

Sommers, E.A. (1985). Integrating composing and computing. In J.L. Collins & E.A. Sommers (Eds.), Writing on-line: Using computers in the teaching of writing (pp. 3-10). Upper Montclair, NJ: Boynton/Cook.

Strauss, J., & Frost, R.D. (1999). Selecting instructional technology media for the marketing classroom. Marketing Education Review, 9(1), 11-20.

Sullivan, N., & Pratt, E. (1996). A comparative study of two ESL writing environments: A computer-assisted classroom and a traditional oral classroom. System, 29(4), 491-501.

Tella, S. (1992a). The adoption of international networks and electronic mail into foreign language education. Scandinavian Journal of Educational Research, 36(4), 303-312.

Tella, S. (1992b). Talking shop via e-mail: A thematic and linguistic analysis of electronic communication. (ERIC Document Reproduction Service No. ED352015).


Appendix A

Appendix B


About the Author

Roger Kenworthy, M.Ed., is the Student Services Coordinator for the Hong Kong Programs of Ohio University.
© Copyright rests with authors. Please cite TESL-EJ appropriately.

Editor's Note: The HTML version contains no page numbers. Please use the PDF version of this article for citations.