I'm tasked today with thinking about how ILC (Independent Learning Center) activities can be developed to improve reading skills. The textbook Reading Explorer is being used with the students we are teaching, and the text itself appears authentic, based as it is on National Geographic articles and pictures. Certainly many readers of this blog will relate to childhoods spent in anticipation of the newest National Geographic appearing in the mailbox, to be paged through by kids first for the provocative photos and then, if interest was sufficiently piqued, for background reading of the text. Many of this generation have boxes of old National Geographics stored in an attic somewhere, and with all the moving house my wife and I have done, our National Geographics have followed us.
One of the great attractions of living in the UAE is the relative ease of travel in the region and farther afield. The geographic orientation of Reading Explorer should play nicely to this propensity of Emiratis and expats alike to travel. I can imagine using Google Earth and Google Maps to have students plan trips to some of the places mentioned in the readings. What if you wanted to visit Machu Picchu, or orangutans in Borneo, or the caves in Oman (whose counterpart cave chambers can also be reached in Mulu, just south of Brunei in Borneo)? Google Earth will fly you there, and there are innumerable web resources to help students envisage travel to any part of the world. Each section in Reading Explorer could have a different student give a presentation on a real or imagined trip. An ebook could be created in any number of spaces online, to be utilized and improved on by current and subsequent students.
Should IELTS be taught with or without activating schema?
At my last posting, I would sometimes get into discussions with colleagues about whether, when teaching IELTS preparation, it was best to replicate the test experience for students or to activate their schema beforehand. The colleagues with whom I would have this discussion would hit their students with a reading out of the blue, have them take practice tests on topics in which they might have little interest and which were often beyond their comprehension, and rely on a debriefing lesson at the end to work out what went wrong. My approach was different. I would spend a class in advance in which we would Google the topic and find information, and then have them take the test, with a subsequent debrief. I was told that by injecting prior knowledge and understanding of the topic, I was compromising the experience, since I was giving students an opportunity to prepare the material in advance that they wouldn't actually have when taking the real test. I felt that at the preparation stage this would help the students to better envisage the task, even if they would have to leave that step out when they actually took the test.
Besides activating all available schema on reading topics, I think it is important that students read not only what they are assigned to read, but also that they read for enjoyment, out of their own interest.
Task-based work in the ILC and student accountability for independent learning there
For students to work on their own interests in an ILC setting, each student must be accountable for making progress toward agreed-upon goals. What I am picturing here is a program in the ILC where students are responsible for ticking off items in their portfolio. That is, each student must submit certain deliverables and budget time in the ILC accordingly. The process starts with all students signing a EULA, or end-user license agreement, wherein all parties agree on what is acceptable behavior in an ILC and where the line is for unacceptable behavior. This agreement would acknowledge for all concerned the reason that the ILC has been established for the students, and give the parameters they must meet in order to maintain their privilege of using it.
After that, we specify various categories of deliverables the students are expected to produce. This would be agreed-upon evidence of performance in areas such as:
- Project work; for example, given a set of readings for a term, students are each assigned one location, which they must explain to the others: how they would get there, and what they would find on arrival (attractions besides the ones described in the book). Ideally students would make their presentations at the outset of the KWL approach to the reading for that unit (the K part, or prior knowledge or schema part).
- Completion of activities associated with the reading, set by the teachers of a course for the given readings; this could involve work with online exercises, critiques of films associated with the readings, or recommendations and critiques of supplementary multimodal artifacts online, such as YouTube videos or other visualizations of the material.
- Evidence of free reading, such as book reports or exercises completed on graded readers, or evidence of reading on a vocational topic, whether in print or online.
Learning vocabulary through text analysis
Vocabulary is an important focus of Reading Explorer. Specifically, this entails work with word links (roots, prefixes, and suffixes), partnerships (collocations and phrases), and usage.
One important aspect of vocabulary development is student ownership of the words, and for this reason students are encouraged to keep lists of new words they encounter, with examples of word links, partnerships, and usage. I used to share ideas, and an office in Oman, with Tom Cobb, who was working at the time on sets of HyperCard stacks that he eventually developed into the Compleat Lexical Tutor (http://lextutor.ca). This is a comprehensive set of word exploration tools that can enable student learning through exploration of the aforementioned features and interrelationships among words.
Tom and I were working at the time with concordancing in the SRC (student resource center) at SQU (Sultan Qaboos University) in Muscat. I developed a technique, which I suggested as a replacement for gap fills, whereby bundles of 4 or 5 instances of concordance output were presented for a set of words, and in each bundle the word concordanced was blanked. The task for students was, rather than find the word from the list that fit any given blank in a passage of continuous prose, to find the word from the list that fit the 4 or 5 contexts in each bundle. I argued, on the basis of research, that this was a more doable and authentic task for students than the gap-fill exercises sometimes quickly contrived by language teachers to help students with vocabulary (Stevens, 1991).
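For anyone who wants to try something similar with today's tools, the sketch below (in Python) shows the general idea: pull a handful of concordance lines for each target word from whatever text is at hand, blank out the key word, and present each bundle as a single puzzle. The corpus file name and the word list are placeholders, and this is only my rough illustration of the technique, not the materials we actually used.

    # A minimal sketch of the concordance-bundle exercise described above.
    # "corpus.txt" and the word list are placeholders.
    import random
    import re

    def concordance_lines(text, word, width=40, max_lines=5):
        """Collect up to max_lines snippets of context around each occurrence of word."""
        lines = []
        for m in re.finditer(r'\b' + re.escape(word) + r'\b', text, re.IGNORECASE):
            left = text[max(0, m.start() - width):m.start()].replace('\n', ' ')
            right = text[m.end():m.end() + width].replace('\n', ' ')
            lines.append(f"...{left}______{right}...")  # key word blanked out
        random.shuffle(lines)
        return lines[:max_lines]

    def make_bundles(text, wordlist, per_bundle=5):
        """One bundle of blanked concordance lines per target word."""
        return {w: concordance_lines(text, w, max_lines=per_bundle) for w in wordlist}

    if __name__ == "__main__":
        text = open("corpus.txt", encoding="utf-8").read()
        wordlist = ["familiar", "extreme", "remote"]   # hypothetical target words
        print("Word list:", ", ".join(wordlist))
        for word, bundle in make_bundles(text, wordlist).items():
            print("\nWhich word from the list fits all of these contexts?")
            for line in bundle:
                print("  " + line)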
Learning vocabulary through experimentation with concordancing
KWIC concordances can be used to identify collocations. KWIC means 'key word in context'. It's the kind of concordance output in which the lines can be sorted on the words immediately left or right of the 'key word' (i.e. the word or string being concordanced), so that recurring patterns can be identified. Sometimes much can be learned from playing with this feature, though this is also limited by the corpus of text used.
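To make the idea concrete, here is a minimal sketch of KWIC output sorted on the first word to the right of the key word; the corpus file is again a placeholder, and this is an illustration of the principle rather than how any particular concordancer is implemented.

    # A rough illustration of KWIC output sorted on the first word to the
    # right of the key word; "corpus.txt" and the key word are placeholders.
    import re

    def kwic(text, key, width=30):
        rows = []
        for m in re.finditer(r'\b' + re.escape(key) + r'\b', text, re.IGNORECASE):
            left = text[max(0, m.start() - width):m.start()].replace('\n', ' ')
            right = text[m.end():m.end() + width].replace('\n', ' ')
            first_right = (right.split() or [''])[0].lower()
            rows.append((first_right, left, m.group(), right))
        # Sorting on the word immediately to the right of the key word groups
        # recurring patterns (potential collocations) together.
        rows.sort(key=lambda r: r[0])
        return rows

    if __name__ == "__main__":
        text = open("corpus.txt", encoding="utf-8").read()
        for _, left, key, right in kwic(text, "familiar"):
            print(f"{left:>30} [{key}] {right}")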
It is possible to purchase concordance programs and to either tap into existing corpora or accumulate one's own. Brown and LOB are two well-known corpora of over a million words each; in general, the more specific the corpus, the fewer words it contains and the more limited its output. For example, at SQU we in the language department requested and accumulated texts being written by science professors for our students. Although this work predated the open movement, as in OER (open educational resources), we did 'share' materials, and I was surprised to hear Michael Barlow reference our corpus in a presentation he gave on text analysis at a TESOL conference only a few years ago. This relatively limited text base allowed us to create language learning materials for our students, and it also produced some surprises, such as cases where a word introduced in a subject-matter text as being important in fact appeared only once in the entire corpus.
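A quick frequency profile is enough to surface that kind of surprise. The sketch below assumes a plain-text version of such a home-grown corpus and a few hypothetical 'important' words to check; both the file name and the words are placeholders.

    # A quick frequency profile of a home-grown corpus, to spot "important"
    # words that in fact occur only once or twice. File name and word list
    # are placeholders.
    import re
    from collections import Counter

    def word_frequencies(path):
        text = open(path, encoding="utf-8").read().lower()
        return Counter(re.findall(r"[a-z']+", text))

    if __name__ == "__main__":
        freq = word_frequencies("science_corpus.txt")
        for word in ["osmosis", "velocity", "catalyst"]:   # hypothetical examples
            print(f"{word}: {freq[word]} occurrence(s) in the corpus")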
Here is a sampling of corpora available through Tom Cobb's Compleat Lexical Tutor:
There has been a lot written and researched on using concordancing in ESOL. Nik Peachey has an article online, for example (2005). I have a page on text analysis that I have not maintained all that diligently (http://vancestevens.com/textanal.htm, last updated in 2007), but I have accumulated more recent links in my Delicious account, here: http://delicious.com/tag/concordancer.
My Delicious links point to PIE (http://phrasesinenglish.org/), which searches the BNC (the British National Corpus, of over 100 million words). PIE pulls random hits from the BNC and displays them in the context in which they appear. This gives a good overview of word usage, but because this is not KWIC it is not easy to find productive collocations. There are other tools for querying the BNC, available from http://ucrel.lancs.ac.uk/tools.html, and another is pointed to via the UCREL site here: http://bncweb.info/.
Just the Word (http://www.just-the-word.com/) is one such tool, and it appears utilitarian in that it pulls out many collocations in its primary analysis (there is even a tool here to view the results in Wordle). For example, here is an analysis of collocations on the word "familiar" and an alternative visualization in Wordle:
In these examples I'm wondering what happened to the expected "familiar with", but we might have to learn more about how the "clusters" work. There is also a cousin to Wordle available online, possibly worth checking for any useful affordances: Concordle, http://folk.uib.no/nfylk/concordle/.
There are also concordance programs that can be downloaded for free and run locally. One is Laurence Anthony's AntConc (http://www.antlab.sci.waseda.ac.jp/software.html) and another is TextSTAT (Simple Text Analysis Tool) from the Freie Universität Berlin, http://neon.niederlandistik.fu-berlin.de/en/textstat/. Both of these programs come as exe files that run as-is, without installation on users' computers. The big catch is that both require users to provide their own corpora. TextSTAT allows users to pull one in off the Internet. Note in the screenshot that you have the option to send the spider out for text on just one page, several pages (I think this means from one or more levels of hyperlinks), or the entire domain or server. This could result in quite a lot of text if not used judiciously (although in the end, lots of relevant text is what you want).
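For the curious, here is roughly what that harvesting step amounts to, reduced to its simplest form: fetch a page, strip the markup, and save the plain text as a corpus. This is emphatically not TextSTAT's own code (it follows no links and does only the crudest tidying); the URL and output file name are placeholders.

    # A minimal sketch of building a small corpus from web pages, in the
    # spirit of TextSTAT's web-spider option. URLs and file name are
    # placeholders; no link-following is attempted here.
    import re
    from urllib.request import urlopen

    def page_text(url):
        """Fetch a page and crudely strip tags to leave plain text."""
        html = urlopen(url).read().decode("utf-8", errors="ignore")
        html = re.sub(r"(?s)<(script|style).*?</\1>", " ", html)  # drop scripts/styles
        text = re.sub(r"<[^>]+>", " ", html)                      # drop remaining tags
        return re.sub(r"\s+", " ", text)

    if __name__ == "__main__":
        urls = ["http://example.com/"]    # placeholder page(s) to harvest
        corpus = "\n".join(page_text(u) for u in urls)
        with open("corpus.txt", "w", encoding="utf-8") as f:
            f.write(corpus)
        print(f"Saved {len(corpus.split())} words to corpus.txt")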
What to do about "the big catch" - assembling or finding a corpus
One obvious source of text is the Internet itself. Yet another online concordancer with big promises in this regard is http://www.webcorp.org.uk/live/, which "lets you access the Web as a corpus - a large collection of texts from which examples of real language use can be extracted." Inputting my test strings 'familiar' and 'extreme' here surprisingly produced no results (I even tried the word Google using a Google database; no hits). However, I had better luck when I signed up (free) to use the WebCorp linguists' tool at http://wse1.webcorp.org.uk/. Still, I didn't see how to sort the output for 'familiar' on collocates to the right, but the pattern 'familiar with' at least appeared to dominate the results.
On the other hand, I found Mark Davies's collection of corpora assembled at BYU, http://view.byu.edu/. You choose a corpus, enter it, then put in your search term, and it extracts the data. Again I was unable to sort for collocates on the "familiar" search term, but the databases available here seem extensive.
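When an online tool won't sort on right-hand collocates, a quick local check of which words most often follow 'familiar' is easy enough, assuming you have some text of your own to hand (the corpus file below is again a placeholder):

    # Count the words that immediately follow a key word, as a rough local
    # check on right-hand collocates. "corpus.txt" is a placeholder.
    import re
    from collections import Counter

    def right_collocates(text, key):
        pattern = r'\b' + re.escape(key) + r"\s+([a-z']+)"
        return Counter(m.group(1).lower() for m in re.finditer(pattern, text, re.IGNORECASE))

    if __name__ == "__main__":
        text = open("corpus.txt", encoding="utf-8").read()
        for word, count in right_collocates(text, "familiar").most_common(10):
            print(f"familiar {word}: {count}")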
Where to from here?
I found more programs available besides the ones mentioned here.
I was looking today for an ideal tool that would be freely available online, either web-based or running from exe files on a local computer, that would point to a ready-made corpus, or one compiled at a click from Web URLs, AND that would offer some of the common text analysis features of commercial programs, most usefully simple alphabetical sorts on the words immediately left and right of the key word in the KWIC output. I found a number of candidates meeting one or more of these criteria, but no one-stop shop for what I was looking for.
When I get more time I might look more closely at:
- http://lextutor.ca - there might be a tool in there close to what I am looking for
- TextSTAT, to see how it compiles corpora from Web resources
- Mark Davies's collection of resources at BYU, http://view.byu.edu/
References