
An Invitation to CALL

Foundations of Computer-Assisted Language Learning


Unit 2: Finding and Evaluating CALL Resources

OVERVIEW    

Goal 2, Standard 1 of the TESOL Technology Standards (2008, 2011) states: "Language teachers identify and evaluate technological resources and environments for suitability to their teaching context." In line with that standard, identifying and evaluating resources and environments is the general topic of Unit 2. Relevant resources can include CALL courseware, online materials for teachers, online materials for students, and resources for connecting teachers and students. The focus here is not on the resources themselves--later units will provide many examples for English language learning and some for other languages as well. Rather, we discuss the process of identifying candidate resources and, more importantly, evaluating their suitability for your curriculum and students.

IDENTIFYING RESOURCES

Finding suitable resources is not an easy task despite the increasingly large amount of material available for English and other commonly taught languages. Dedicated resources (those designed specifically for language learning), both free and commercial, abound. To locate desired materials on the web, good searching skills are needed. Becoming familiar with Google's more advanced search techniques (http://www.google.com/advanced_search?hl=en) and trying a range of search terms rather than just the first one that comes to mind will usually yield more favorable results than a basic search using a broad category term like "ESL". Other sources include professional organizations. For example, the TESOL CALL Interest Section has a virtual library with hundreds of tagged resources: http://www.diigo.com/user/call_is_vsl. Although the focus is on ESL, some of the materials, tools, and activities there can be used for foreign language teaching and learning as well. As noted in the previous unit, for those interested in tutorial software, the TESOL CALL Interest Section Software List has been archived (last updated 2011) by Deborah Healey at http://www.eltexpert.com/softlist/index.html, and the CALICO Journal Software Reviews are available for English and other languages on the journal website: http://www.equinoxpub.com/journals/index.php/CALICO. The reviews can be found by issue in the archives, but many can also be located using the search term "software review" on the journal site. Another source of CALL activities and materials is book publishers' websites. A number of textbooks published in recent years include additional online resources for teachers and supplementary materials and exercises for students. Some of these can be useful even if you are not using the textbook. For examples from Pearson see https://www.pearson.com/english/professional-development/resources.html.
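As an illustration of what "more advanced" searching can look like, here is a minimal sketch (in Python, purely for illustration) that assembles a more targeted query than a single broad term like "ESL". The operators shown (quoted phrases for exact matches, site: to restrict the domain, a leading minus to exclude a term) are standard search operators; the example query itself and the choice to build the URL in code are just assumptions for the example.

# Illustrative sketch: composing a targeted search query and turning it into a
# Google search URL. The query string is invented; substitute terms relevant
# to your own curriculum and students.
from urllib.parse import urlencode

query = '"information gap" listening activity intermediate ESL site:edu -quiz'
url = "https://www.google.com/search?" + urlencode({"q": query})
print(url)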

IDENTIFYING ENVIRONMENTS

The overall technology environment includes both the local environment and the online environment (the emerging mobile environment has elements of both). The local environment is that of the user--the teacher and students--and includes the institutional setting, the home setting, mobile options such as MP3 players, smartphones, and tablets, and other settings such as libraries or Internet cafes in proximity to the institution or students' homes. The local environment consists of a number of factors.

The online environment has some of these same considerations plus some additional ones.

The mobile environment: apps, apps, and more apps...
The rise of smartphones and tablets like iPads and their Android and Windows competitors has opened up a new environment for learning that is "anytime, anywhere." Apps come in various types: some are akin to the disk-based programs of the past, allowing mobile learning and review of vocabulary, grammar, and pronunciation. Others, like Whatsapp (http://www.whatsapp.com/) and the many mobile versions of social networking sites, are primarily for taking communication and social interaction beyond the level of the phone call and text message. Although they are convenient, ubiquitous, and often free or very inexpensive, apps need to be treated no differently from other potential learning tools. First, you need to find them. Then, candidates must be evaluated judgmentally for their potential value as language learning supports for your teaching context. The section below provides guidelines for doing so.

Understanding any environment is a key step in determining what resources you will actually be able to use effectively. A 2013 paper I co-authored with Glenn Stockwell, openly available online, discusses general aspects of the mobile environment and offers a set of principles for teachers, developers, and learners to consider in implementing mobile language learning: http://www.tirfonline.org/wp-content/uploads/2013/11/TIRF_MALL_Papers_StockwellHubbard.pdf.

EVALUATING COURSEWARE (INCLUDING APPS)

The evaluation component of this unit begins by looking at the sub-field of tutorial CALL from the perspectives of both end users: teachers and students. It introduces the term courseware, which refers to software that is used to support formal language learning. In practice, courseware has been used to refer to everything from complete software packages that can be used without a teacher to software that is just a part of a language learning course, sometimes a minor or optional supplementary part. We will use the term interchangeably with tutorial software to include any software designed for language learning purposes. Although CALL courseware has arguably lost its dominant position in CALL over the past decade, it is still widely used and continues to be a significant part of the field. At the very least, it is worth exploring so that you can make an informed decision about whether to incorporate it in your own teaching or recommend it to your students for independent study. It is worth noting that more and more free courseware is showing up on the web on institutional sites or those supported by advertising. Also, there is educational courseware designed for native speakers that can sometimes be adapted for language learning purposes.

I have been interested in evaluation for some time, and in a series of papers from 1987 to 1996, I attempted to develop a comprehensive methodological framework for CALL that integrated evaluation with development and implementation. The CALL world has turned out to be more complex than that original vision (it did not anticipate the rise of CMC (Unit 3), for example, and other uses centered on the computer as a tool), but it still serves a purpose in laying out areas of consideration for any software that has an identifiable teaching presence. As we will see, it can be adapted somewhat for use in evaluating a broader range of CALL tasks and activities. The framework expanded on an earlier one by Martin Phillips (1985) and used the Richards and Rodgers (1982) framework (Method: approach, design, and procedure) as an organizing scheme to characterize the apparent relationships between elements of language teaching and learning and the computer. The driving force behind it was the observation that existing approaches to instructional design and in particular evaluation did not pay sufficient attention to language learning or else limited themselves to specific teaching approaches. I introduce a simplified version of the framework here. Although the focus of this unit is evaluation, I discuss its relationship to development and implementation as well.

ORGANIZING PRINCIPLES

Development, evaluation, and implementation are part of a logical progression in any situation that has an end product. If a company produces a computer program for balancing your checkbook, for instance, they need to 1) design it with the needs of the end users in mind, 2) evaluate it in house and encourage outsiders to review it, and 3) have a mechanism to implement it, including figuring out how to make it available and training end users in its effective operation. Of course this can be and often is cyclic rather than linear, with the feedback from evaluation and implementation providing data for subsequent development.

CALL software is a bit different from a simple checkbook balancing program in that it typically involves a more diverse view of who the evaluators and end users are. Evaluation, for instance, may be connected to the developer and be used for improving the courseware prior to release, or it may be done by an outside reviewer for a professional journal. It may also be done by an individual teacher representing a school or institute, selecting materials for his or her own class, or even blogging for the wider language teaching community. It may even be done by a student evaluating software for possible use or purchase, or to communicate impressions to other users. As Chapelle (2001) notes (see https://www.lltjournal.org/item/2368 for a review), evaluation can be done judgmentally at the level of initial selection, based on how well-suited a piece of software appears to be, and it can also be done empirically, based on data collected from actual student use. We focus on the former here.

Development, evaluation, and implementation are thus simultaneously part of a logical progression of a courseware project and interacting manifestations of its reality. This is true whether the project is for CALL or for some other educational purpose. However, the specific domain of language teaching and learning imposes on these three a set of considerations that are not exactly the same as we would find in courseware for, say, history or chemistry or math. The framework that follows addresses those considerations. This is a revised and simplified form of the content in Hubbard (1996) and in the papers listed below (see references). The others go into more depth on language teaching approaches (1987), evaluation (1988), and development (1992). Note that an updated version for evaluation can be found in Hubbard (2006): www.stanford.edu/~efs/calleval.pdf, which also covers Chapelle's (2001) framework and evaluation checklists.

Two final notes. First, in an extensive critique of this framework, Levy (1997) (see https://www.lltjournal.org/item/2258 for a review of this work) argues that "Hubbard's framework for CALL materials development, which assumes that all CALL is tutorial in nature, is not generally applicable to the computer as a tool. Similarly, the Richards and Rodgers model...only has limited application for the computer as tool" (p. 211). I think there is more applicability than he suggests, but for the moment we will follow Levy's view and assume this is a framework for tutorial CALL only. We will return to a more expanded application of it below.

Second, like Richards and Rodgers' framework but unlike most others for CALL, this one attempts to be agnostic with respect to what actually constitutes good language teaching and learning through computers. For the field as a whole, we need a framework which can be used equally by those whose language teaching approaches might be as diverse as those of grammar-translation, lexical, communicative, sociocultural, or interactionist proponents. Thus the framework is descriptive rather than prescriptive.

FRAMEWORK FUNDAMENTALS

The three modules (development, evaluation, implementation) share core components inspired by Richards & Rodgers (1982). In each case their original components are adapted, interpreted, and supplemented to include the reality of the computer as the interface between the teacher/developer and materials and the learner. (Realistically, in any tutorial program there IS a teacher (or at least a teaching presence) in addition to the materials themselves, just as there is a teaching presence in a textbook.) The development and evaluation modules are most closely related in terms of the elements considered. Implementation feeds on the output of evaluation. However, each module can impact the others over time, as when information from evaluation and implementation is returned to developers for updates, patches, or considerations in later versions of the product.

                       

[Figure: CALL Framework Interrelationships]

THE EVALUATION MODULE

Evaluation involves three kinds of considerations. A crucial first step is to understand what the courseware does before attempting to judge it: this is, not surprisingly, difficult to do because as soon as we start interacting with a program we want to judge it. If an evaluator wants to approach the problem a little more objectively (and hence effectively), the first consideration then is the operational description of the software, which essentially focuses on the procedure-level elements. The design elements can essentially be subsumed under the label "learner fit." That is, based on the information from the operational description, you are looking to see how well the design elements (see Development Module, below) of language difficulty, program difficulty, program content, etc. fit the students you are evaluating for. The approach elements, in this case approach-based evaluation criteria, can be subsumed under the label "teacher fit"--broadly, what does the software appear to represent in terms of assumptions about what language is and how language is learned, and how compatible are such assumptions with those of the teacher doing the evaluation? More generally, what kind of "teaching" is the software likely to be doing? Ultimately, then, evaluation consists of getting a clear understanding of what the software actually offers in the way of material and interaction, and then judging how closely it fits with the learners' needs as determined by their profiles and learning objectives (perhaps themselves determined by a course syllabus) and your own language teaching approach. This relationship is sketched below.

                         

[Figure: CALL Evaluation Framework]

It is worth noting that a modified version of this framework is still used by the CALICO Journal (http://www.equinoxpub.com/journals/index.php/CALICO) for its courseware reviews. See the resources and references sections below for more details about this and alternative conceptions: https://www.equinoxpub.com/home/wp-content/uploads/2018/04/CALICO_LearningTechnologyReviewGuidelines.pdf.

EXTENDING THE MODEL TO OTHER RESOURCES

In a more recent paper (Hubbard, 2011) I have extended the preceding model to the web more generally, that is, to resources beyond those that have a clearly tutorial component. While Levy's tutor/tool framework still has value in describing the applications themselves, in the case of tools and resources in particular (e.g., discussion boards, email programs, media players, social networking sites, learning management systems, repositories of authentic audio and video, etc.), the key is how they are used for language learning, the activities and tasks that are built upon them. Admittedly, the methodological framework in its original form is not a perfect fit: operational description, for example, will not include input judgment and feedback in non-tutorial materials, but other components such as screen layout, types of input accepted, help options, and so on still need to be addressed. Teacher fit and learner fit similarly remain relevant--whatever resource is utilized, it should be used in a way consistent with your assumptions about how languages are learned and with the curricular objectives and student characteristics taken into account as well.

Additionally, the mobile revolution of the past few years has brought tutorial CALL back into the mainstream. There are numerous mobile apps for individual skills (notably vocabulary, pronunciation, and grammar) as well as multi-skill apps like Duolingo. Rosell-Aguilar (2017; see below) among others has revisited evaluation with a new look at the dimensions mobility brings.

ALTERNATIVES

The methodological approach I present here has proved useful over the years, but there are at least two other approaches that deserve mention, especially once we begin to look beyond tutorial CALL. First, despite some of the limitations and biases in checklists, they have persisted over the years. In fact, the methodological framework above may be rather awkward to use in its raw form, and translating it into a checklist format for a specific combination of teacher fit and learner fit considerations (representative, for example, of a teacher's own language teaching approach, course design, and student characteristics) provides a practical instantiation of its intent. Reinders & Pegrum (2015) have produced an interesting checklist-style framework for mobile learning (MALL) apps and tasks based heavily on a sociocultural perspective: https://unitec.researchbank.ac.nz/bitstream/handle/10652/2991/Reinders%20and%20Pegrum.pdf.
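To make the checklist idea concrete, here is a minimal sketch (in Python, purely for illustration) of how a teacher might encode a few operational description, teacher fit, and learner fit questions as a reusable checklist and summarize the ratings by category. The specific items, the 0-4 rating scale, and all names in the code are assumptions invented for this example rather than part of the framework itself; in practice you would substitute criteria drawn from your own teaching approach, course design, and learner profiles.

# Illustrative sketch only: the categories mirror the framework (operational
# description, teacher fit, learner fit); the items and 0-4 scale are invented.
from dataclasses import dataclass, field
from typing import Dict, List

@dataclass
class ChecklistItem:
    category: str    # "operational description", "teacher fit", or "learner fit"
    question: str    # what the evaluator is asked to judge
    rating: int = 0  # hypothetical scale: 0 (poor/absent) to 4 (excellent)

@dataclass
class CoursewareEvaluation:
    title: str
    items: List[ChecklistItem] = field(default_factory=list)

    def average_by_category(self) -> Dict[str, float]:
        """Average the ratings within each category to give a rough profile."""
        totals: Dict[str, int] = {}
        counts: Dict[str, int] = {}
        for item in self.items:
            totals[item.category] = totals.get(item.category, 0) + item.rating
            counts[item.category] = counts.get(item.category, 0) + 1
        return {cat: totals[cat] / counts[cat] for cat in totals}

# Example use with a handful of sample items and hypothetical ratings.
evaluation = CoursewareEvaluation(
    title="Sample vocabulary app",
    items=[
        ChecklistItem("operational description", "Are help options easy to find?", 3),
        ChecklistItem("operational description", "Is feedback on learner input meaningful?", 2),
        ChecklistItem("teacher fit", "Is the implied view of language learning compatible with my approach?", 4),
        ChecklistItem("learner fit", "Is the language difficulty appropriate for my students?", 3),
        ChecklistItem("learner fit", "Does the content match the course syllabus?", 2),
    ],
)

for category, average in evaluation.average_by_category().items():
    print(f"{category}: {average:.1f} / 4")

A spreadsheet would serve the same purpose; the point is simply that the framework's categories can be instantiated as concrete, situated questions and ratings.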

Another general approach, that of building a framework on theoretical principles derived from SLA research, is seen in the work of Chapelle (2001): see https://www.lltjournal.org/item/2368 for a review. She identifies six general evaluation criteria, usable not only for software but more broadly for CALL tasks: language learning potential, meaning focus, learner fit, authenticity, impact, and practicality. It is important to note that these criteria are relevant both for judgmental purposes and for evaluating outcomes. In line with the latter, another TESOL Technology Standard, Goal 3, Standard 3, references the need to evaluate "specific student uses of technology" for effectiveness. An example application of Chapelle's framework can be seen at https://journals.equinoxpub.com/index.php/CALICO/issue/view/1942 (pp. 93-138). Unlike the methodological framework, which was developed originally for courseware evaluation and requires some adaptation to accommodate other types of CALL activities, Chapelle's framework was designed for what she refers to more generally as "CALL tasks", encompassing a broader set of options. J. B. Son (2005) has offered criteria specifically for the evaluation of websites, in particular the notion of authority: http://eprints.usq.edu.au/820/1/Son_ch13_2005.pdf.

DEVELOPMENT AND IMPLEMENTATION CONSIDERATIONS

Development, evaluation, and implementation are part of an integrated process yielding supportable CALL materials, tasks and activities. Implementation considerations are relevant during the evaluation process, but they become crucial when deciding how best to use software that is available. Some of the key questions to address in implementation are the following. 

- What is the setting in which the students will be using the software (classroom, lab, home, etc.)?
- What kinds of training or preparatory activities are warranted?
- What kinds of follow-up activities either in or out of class will there be?
- Given the options provided by the program, how much control will the teacher exert, and how much control will be left to the learner?

Whether they are done in class together, in a lab with individuals or pairs working on computers, or outside of class at a computer cluster, on the student's own computer, or even on a mobile device like a cell phone, computer exercises should be clearly linked to the rest of the course. This does not mean they have to be fully integrated. Arguably, activities with CALL courseware can be supplementary or complementary to the classroom part of the course (including the virtual classroom in an online setting), required or optional, and still be useful. However, the instructor needs to be sure that learners see the connections and that the computer work is compatible in content, level, and approach with the rest of the course material and activities. For a more detailed description of the components to consider in implementation and their interrelationships, see Hubbard (1996).

RESOURCES FOR EVALUATION

Besides the evaluation framework presented here, it is common to see evaluation checklists or other procedures. Here are a few examples.

CALICO's Learning Technology Review Guidelines for apps and online learning: https://www.equinoxpub.com/home/wp-content/uploads/2018/04/CALICO_LearningTechnologyReviewGuidelines.pdf (based on the methodological framework)
Guide for Using Software in the Adult ESL Classroom by Susan Gaer: http://www.cal.org/caela/esl_resources/digests/SwareQA.html
A Place to Start in Selecting Software by Deborah Healey & Norm Johnson: http://www.deborahhealey.com/cj_software_selection.html
ICT4LT evaluation form (downloads as a .doc file): http://www.ict4lt.org/en/evalform.doc
For mobile learning: Rosell-Aguilar's taxonomy and evaluation framework at https://search.proquest.com/docview/1899102616 (requires institutional ProQuest access)

SUGGESTED ACTIVITY. Visit the CALICO website at http://www.equinoxpub.com/journals/index.php/CALICO. The reviews can be found by issue in the archives, but many can also be located using the search term "software review" on the journal site. Find an interesting-looking piece of software and read the review, noting 1) what you can learn from it and 2) any questions that arise that might help inform your own evaluation process. If you feel energetic, try two or three. You should note the difference between a published review intended for a wide audience and your own evaluation, which should be situated with respect to your own approach, your students' abilities and needs, and the environment of your class.

REFERENCES

Chapelle, C. A. (2001). Computer Applications in Second Language Acquisition. Cambridge: Cambridge University Press.

Clark, R. C. and Mayer, R. E. (2003). e-Learning and the Science of Instruction. San Francisco: John Wiley & Sons.

Healey, D., Hanson-Smith, E., Hubbard, P., Ioannou-Georgiou, S., Kessler, G., and Ware, P. (2011). TESOL Technology Standards: Description, Implementation, Integration. Alexandria, VA: TESOL.

Hubbard, P. (1987). "Language Teaching Approaches, the Evaluation of CALL Software, and Design Implications," in W. F. Smith (ed.) Modern Media in Foreign Language Education: Theory and Implementation. Lincolnwood, IL: National Textbook.

Hubbard, P. (1988). "An Integrated Framework for CALL Courseware Evaluation," CALICO Journal 6.2: 51-74.

Hubbard, P. (1992). "A Methodological Framework for CALL Courseware Development," in Martha Pennington and Vance Stevens (eds.) Computers in Applied Linguistics: An International Perspective. Clevedon, UK: Multilingual Matters.

Hubbard, P. (1996). "Elements of CALL Methodology: Development, Evaluation, and Implementation," in Martha Pennington (ed.) The Power of CALL. Houston: Athelstan.

Hubbard, P. (2006). "Evaluating CALL Software," in Lara Ducate and Nike Arnold (eds.) Calling on CALL: From Theory and Research to New Directions in Foreign Language Teaching. San Marcos, TX: CALICO.

Hubbard, P. (2011). "Evaluation of Courseware and Websites," in Lara Ducate and Nike Arnold (eds.) Present and Future Promises of CALL: From Theory and Research to New Directions in Foreign Language Teaching, Second Edition. San Marcos, TX: CALICO.

Hubbard, P. (2019). "Evaluation of Courseware/Tutorial Apps and Online Resource Websites," in N. Arnold and L. Ducate (eds.) Engaging Language Learners through CALL. Sheffield, UK: Equinox.

Levy, M. (1997). Computer-Assisted Language Learning: Context and Conceptualization. New York: Oxford University Press.

Phillips, M. (1985). "Logical Possibilities and Classroom Scenarios for the Development of CALL," in Christopher Brumfit, Martin Phillips, and Peter Skehan (eds.) Computers in English Language Teaching: A View for the Classroom. Oxford: Pergamon.

Richards, J. and Rodgers, T. (1982). "Method: Approach, design, and procedure," TESOL Quarterly 16.2.

Rosell-Aguilar, F. (2017). "State of the App: A Taxonomy and Framework for Evaluating Language Learning Mobile Applications," CALICO Journal 34.2.


