
Improving CALL evaluation

Background:  When identifying CALL (computer-assisted language learning) activities in universities at large, it becomes clear that CALL research bears all the hallmarks and limitations of a “cottage industry”: it is still too fragmented; its approach is still all too often empirical, based on experience largely unsupported by reliable data; and it is poorly recognized, as well as poorly funded, by academic institutions. Suggestions for strengthening such a weak research base include collaboration between institutions and the adoption of a more rigorous, and thus more academically acceptable, approach to research. I believe that evaluation must play a key part in this.

Research question:  How can evaluation be improved across the whole spectrum of CALL research activities to support a rigorous, academically sound methodological approach?

Suggested methodology/comments: The CALL community must learn to take evaluation more seriously in order to progress further and gain greater recognition. In so doing, it must learn from other disciplines, such as human-computer interaction (HCI), and benefit from this necessary cross-fertilization.

The following should be encouraged / promoted:

The development of greater skills and specialization in the area of evaluation.

The widespread adoption of principled, user-centred design processes.

The need to pool resources across universities to obtain better and more representative evaluative data.

The need to share, compare and analyze data amongst universities to build the critical mass of information necessary for a research base capable of withstanding sound academic scrutiny.

Contact:  Dominique Hemard   d.hemard@londonmet.ac.uk

Submitted August 2002
