Regional Accreditation Association Assessment Policy Analysis

 
Project Overview
State Self-Study Tools
State and Regional Policies
Assessment Policy Types and Models
Policy Development
Inventory of Instruments and Measurements
Data Collection and Analysis
Publications and Presentations

 

 

Middle States Association of Schools and Colleges
New England Association of Schools and Colleges, Inc.
Southern Association of Colleges and Schools
Northwest Association of Schools and Colleges
North Central Association of Colleges and Schools
Western Association of Schools and Colleges

 


Middle States Association of Schools and Colleges

Commission on Higher Education

Jean Morse, Executive Director
Mary Beth Kait, Assistant Director for Policy Development (responded to our request for information)
3624 Market Street
Philadelphia, PA 19104-2680
215-662-5606
FAX: 215-662-5501

Assessment Timeline
December 1985: Standards for Outcomes Assessment adopted
December 1989: Task Force on Outcomes Assessment formed
November 1990: Framework for Outcomes Assessment published, intended to help institutions design, initiate, and conduct effective outcomes assessment programs
February 1994: Revised edition of Characteristics of Excellence in Higher Education: Standards for Accreditation published (previous editions in 1957, 1971, 1978, 1982, 1988, 1989)
June 1994: Second Task Force on Outcomes Assessment formed
1995: Outcomes Assessment Survey of 495 member institutions conducted
February 1996: Commission's first Policy Statement on Outcomes Assessment draft circulated for approval
July 1996: Second edition of Framework for Outcomes Assessment published
July 1996: Report on the 1995 Outcomes Assessment Survey published
October 1996: First of seven symposia on outcomes assessment for member institutions throughout the Middle States region held at Temple University
April 1997: Seventh symposium completed the series

 

Overview of MSA/CHE Assessment for Learning/Teaching Improvement

One of Middle States' 16 standards for accreditation requires that the Commission determine an institution's accreditation by evidence of "policies and procedures, qualitative and quantitative, as appropriate, which lead to the effective assessment of institutional, program, and student learning outcomes." The association's Framework is based on the major precepts of Characteristics of Excellence, which addresses institutional effectiveness and outcomes assessment in the context of accreditation. The Characteristics states that "institutions should develop guidelines and procedures for assessing their overall effectiveness as well as student learning outcomes" (p. 16). According to the first edition (1990) of the Framework, the insistence that the improvement of teaching and learning is the primary goal of outcomes assessment follows directly from the Characteristics of Excellence, which states that the "ultimate goal of outcomes assessment is the improvement of teaching and learning" (p. 17).

The 1996 edition of the Framework was designed to assist colleges and universities in meeting the outcomes assessment requirements of MSA/CHE and is also "designed to enable them to respond to new expectations being expressed in public forums" (p. 1). The emphasis is now dual: enhancing effectiveness in teaching and learning and enhancing the effectiveness of the institution as a whole.

Middle States has made a clear commitment to engaging its member institutions in exploring questions of student learning: what students should learn and how well they are learning it. An institution is effective when it asks these questions and acts on the answers it finds.


1. Resource Materials to Guide Institutions

Framework (1996) is designed as a tool for assisting institutions in their design, initiation, and conduct of outcomes assessment programs. It includes a bibliography of assessment resources.

The 1996 Report on the 1995 Outcomes Assessment Survey indicated that 57 percent of institutions in the region did not have an institution-wide assessment plan. It further identified nine aspects of assessment which should be completed in order to set the stage for developing a plan. Based on these findings, the Commission planned to sponsor as many seminars as possible to assist institutions with completing the nine preliminary steps for developing a plan, collegially developing a plan on campus, continuously administering assessment plans, and pursuing post-assessment strategies (how to use assessment findings). Other recommendations generated from the survey findings include sharing more explicit information with chairs and members of evaluation teams about the Commission's assessment expectations, institutional progress, and how to evaluate realistically what institutions might be expected to do in the five years between an evaluation visit and the following periodic review report.

As of October 1996, training programs for member institutions had been instituted.

2. Emphasis on Students/Student Learning

The Framework emphasizes teaching and student learning, stating that the "ultimate purpose of assessment is to improve teaching and learning" (p. 14). It asks each institution to address the following questions:

What should students learn?
How well are they learning it?
How does the institution know?

Institutional effectiveness is linked to the extent and quality of student learning. The 1996 Policy Statement indicates that institutions must give primary attention to the assessment of student learning outcomes.

3. Kinds of Outcomes Measured/Assessment Processes

The Commission does not and will not prescribe methodologies or specific approaches, but there is a clear expectation that the assessment of student learning outcomes is an ongoing institutional process.

In deciding what to measure, three areas of focus were identified: general education, other academic programs, and individual course offerings. Highlighted within these three areas are cognitive abilities, information literacy, and students' integration and application of the knowledge and skills acquired via program offerings.

Means of possible measurement include proxy measures. For example, to measure a student's sense of social responsibility, a proxy measure might be participation in volunteer organizations. Quantitative and qualitative approaches are suggested, as well as the use of both local and standardized instruments. Direct assessment of student learning, including a value-added approach using portfolios, is also mentioned.

“Student outcomes assessment is the act of assembling, analyzing and using both qualitative and quantitative evidence of teaching and learning in outcomes in order to examine their congruence with stated purposes and educational objectives and to provide meaningful feedback that will stimulate self-renewal” (Framework, 1996, p. 14).

The methods of assessment which complement cognitive tests and provide indicators of instructional program quality are listed on page 28: course and professor evaluations, student satisfaction inventories, measures of student personal and affective development, retention numbers, faculty peer evaluations, and alumni achievements. "Most campuses have found that longitudinal, multi-measure studies produce more meaningful results..." (p. 18).

A broad framework for linking purposes, resources, and educational outcomes includes general education, the major, basic skills, and students' personal and social development. The general education arena includes abilities underlying the transfer of knowledge: the ability to think critically, solve problems, communicate effectively in written and oral form, display technological competence, and demonstrate familiarity with mathematics and quantitative analysis, along with a range of characteristics indicative of sound judgment and human values (p. 35). "The analysis of student achievement with respect to general education utilizes different measurement objectives for assessing competencies in four broad areas: cognitive abilities (critical thinking, problem solving), content literacy (knowledge of social institutions, science and technology), competence in information management skills and communication, and value awareness (multicultural understanding, moral and ethical judgment)" (pp. 35-36). In the major, students should be able to demonstrate their ability to integrate their learning.

4. Emphasis on Teaching

Teaching is clearly a part of the assessment/improvement loop, but it appears as a tool for responding to student learning rather than as an object of assessment in its own right. The Framework indicates the primacy of teaching and learning and includes a diagram linking learning, teaching, assessment, and institutional improvement.

The section on Assessment for Improvement discusses applying assessment findings to improve student learning in the classroom and throughout the curriculum as a whole (p. 9): findings might lead faculty to pursue curriculum development, pedagogical changes, faculty development initiatives, and the reallocation of resources (p. 11). "Assessment programs should reflect a variety of methods for assessing instructional quality, including traditional and contemporary cognitive tests and other methods of assessment" (p. 28).

5. Emphasis on Institutional Effectiveness/Accountability

The primary goals of outcomes assessment are to "document institutional, programmatic, course-level and individual student success in achieving stated goals" (Framework, 1990). The language used speaks to excellence and to using assessment results to plan for improvement.

The 1990 Framework refers to the Characteristics of Excellence section entitled "Outcomes and Institutional Effectiveness," which begins: "the deciding factor in assessing the effectiveness of any institution is evidence of the extent to which it achieves its goals and objectives."

The 1996 edition emphasizes accountability and identifies as the ultimate goal of outcomes assessment the examination and enhancement of institutional effectiveness: "Four objectives must be met in order to reach this goal. They are to improve teaching and learning, to contribute to the personal development of students, to ensure institutional improvements, and to facilitate accountability." The 1996 edition also has a section on the "Current Context," which discusses the intense pressures institutions now face to demonstrate "their accountability, effectiveness and efficiency" to constituents, the public, parents, legislators, etc.

6. Emphasis on Planning by Institution/Institutional Autonomy

While outcomes assessment is one of the standards for accreditation adopted by Middle States, the Commission "believes it is an institution's prerogative to determine how best to implement assessment. In addition, institutions conceptualize, develop, and implement their outcomes assessment plans over time" (Framework, 1996).

7. Relationship to Higher Education State Department/Councils/Coordinating Boards

MSA/CHE has participated in informal discussions with the Pennsylvania State System of Higher Education and with New Jersey’s Excellence and Accountability Committee to update them on CHE’s activities and to discuss common concerns and effective responses.

8. Training of Accrediting Teams

Findings from the 1995 Outcomes Assessment Survey indicated the need for training of chairs and members of evaluation teams on issues of assessment. Accordingly, training and development workshops for team chairs and evaluators were planned for fall 1997.

9. Diversity

Multicultural understanding is identified as a desired competency in the general education arena (1996 Framework, p. 36). Characteristics of Excellence (1994) includes, as a characteristic of excellence, programs and courses that develop, among other abilities, the ability to interact effectively in a culturally diverse world (p. 4). Admissions programs should encourage diversity in the student population (p. 8), and student services should be broad enough to meet the special needs of a diverse student body (p. 9).

An environment in which cross-cultural understanding flourishes is essential (p. 10). Institutional working environments should be characterized by justice, equity, and respect for diversity and human dignity (p. 12). Faculty selection should include goals of achieving diversity in areas of race, ethnicity, gender and age (p. 11).

10. Technology

The development of information management skills and information literacy is mentioned in a list of desired outcomes.

11. Evaluation of Assessment

The "Guiding Principles for College Assessment" (Framework, 1996) includes the expectation that "assessment programs include research and analyses on the effects of the assessments upon institutions, students, and the teaching and learning process" (p. 29). The Association itself has assessed its progress with regard to assessment via its 1995 survey of member institutions' assessment practices.

12. Formative or Summative?

In the 1990 Framework, the Association asked its member institutions to emphasize formative assessment.

The updated 1996 Framework speaks of institutions utilizing two evaluation strategies, formative and summative (pp. 7-8), acknowledging the "growing significance of accountability-oriented (summative) assessment for public policy and other purposes" (p. 11).

13. Who is involved in assessment?

Assessment is ideally seen as a partnership among faculty, administrators and students.

Materials Received

1. 1990 Framework for Outcomes Assessment
2. 1994 Characteristics of Excellence in Higher Education: Standards for Accreditation
3. 1996 Policy Statement on Outcomes Assessment
4. 1996 Framework for Outcomes Assessment (Rev. Ed.)
5. 1996 Report on the 1995 Outcomes Assessment Survey

 



 

New England Association of Schools and Colleges, Inc.

Commission on Institutions of Higher Education

Charles Cook, Director
Peggy Maki, Associate Director (contact)
209 Burlington Road
Bedford, MA 01730-1433
617-271-0022
FAX: 617-271-0950
cihe@neasc.org

Assessment Timeline
1992: Policy Statement on Institutional Effectiveness
1992: Standards for Accreditation give greater emphasis to assessment
April 1997: Student Outcomes Assessment Project Institutional Survey
Summer 1997: Analysis of survey responses
December 1997: Report on survey findings at annual association workshop
1998 onward: Design of annual forums and publications to assist institutions with student outcomes assessment

 

Overview of NEASC Assessment for Teaching/Learning Improvement

The Policy Statement on Institutional Effectiveness explicitly discusses assessment and emphasizes that "an institution's efforts and ability to assess its effectiveness and use the obtained information for its improvement are important indicators of institutional quality." The teaching and learning process is the primary focus of assessment. The association's Evaluation Manual states that "one institutional goal of NEASC's effectiveness criteria is to cultivate within an institution a habit of inquisitiveness about its effectiveness with a corollary commitment to making meaningful use of the results of that curiosity." According to the Background Paper used in training evaluation team members on issues of assessment, "the assessment of an institution's effectiveness carefully differentiates between what graduates know and what the institution has done to enable them to learn."

With regard to institutional effectiveness and assessment the Commission’s expectations are fourfold:

  1. The Commission expects the institution to determine that it has taken cognizance of the need to engage in such efforts;
  2. the Commission expects the institution will have in place or have constructed realistic plans to put in place assessment mechanisms;
  3. the Commission expects that the institution will utilize the results/findings of assessment efforts to inform the decision-making processes; and
  4. the Commission expects that these efforts will occur on an on-going basis and that they will become increasingly systematic, integrated and holistic (Self-Study Guide, 1994).

1. Resource Materials to Guide Institutions

Evaluators and team chairs are trained on issues of assessment; training includes use of the Background Paper and a Planning and Evaluation Session. CIHE offers fall self-study workshops for all institutions preparing for a comprehensive evaluation within the next two years. One of the intended outcomes of the Student Outcomes Assessment Project is the development of training materials and workshops to assist member institutions in their assessment processes and practices.

2. Emphasis on Students/Student Learning

According to the Policy Statement on Institutional Effectiveness (1992), "assessment's primary focus is the teaching-learning experience. To the greatest extent possible, therefore, the institutions would describe explicit achievements expected of its students and adopt reliable procedures for assessing these achievements."

The self-study manual suggests that documents serving as examples of institutional studies of learning outcomes be included in the materials gathered and made available to the evaluation team during its campus visit. According to Standard 2.5, information gathered should inform institutional planning, especially as it relates to student achievement. Standard 4.38 holds that evaluation of student learning or achievement be based upon clearly stated criteria, and Standard 10.7 states that institutions are expected to have documentation for any statements or promises regarding learning outcomes.

3. Kinds of Outcomes Measured/Assessment Processes

Outcomes

While the association mandates no specific means of assessing institutional effectiveness, student learning or achievement, or teaching, nor any specific desired outcomes, it does acknowledge three domains in which institutions that measure their effectiveness are influential: cognitive, behavioral, and affective learning.

Processes

Documents indicate that outcomes must be clearly stated and that successful efforts will be both qualitative and quantitative. The process should be on-going and incremental. Standard 10.7 states that the institution is expected to have documentation for any statements/promises regarding program excellence, learning outcomes, etc. The Background Paper used for training evaluators lists direct and indirect measures of student learning which have been found to be reliable. They include capstone experiences, portfolio assessment, performance or licensure exams, essay questions scored by outside departments, alumni surveys, and job placement statistics.

4. Emphasis on Teaching

The Evaluation Manual speaks more of what not to do (e.g., sitting in on classes will not provide adequate evidence of teaching effectiveness), suggesting it is better to look at outcomes data (which it does not define) in the self-study and other evidence presented by the institution.

According to Standard 4.30, the institution endeavors to enhance the quality of teaching and encourages experimentation with methods to improve instruction. The effectiveness of instruction is periodically and systematically assessed using adequate and reliable procedures, and the results are used to improve instruction.

5. Emphasis on Institutional Effectiveness/Accountability

The Commission identifies institutional effectiveness and accountability as dual purposes/major themes of accreditation (Standards and Evaluation Manual). Assessment and accreditation share the common goal of enabling institutions to reach their full academic potential. Accreditation provides quality assurance and encourages institutions on an ongoing basis to work to increase their effectiveness.

6. Emphasis on Planning by Institution/Institutional Autonomy

The Policy Statement stresses that the Commission does not prescribe an assessment formula. Successful assessment efforts are compatible with the institution's mission and its available resources.

7. Relationship to Higher Education State Department/ Councils/Coordinating Boards

State departments of higher education in states within the Commission's region are notified annually of institutions being evaluated by the Commission, and often a staff member of the department accompanies the NEASC accreditation team as an observer.

8. Training of Accrediting Teams

Training exists and the Student Outcomes Assessment Project is intended to further develop this service.

9. Diversity

Standard 6.2 in the student services area mentions that cocurricular services should adhere to both the spirit and intent of equal opportunity and the institution’s own goals for diversity.

10. Technology

Standard 7.1 addresses the availability of library and information resources (e.g. computer centers) and that the institution ensures that students use these resources as an integral part of their education.

Standard 7.6 states that regular and systematic evaluation of adequacy and utilization of library and information resources is expected as is the use of this information to improve effectiveness.

11. Evaluation of Assessment

According to the Evaluation Manual, the institution should be able to demonstrate that it uses the results of the evaluation of outcomes to enhance the delivery of its services. Additionally, a goal of the effectiveness criteria is to develop institutional capacity to verify that the institution is achieving its purpose. The Student Outcomes Assessment Project is the association's emerging effort to determine the extent of assessment activity and how the association can be of greater assistance in facilitating outcomes assessment on its members' campuses.

12. Formative or Summative?

The approach appears to be both, as assessment data will be collected and used to assure both quality and self-improvement.

13. Who is involved in assessment?

According to Standard Two, planning and evaluation are systematic, broad based, interrelated, and appropriate to the institution’s circumstances. They involve the participation of individuals and groups responsible for the achievement of institutional purposes.

Materials Received

  1. 1992 Policy Statement on Institutional Effectiveness
  2. 1992 Standards for Accreditation
  3. 1994 Self Study Guide
  4. June 26, 1996 letter from Charles Cook
  5. March 13, 1997 letter from Peggy Maki
  6. Background paper, Planning and Evaluation Session, New Evaluators’ Workshop
  7. Draft of letter to institutions regarding Student Outcomes Assessment Project
  8. Survey on Enhancing Institutional Effectiveness through Student Outcomes Assessment (draft 3/13/97)
  9. 1996 Policy Manual



 

North Central Association of Colleges and Schools

Commission on Institutions of Higher Education

Steven D. Crow, Executive Director
30 North LaSalle Street, Suite 2400
Chicago, IL 60602-2504
312-263-0456
FAX: 312-263-7462
Info@ncanihe.org

Assessment Timeline
October 1989: Statement on the Assessment of Student Academic Achievement (ASAA)
March 1990: Annual NCA Meeting includes special program to discuss implications of assessment initiative
June 1990: Commission approves comprehensive educational plan to implement the Statement on the ASAA
Fall 1990: NCA Quarterly, "Sharpening the Focus on Assessment: The Regionals and the NCA States," reports on the beginning of NCA's assessment initiative
Spring 1991: Four regional meetings held focusing on "Documenting Student Academic Achievement within the Context of Accreditation"
Fall 1991: NCA Quarterly, "Assessing Student Academic Achievement," a progress report on the Commission's assessment initiative
August 1993: Revision of the Statement on the Assessment of Student Academic Achievement
September 1994: Handbook of Accreditation published, including Chapter 14, Special Focus: Assessing Student Academic Achievement
June 1995: Deadline for all NCA institutions to submit plans for assessing student academic achievement
February 1996: Revision of the Statement on the ASAA
March 1996: Majority of institutional assessment plans reviewed by consultant-evaluators
March 1996: Opportunities for Improvement: Advice from Consultant-Evaluators on Programs to Assess Student Learning, an NCA staff paper by Cecilia Lopez, provides information on the impact and expectations of the assessment initiative as culled from the institutional plans
March 1996: Working draft of revised sections of the Handbook of Accreditation published; Criteria Three and Four, which cover assessment of student academic achievement, set much more explicit expectations for assessment of student academic achievement as an "essential component of evaluating overall institutional effectiveness"

Overview of NCA Assessment for Learning/Teaching Improvement

NCA has five Criteria for Accreditation. Numbers Three and Four emphasize the use of assessment in evaluating and improving teaching and learning at member institutions. Criteria Three states that the institution is accomplishing its educational and other purposes. Criteria Four states that the institution can continue to accomplish its purposes and strengthen its educational effectiveness. According to the most recent Statement on the Assessment of Student Academic Achievement (February 1996), which is embedded in Criteria Three, the evaluation of overall institutional effectiveness continues to be an essential part of the accreditation process. This Statement reaffirms the Commission's position, taken in October 1989 and repeated in August 1993, that assessing student academic achievement is an essential component of evaluating overall institutional effectiveness.

Of the six regional accrediting associations, NCA has one of the most explicit statements on assessment of student learning and on the link between assessing learning and strengthening teaching. The Association has required all member institutions to submit assessment plans. Those plans have been evaluated, and an overall evaluation of the current status of assessment across the region has been made.

1. Resource Materials to Guide Institutions

Regional meetings were first held in spring 1991 to train institution staff and evaluators. Meetings included distribution of an assessment workbook and brief papers from presenters. The papers from the spring workshops were published in the Fall 1991 NCA Quarterly; they include an introduction to assessment of student academic achievement as addressed by the Commission (e.g., "Characteristics of an Assessment Program," a framework to guide the design of an institutional assessment program), practical advice (e.g., "A Worksheet to Judge Inclusion of Assessment Data for Accreditation"), and institutional case studies from NCA member institutions.

The 1994-1996 Handbook of Accreditation includes Appendices of selected readings on assessment.

The 101st (1996) NCA Annual Meeting included assessment resources for institutions (contact list of member institutions willing to share examples of institutional plans and programs, suggested publications, selected organizations and instruments).

2. Emphasis on Students/Student Learning

The ultimate goal of assessment is the improvement of student learning. Of all the possible outcomes institutions might pursue/study as a means of documenting institutional effectiveness, none are required except for outcomes documenting student academic achievement.

According to the 1989 Statement on the ASAA, “the Commission wants to make clear that all institutions are expected to assess the achievement of their students. With this statement we make explicit the Commission’s position that student achievement is a critical component in assessing overall institutional effectiveness. Our expectation is that an institution has and is able to describe a program by which it documents student academic achievement.”

3. Kinds of Outcomes Measured/Assessment Processes

NCA states that implicit in the values of higher education is the mastery of a rigorous body of knowledge and students’ abilities to conceptualize, analyze, and integrate; use their intellect; examine their values; consider divergent views; and engage with their peers and teachers in the exchange of ideas and attitudes. These values, however, never appear as a list of desired outcomes to be measured. The Working Draft of revised sections of Criteria Three and Four notes that “an appropriate assessment program will document (their emphasis) proficiency in skills and competencies essential for all college-educated adults; completion of an identifiable and coherent undergraduate level general education component; and mastery of the level of knowledge appropriate to the degree attained.”

Additionally in the 1996 Lopez paper, Opportunities for Improvement: Advice from Consultant-Evaluators on Programs to Assess Student Learning, "evaluators recommend that every academic department or other academic unit determine the extent to which it actually contributes to the incremental learning of its students within the three domains: cognitive, behavioral, or affective."

There is an explicit expectation that data from multiple direct and indirect indicators, such as pre- and post-testing, portfolio assessments, and alumni and employer surveys, will be collected, and that multiple data collection methods will be used. Note, however, that no explicit list of outcomes or indicators of academic achievement is provided.

4. Emphasis on Teaching

All publications explicitly link the assessment of learning with the strengthening of teaching. The 1993 and 1996 Statements assert that "the program to assess student learning should emerge from and be sustained by a faculty and administrative commitment to excellent teaching and effective learning." The assessment initiative was conceived as a means by which to encourage excellence in the teaching provided to students (1993 and 1996 Statements). In both 1993 and 1996, Criteria Three emphasizes that for assessment to have any real impact on higher education, it must directly link student achievement to both the structure and content of the educational program and to the effectiveness of teaching. It is not clear how excellence in teaching is assessed.

5. Emphasis on Institutional Effectiveness/Accountability

The accreditation process is presented as a means of providing public assurance of an institution’s effectiveness and a stimulus to institutional improvement.

6. Emphasis on Planning by Institution/Institutional Autonomy

NCA neither provides a definition of student academic achievement nor prescribes a specific approach to assessment. The only mandate NCA has is that while institutions might utilize a number of institutional outcomes in documenting their effectiveness, all institutions must have and describe a program which documents student academic achievement.

7. Relationship to Higher Education State Department/ Councils/Coordinating Boards

NCA maintains communications and discussions with officers of state governing and coordinating boards.

In 1990 and 1996, NCA surveyed the state higher education agencies of the 19 states in its region, asking about their expectations for assessment and their awareness of NCA's initiative on assessing student academic achievement, and requesting suggestions for ways in which the states and NCA might work together to link their expectations for assessment.

8. Training of Accrediting Teams

Evaluators were included in 1991 regional workshops introducing the Commission’s commitment to assessing for student academic achievement.

9. Diversity

According to the "Characteristics of an Assessment Program" found in the NCA Quarterly, Fall 1991, "an assessment program must not restrict or inhibit goals of access, equity, and diversity established by the institution."

In August 1991 the Commission published a Statement on Access, Equity, and Diversity which includes the statement that “the Commission expects an institution to create and maintain a teaching and learning environment that supports sensitivity to diverse individuals and groups;…discourages acts of racism, sexism, bigotry, harassment, and violence while it teaches students and faculty alike to see in proper perspective the differences that separate and the commonalties that bind all peoples and cultures.”

10. Technology

Criterion Two expects institutions to have academic resources and equipment adequate to support their purposes (including libraries and electronic services and products).

11. Evaluation of Assessment

NCA stated in the NCA Quarterly, Fall 1991, that institutions should include a process for evaluating the assessment program itself. The Commission is also evaluating how institutions are doing, as indicated by the 1996 Lopez paper, which provided an overview of where institutions stand in their assessment planning and the kinds of advice evaluator-consultants have offered institutions, based on a review of the institutional assessment plans submitted as of June 1995.

12. Formative or Summative?

“Assessment is not an end in itself, but a means of gathering information that can be used in evaluating the institution’s ability to accomplish its purposes” (1989 Statement).

13. Who is involved in assessment?

NCA holds that a faculty role in and responsibility for the assessment plan is integral to improved student learning. Additionally, institution-wide support for assessment activities from such entities as the governing board, senior executive officers, and the president/chancellor is considered essential for ensuring the long-range success of the assessment of student learning.

Materials Received

  1. June 26, 1996 letter from Patricia Thrash, Executive Director (retired as of December 1996)
  2. NCA Commission on Institutions of Higher Education Briefings (several)
  3. State Agency Expectations for Assessment in the North Central Region: A Followup on the 1990 Survey by Patricia Thrash and Leonilia Nakutis (draft 6/25/96)
  4. 101st Annual Meeting Program and List of Meeting Resources on Assessment
  5. Working Draft of Revised Sections of the Handbook of Accreditation: Criteria Three and Four (March 1996)
  6. NCA Quarterly, 65(2), Fall 1990
  7. NCA Quarterly, 66(2), Fall 1991
  8. Handbook of Accreditation 1994-1996

 



Northwest Association of Schools and Colleges

Commission on Colleges

Saundra E. Elman, Executive Director
11130 Northeast 33rd Place, Suite 120
Bellevue, WA 98004
206-827-2005
FAX: 206-827-3395

Assessment Timeline
Not available

 

Overview of NWASC Assessment for Learning/Teaching Improvement

The Accreditation Handbook lists 12 Standards for self-study. The Standard most explicitly related to assessment is Standard Five: Educational Program and Its Effectiveness. Within that, Standard 5B: Educational Program Planning and Assessment states that “educational program planning is based on regular and continuous assessment of programs in light of the needs of the disciplines, the fields or occupations for which programs prepare students, and other constituencies of the institution” (1996 Standards). Standard 5B1 notes that institutional assessment programs must be clearly defined, regular, and integrated into institutional planning and evaluation mechanisms. Standard 5B3 requires institutions to provide evidence that their assessment activities lead to improvement of teaching and learning.

NWASC’s 1994 Accreditation Handbook lists a policy statement, Policy 25: Educational Assessment, which requires institutions to adopt an “assessment scheme” that is in line with their mission and purposes, assesses outcomes, and guides their institutional planning processes.

1. Resource Materials to Guide Institutions

Policy 25 gives illustrative, not prescriptive, examples of outcome measures (e.g. writing, quantitative skills) and assessment processes (e.g., alumni surveys, student satisfaction inventories).

No other materials were received that provide evidence of how the association supports institutions in their assessment activities or trains evaluators to examine institutions’ assessment practices.

2. Emphasis on Students/Student Learning

Standard Nine (1996) focuses on students, yet the Standard categories focus primarily on student services and programs. Under Academic Credit and Records - Standard 9.C, evaluation of student learning or achievement is mentioned as being based on “clearly stated and distinguishable criteria.”

Policy 25 states that educational effectiveness is defined in terms of the change it brings about in students. The background description in the 1994 edition of the handbook notes a shift from assessment practices that used process and input measures toward an appreciation of “the validity and usefulness of using output evaluations and assessment, as well as input measures in attempting to assess educational quality” (p. 179). As of this edition, outcome assessment is viewed as an essential component of the self-study process.

3. Kinds of Outcomes Measured/Assessment Processes

Outcomes
The 1996 updated standards draw clearer expectations that institutions will have clearly defined processes for assessing educational programs and that expected learning outcomes for their degree and certification programs will be published. The introduction to 1996 Standard 5 states that “the institutions offer collegiate level programs that culminate in identified student competencies...” (p. 16). The association, however, does not identify or mandate what those competencies might or should be.

The 1994 Handbook includes suggested questions for appraising student learning, which mention looking for evidence of student growth in problem solving, analysis, synthesis, making judgments, reasoning, communicating, and developing integrity and objectivity. This list of examples does not, however, appear in the 1996 revision of the Handbook.

Processes

Required supporting documents for Standard 5: Educational Program Effectiveness include instruments, procedures, and documents demonstrating appraisal of program outcomes as they relate to students (studies of alumni and former students; student satisfaction inventories).

Standard 5B2 requires institutions to publish expected learning outcomes and demonstrate that their students have achieved these outcomes. Additionally, institutions must demonstrate that their assessment activities lead to the improvement of teaching and learning.

1996 Standard I: Institutional Mission, Goals, Planning and Effectiveness requires supporting documentation demonstrating the analysis and appraisal of institutional outcomes.

4. Emphasis on Teaching

The emphasis in the Standards seems to fall more on evaluation of teaching performance than on the learning outcomes that result from teaching. No evident connection is drawn between teaching and learning.

Standard VII (1994) asks that in the self-study analysis schools consider how teaching performance should be evaluated, and institutions are asked for evidence that the criteria they use are known and accepted by both evaluators and the faculty being evaluated.

Standard 7.5 (1996) calls for regular and systematic evaluation of faculty performance to assure teaching effectiveness. This Standard asks for evaluation forms used and the resulting summary reports of student evaluations of faculty and courses.

5. Emphasis on Institutional Effectiveness/Accountability

Introduction to Standards (1994) indicates that the standards of accreditation of postsecondary institutions describe conditions and principles which characterize educational effectiveness. The purpose of accreditation is both public accountability and program and institutional improvement.

6. Emphasis on Planning by Institution/Institutional Autonomy

Institutional planning does not appear to be mentioned, or at least is not emphasized. There are no explicit directions as to how to proceed, so one could infer that considerable institutional autonomy exists.

7. Relationship to Higher Education State Department/Councils/Coordinating Boards

None apparent.

8. Training of Accrediting Teams

None apparent.

9. Diversity

A 1994 Affirmative Action Policy is in place urging institutions to develop and apply affirmative action principles. Embedded in Standard Nine: Students is a section on student services stating that the institution is expected to foster a supportive learning environment through attention to the diverse backgrounds of its students, including ethnic, socioeconomic, and religious diversity.

10. Technology

From 1994 to 1996 a shift from planning for technology to using technology is evident. The language shifts from planning for deletions/additions in curriculum programs and distance learning initiatives to using technology “to extend the boundaries in obtaining information” (4B5). Standard VA8 states the expectation that faculty, in partnership with library and information resources personnel, ensure that the use of library and information resources is integrated into the learning process.

However, in the required documents supporting this section of the self-study, the measures focus on the adequacy of facilities, holdings, and extent of use; no clear links between the use of technology and teaching and/or learning outcomes are mentioned.

Standard III: Physical Plant, Materials, and Equipment makes no mention of computers or technology in 1994; in 1996, Standard IIIB includes the expectation that suitable equipment, including computers, be provided.

Standard IV: Library and Information Sources includes the following expectations:
IVA1 (1996) presence of computer centers, networks, and telecommunication centers
IVB2 (1996) use of resources in developing abilities of students
IVB5 (1996) use of technology to “extend the boundaries in obtaining information”
IVE (1996) indicates, in the planning and evaluation section of this Standard, that library and information resources planning activities support the teaching and learning function of the institution.

11. Evaluation of Assessment

Standard 1: Institutional Mission, Goals, Planning and Effectiveness
1B4 calls for the use of evaluation activities to improve instructional programs
1B5 calls for the integration of evaluation and planning processes to identify institutional priorities for improvement.

Standard 5B3 (1996) indicates that institutions must demonstrate that their assessment activities lead to improvement of teaching and learning. Standard 5B2 requires institutions to publish expected learning outcomes and demonstrate that their students have achieved these outcomes.

But it is not apparent that NWASC is doing anything to gauge the influence of its policies on the assessment practices and policies of the institutions it serves.

12. Formative or Summative?

The standards call for a cycle of improvement, so assessment seems more formative in nature.

13. Who is involved in assessment?

Faculty have a central role in planning and evaluating the educational programs (Standard 5B1, 1996).

Documents Received

1. 1994 Accreditation Handbook
2. 1996 Revised Standards, which will be in 1996 Accreditation Handbook (Fall 1996)
3. July 11, 1996 letter from Joseph A. Malik, Executive Director

 



 

Southern Association of Colleges and Schools

Commission on Colleges

James T. Rogers, Executive Director
Materials received from John Orr Dwyer, Associate Executive Director
1866 Southern Lane
Decatur, GA 30033-4097
404-679-4500
FAX: 404-679-4558

Assessment Timeline
1984 Implementation of Institutional Effectiveness Criteria

Overview of SACS Assessment for Learning/Teaching Improvement

SACS’s Criteria for Accreditation contains six sections of criteria. Section III, “Institutional Effectiveness,” addresses concepts of institutional assessment for instructional practices and learning processes. In order to plan and evaluate the primary educational activities of teaching, research, and public service, an institution must “establish a clearly defined purpose appropriate to collegiate education; formulate educational goals consistent with the institution’s purpose; develop and implement procedures to evaluate the extent to which these educational goals are being achieved; and use the results of these evaluations to improve educational programs, services and operations” (p. 20, Criteria for Accreditation, 1996). The Introduction to the Resource Manual on Institutional Effectiveness highlights the inclusion of this criterion as “an expansion of the process to emphasize the results of education and to focus on the extent to which the institution uses assessment information to reevaluate goals, to make essential improvements, and to plan for the future” (p. iii).

SACS led the regional associations with its early (1984) adoption of outcomes assessment as a tool for measuring institutional effectiveness. While there is no explicit statement of assessment for learning and teaching, a commitment to gauging institutional effectiveness through the assessment of outcomes, including student learning and undergraduate instruction, is clear.

1. Resource Materials to Guide Institutions

Supported by a FIPSE grant, SACS’s Resource Manual on Institutional Effectiveness provides guidelines for interpreting and responding to Section III of the Criteria for Accreditation: Institutional Effectiveness. The manual presents philosophy and rationale, an approach to planning and evaluation, and suggestions for managing the process.

2. Emphasis on Students/Student Learning

One of the underlying assumptions shaping the content of the Resource Manual is that, for a host of publics external to higher education institutions, demonstrating how and in what ways colleges and universities are producing “more competent students” is paramount. Institutions are expected to develop guidelines to evaluate educational effectiveness, and mentioned first is concern for the quality of student learning. One of SACS’s imperatives is that “the institution must evaluate its success with respect to student achievement in relation to purpose, including, as appropriate, course completion, state licensing examinations, and job placement rates” (Criteria for Accreditation, p. 21).

3. Kinds of Outcomes Measured/Assessment Processes

Outcomes
Course completion, state licensing examination, and job placement rates were identified as possible measures of institutional effectiveness. Other measures of outcomes are provided as examples, not imperatives, on page 9 of the Resource Manual. Among the examples are student achievement in the major field and general education; student affective development; and opinions of program quality given by students, alumni, employers, and dropouts.

Processes
The use of both qualitative and quantitative means is encouraged; consistent and systematic means of recording and reporting are emphasized. While processes are not explicitly specified by SACS, an institution “must describe its methods of analyzing.”

4. Emphasis on Teaching

Section IV of the Criteria for Accreditation focuses on educational programs. One of the subsections, Section 4.2.4, is on undergraduate instruction and states, “instruction must be evaluated regularly and the results used to ensure quality instruction….Methods of instruction must be appropriate to the goals of each course and the capabilities of the students” (p. 30). “Methods of evaluating teaching effectiveness must be varied and may include use of standardized tests and comprehensive examinations, assessment of the performance of graduates in advanced programs or employment, and sampling of the opinions of former students” (p. 31). (These may also be counted among the kinds of and processes for measuring outcomes.)

5. Emphasis on Institutional Effectiveness/Accountability

Explicit steps are articulated for planning and evaluation processes that are aimed at achieving institutional effectiveness. One of the underlying assumptions shaping the content of the Resource Manual is that a host of publics external to higher education institutions are concerned about how and in what ways colleges and universities are producing “more competent students”.

6. Emphasis on Planning by Institution/Institutional Autonomy

The Resource Manual acknowledges that institutions have diverse purposes and accordingly diverse goals, and so will have diverse methods of obtaining and using evaluative information. That there are no universally appropriate procedures and measures for assessing institutional effectiveness is emphasized, as is recognition that individual institutions will have to choose their own paths and procedures.

7. Relationship to Higher Education State Department/ Councils/Coordinating Boards

SACS has a written policy, approved by the Commission as of June 1988, regarding the participation of representatives of governing, coordinating, and other state agencies on Commission visiting committees (p. 104 of the Policies, Procedures and Guidelines handbook). The policy states that the institution must inform its governing board of the dates of the Commission on Colleges committee visit, and the institution should invite a representative of the governing board to be available at the time of the evaluation committee’s visit. The policy also recommends sharing the institutional self-study, the visiting committee’s report, and the institution’s response to that report with the state agency.

8. Training of Accrediting Teams

Not evident from materials received.

9. Diversity

Not evident from materials received.

10. Technology

Distance learning is included as a section (4.5) in Section IV: Educational Program Criteria and institutions are expected to formulate clear and explicit goals for these programs and be able to demonstrate that they are achieving these goals. The Policies, Procedures, and Guidelines handbook includes a section on Principles of Good Practice for Electronically Offered Academic Degree and Certificate Programs.

Section V: Educational Support Services includes sections on information technology in the library and information technology resources and systems, explicitly demanding evidence of how technology has been integrated into students’ experience and evidence of student achievement of basic technology competency.

11. Evaluation of Assessment

Appendix A of the Resource Manual provides a tool for assessing current practice and, in particular, guidelines for institutions to assess their planning and evaluation procedures. The materials seem to place emphasis on creating a culture of reflection on what institutions are achieving and how they can act on that knowledge. It is not clear whether SACS has engaged in a process of assessing how well its member institutions are faring with the Criteria for Institutional Effectiveness.

12. Formative or Summative?

There is repeated emphasis on using information gained through assessment and evaluation for improvement.

13. Who is involved in assessment?

Involvement at all levels is expected. “Presidential leadership is essential to initiate and sustain planning and evaluation efforts” (Resource Manual, p. iii). “Institutional leaders have a major role in determining whether planning and evaluation are taken seriously and whether evaluation results are used to make improvements” (p. iv). Expected and desired is a “participatory process involving appropriate representation of constituent groups” (p. 4).

Materials Received

  1. Criteria for Accreditation (1992-1993, 1992-1994, 1996 Editions)
  2. Resource Manual on Institutional Effectiveness (1996)
  3. Policies, Procedures, Guidelines (1996)

 



 

Western Association of Schools and Colleges

Ralph A. Wolff, Executive Director
Mills College Post Office Box 9990
Oakland, CA 94613-0990
510-632-5000
FAX: 510-632-8361
wascr@wasc.mills.edu

Assessment Timeline
1986-1988 WASC regional dialogue on key elements of institutional quality

1988

Adoption of revised accreditation standards. “One of the major new emphases…was the development of a series of accrediting standards calling on institutions to focus on assessment as a means of assuring institutional and program quality and effectiveness.” (WASC Resource Manual)

April 1990

WASC sponsored full-day workshop for Institutional Accreditation Liaison Officers on assessment expectations and techniques

1991

AAHE Assessment Forum program included WASC related workshops
April 1992 Principles of Good Practice, encapsulated in Achieving Institutional Effectiveness Through Assessment: A Resource Manual to Support WASC Institutions, is directed to WASC members

February 1995

WASC Task Force Statement on the Purposes of Accreditation (Task Force 1) includes Principle 2c, which states that for an institution to be accredited it must demonstrate that it offers degrees, credentials, and academic credit that meet publicly stated standards of educational performance. The task force emphasized that with regard to the standards in 2c, “our intent was to assure the public that students are actually learning in programs promised by the institutions (not merely ‘being taught’ or ‘having the opportunity to learn’).”
April 1995 WASC Task Force Report on the Role of Accreditation in the Assessment of Student Learning and Teaching Effectiveness (Task Force 2)
October 1995 Quality Assurance Systems (QAS) Worksheet: A New Way to Ask Questions published
Spring 1997 California State University-Sacramento assessment-oriented review slated
Fall 1997 Assessment-oriented visit planned for CSU-Monterey Bay, which is placing a major focus on student learning and creating an “institutional portfolio” for its self-study process.
1998-2000

Anticipated comprehensive revision of accreditation standards informed by a series of experimental self-studies.

Overview of WASC Assessment for Learning/Teaching Improvement

In 1988 WASC adopted a completely revised set of accreditation standards, emphasizing the use of assessment “as a means to assure institutional and program quality and effectiveness.” As the language of the documents makes clear, the explicitly stated goal of this initiative “is to move toward the creation of a ‘culture of evidence’ within institutions such that the asking of questions relating to effectiveness of educational programs and support services is ongoing, and appropriate data are collected to respond” (Resource Manual, p. 2). Concomitantly, institutions need to be genuinely conscious of why they are collecting evidence, what evidence to collect, and how it is used.

Task Force 2 was charged by the Commission to address the role of assessment of student learning and teaching effectiveness in the accreditation process. The resulting report identifies minimum institutional requirements for assessing student learning and teaching effectiveness, provides examples of integrative questions for the assessment of learning and teaching, and proffers a series of recommendations for further development and support of assessment practices within the Western region.

While emphasizing institutional autonomy, WASC also set clear expectations that its institutions will develop institutional assessment plans, incorporate assessment data in periodic evaluations of the effectiveness of the general education program, incorporate assessment techniques into program review, and develop an assessment program to review the co-curricular program of the institution. “The purpose of these four areas of emphasis over the next several years is to embed assessment into existing institutional structures” (Resource Manual, p. 7).

1. Resource Materials to Guide Institutions

Achieving Institutional Effectiveness Through Assessment is a resource manual that provides principles of good practice for institutions presenting their assessment plans and findings to the association, along with specific guidelines for four of the association’s accrediting standards (institutional effectiveness, evaluation of general education, program review, and cocurricular educational growth). Appendices include information sources on assessment, an example of how the principles of good practice might be applied in assessing an appreciation of cultural diversity, and suggested alternative methods to initiate assessment.

April 1995 Task Force 2 report (assessment of teaching and learning effectiveness) provides model questions (examples, not mandates) for guiding assessment of teaching effectiveness and student learning.

2. Emphasis on Students/Student Learning

The Task Force 1 Report states that “...our intent with 2c, educational performance standards, was to assure the public that students are actually learning in the programs promised by the institutions (not merely ‘being taught’ or ‘having the opportunity to learn’).”

Task Force 2 Report “supports giving more emphasis to the educational experience of students, anchored in the context of each institution’s mission, as an increasingly more central element of the accrediting process.”

Task Force 2 lists among its general principles for assessing student learning a set of seven minimal requirements: information about students’ entering characteristics; strategies to provide students requisite skills to pursue the curriculum; mechanisms to monitor students’ progress; information about students’ academic achievement, including knowledge and skills; basic retention, graduation, and time-to-degree information; information about students’ post-baccalaureate experiences; and evidence through program review that educational goals are achieved. Questions for Assessment of Student Learning are included in the Task Force 2 Report (e.g., Do students acquire core competencies in writing, mathematics, critical thinking, and technological literacy in their first years of study?). They are provided as “examples, and should not be construed as mandates.”

Quality Assurance Systems (QAS) Worksheet proposes “assessment of learning” as one of three lenses for asking questions about the quality of educational programs. According to Ralph Wolff, Executive Director of WASC, “I’ve tried to develop a format for asking entirely different questions, questions that are learning-based and assessment-based, rather than resource-based.”

3. Kinds of Outcomes Measured/Assessment Processes

Outcomes
Competencies include effective communication, quantitative reasoning, critical thinking, and other competencies judged essential by the institution. (Task Force 2 Guiding Principles I. B5a,b)

From WASC Standards Assessing Institutional Effectiveness: Standard 4 (Educational Programs) B2: “Undergraduate studies ensure, among other outcomes: (a) competence in written and oral communication; (b) quantitative skills; and (c) the habit of critical analysis of data and argument. In addition to these basic abilities and habits of the mind, goals also include an appreciation of cultural diversity.”

Processes
Standard 2C (Institutional Effectiveness) discusses the means for evaluating how well and in what ways an institution is accomplishing its goals. An extensive list of procedures and measures is provided, including such suggestions as structured interviews, focus groups, surveys of recent graduates, change in students’ values as measured by standard instruments or self-reporting, and peer evaluation of educational programs.

4. Emphasis on Teaching

Task Force 2, chaired by Pat Cross, cites the need to more meaningfully explore and connect the relationship between teaching effectiveness and student learning. The emphasis on teaching is evident, but so, too, is the acknowledgment that this is an area which has been overlooked and understudied. The Task Force does offer detailed suggested questions for steering the assessment of teaching and notes that, to assess teaching effectiveness and the facilitation of learning, each institution needs at a minimum: qualified and appropriately sized faculty to sustain the curriculum, adequate physical resources to support instruction, mechanisms in place for systematic review of teaching, a climate/culture which supports good teaching, and mechanisms to recognize and reward good teaching.

5. Emphasis on Institutional Effectiveness/Accountability

The two purposes of accreditation are to provide public assurance with regard to institutional quality and to promote effectiveness and improvement at the institutional level (Task Force 1 Report).

“Task Force 2 agreed with Task Force 1’s suggestions that assessment of student learning be part of both public assurance and institutional purposes, but noted that distinguishing between the two in the case of assessment of student learning and teaching effectiveness may be difficult. Task Force 2 believes that the assessment of teaching and learning should be included in the public function of accreditation. The Guiding Principles found in Task Force 2 Report provide the basis of that function. Nevertheless, Task Force 2 believes the overriding spirit of the implementation of these principles is to support institutions and share and encourage good practices.”

6. Emphasis on Planning by Institution/Institutional Autonomy

Task Force 2 expresses the conviction that, in the final analysis, institutions must be responsible for developing their own assessment programs to support their distinctive missions and to provide the information needed for the continuous improvement of their own educational programs. But WASC does establish minimal requirements for assessing student learning and teaching effectiveness. Task Force 2 developed Guiding Principles for the assessment of teaching and learning effectiveness which include the acknowledgment that “member institutions are in the best position to define their standards for student learning and teaching effectiveness in relationship to their unique circumstance, and member institutions are in the best position to identify measures, strategies, and procedures for assessment of student learning and teaching effectiveness. No single method, strategy, model, or approach is universally appropriate for assessing teaching and learning.”

7. Relationship to Higher Education State Department/Councils/Coordinating Boards

Not evident from materials received.

8. Training of Accrediting Teams

WASC Resource Manual states that “all comprehensive evaluation teams now have at least one member with experience in assessment to review institutional assessment efforts and to work with the evaluation team in searching out evidence in support of institutional assertions of quality. Finally the Commission’s assessment initiative has been one of the major areas of emphasis at all self-study workshops and all training workshops for new evaluators, continuing evaluators and team chairs.” (p. 8).

9. Diversity

Included among the basic outcomes of an undergraduate education is an appreciation of cultural diversity (WASC Standard 4: Educational Programs, 4B2). The institution is responsible for creating and maintaining an environment characterized by concern for and responsiveness to ethnic, socioeconomic, and religious diversity and to the special needs of a diverse student body (Standard 7A: Co-curricular educational growth).

The WASC Resource Manual mentions diversity: “Assessment techniques can be instrumental in determining the quality of student experience at the institution, particularly for different groups, e.g. racial and ethnic minorities, majority students, or returning adult students” (p. 7). Principles of Good Practice include a reporting of assessment results that reflects “the diversity of the student population and authenticity of individual student experiences” (p. 22).

10. Technology

Task Force 2 includes guiding questions that ask whether technology is being effectively utilized in teaching and whether students are acquiring core competencies in technological literacy in their first years of study.

11. Evaluation of Assessment

The Commentary section of the 1995 Task Force 2 Report indicates: “...we recognized that the institutions within the WASC region are already doing a great deal of data collection and assessment activity. Yet, there exists uncertainty and discomfort with current assessment efforts: are we assessing or evaluating the right things; are we using the best methods; are we gaining the maximum value for our assessment activities; can we not learn from one another?”

12. Formative or Summative?

There is a repeated expectation that assessment be used for improving the educational practices of the institution.

13. Who is involved in assessment?

The expectation is that faculty of the institution will be directly involved in assessment efforts, establishing assessment goals and determining what questions should be answered (Resource Manual, p. 6).

Materials Received

  1. Letter in response to our request from Ralph Wolff, Executive Director (dated July 8, 1996)
  2. Statement on the Purposes of Accreditation from Task Force 1
  3. Report of Task Force 2: The Role of Accreditation in the Assessment of Student Learning and Teaching Effectiveness with attached memorandum from Patricia Cross (April 6, 1995)
  4. Quality Assurance Systems Worksheet: A New Way to Ask Questions (DRAFT)
  5. Achieving Institutional Effectiveness Through Assessment: A Resource Manual to Support WASC Institutions (April 1992).

 


 

 

© 2003, National Center for Postsecondary Improvement, headquartered at the
Stanford Institute for Higher Education Research