Conceptual Framework for State Analysis
The five state cases are presented in a process framework encompassing the inputs, processes, outcomes, and impacts of each state's assessment policy. The framework incorporates an analysis of the context for each policy, including its historical and political inputs, its effects, its degree of success, and the policy type. This framework guided our processes for data collection and individual case analysis.
The primary component of the conceptual framework is the examination of the stages in the development of assessment policy. A process model facilitated the description and analysis of the critical factors comprising the formation and enactment of each state's assessment policy. Palumbo illustrated that examining the development of the policy over time is critical because public policy is considered a process of government activity that takes place over many months and years rather than merely a single event, decision, or outcome. He also presented a five-stage model for the process of policy formation, outlining critical events that serve to move policies through each stage.
Our model is adapted from the six stages outlined by Anderson and his colleagues, who presented stages marked by similar critical events but also supplied descriptors for each stage. Anderson's original framework identified six stages in the policy process for any policy domain: (1) problem identification; (2) agenda setting; (3) policy formulation; (4) policy adoption; (5) policy implementation; and (6) policy evaluation. Because the agenda-setting stage was found not to be applicable to the state higher education assessment policy domain, we did not include it in this analysis and interpretation. Our policy process model for higher education therefore identifies five stages in the development of state assessment policies:
The case narratives for this component of the conceptual framework outline the policies in each of the states and describe the events in their development, the dynamics surrounding them, and the influence of important policy actors on the final design. The discussion also examines how each policy moved from formulation to implementation, the obstacles encountered in the process, what evaluation efforts revealed, and what changes were considered.
In the second component of the framework, states were compared along six dimensions that describe the critical content from the case studies. These dimensions flow from the questions outlined in the conceptual framework and seek to provide greater detail on the crucial aspects of the policies. The dimensions also allow for a better cross-case analysis as well as for the examination of the connections between levels of assessment policy. The six dimensions for comparison and analysis are:
In addition to these dimensions, a conclusions section discusses the overall effectiveness of each state policy, which aspects of the policies were successful, and what changes might be appropriate for improvement.
Regional Association Approach
The regional accreditation associations were selected in order to study their policies and standards for the emphases they place on improving student learning and achievement as a requirement for accreditation. The standards and criteria of the associations are analyzed along the following six dimensions:
The discussion of the regional associations reviews each association's focus on assessment for improving learning and teaching, the types of outcomes measured and processes used, and the balance between institutional accountability and institutional autonomy. To understand each association's engagement with assessment, the analysis also includes its relationship to the state higher education departments in its region, its willingness to work with institutions to meet the criteria, and its efforts to evaluate its assessment program.
The third component of the conceptual framework is analytical. It examines the outcomes of the policies in light of their stated objectives. In earlier research, Nettles and Cole (1999a; 1999b) showed how states seek to meet a variety of objectives with their assessment policies, from improving student learning to holding institutions accountable for their effectiveness. The objectives for assessment policy and accreditation standards are significant because they reflect policymakers' perceptions of the academic results and standards of performance that colleges and universities should be achieving. Assessment policy objectives also reveal priorities that have consequences for institutional behaviors and decisions.
Objectives only tell half of the policy story, however. Equally important (and revealing) is an analysis of the intended and unintended outcomes. While a state may have stated objectives for its assessment policy, those objectives are not always achieved, and even when they are, there may be additional outcomes. This distinction between stated policy objectives and outcomes is important, particularly for understanding the dynamics of the policy process at the state level, and it has also been addressed in the policy analysis literature, which distinguishes between intentional analyses, which focus on what was or is intended by a policy, and functional analyses, which focus on what actually happened as a result of a policy.
Our goal is to compare the intended and the actual outcomes of the policies, while also attempting to describe the key factors that led to these outcomes. This component of our framework examines the connection between policy objectives and outcomes. It addresses the following questions:
This policy context element is concerned with the historical, social, and economic inputs related to the policy's origin, such as how awareness of a new policy or of a change in an existing one was created. Also included were political inputs, such as the governance structure for higher education present in a state or the original legislation or political action leading to the development of an assessment policy. The relationships and communications among the governing agencies, the state governments, and the institutions are also key factors in understanding the policy context.
The states and regional accreditation associations have a variety of reasons for adopting assessment policies and standards, which are designed to meet a variety of objectives. The intentions of a policy may include the following: to increase public or fiscal accountability; to improve college teaching or student learning; to promote planning or academic efficiency on campus; to facilitate inter- or intra-state comparisons; and to reduce program duplication.
Although a state may have clearly articulated objectives for its assessment policy, those objectives are not always met in practice. There may be important interactions among the objectives: some may complement one another, while others may work at cross-purposes. A policy's design may include elements that link objectives to successful outcomes, while other elements may face structural or procedural barriers during implementation that undermine their potential. Alternatively, a policy might produce unintended or unexpected outcomes, creating new problems or exacerbating old ones. The distinction between policy objectives and outcomes is significant for understanding the best methods for developing and implementing policy.
Our conclusions reflect on the performance of each policy and its effects on assessment practices among institutions. The intent is to determine whether states have been successful in improving teaching and learning, and to identify the reasons for the outcomes. Identifying the relevant factors allows us to highlight lessons that might be applicable to other states. This analysis considers the interactions between the various policy actors and the differing levels at which policy is effected, e.g., the state, regional, and institutional levels.
 Nettles, Cole, & Sharp (1997).
 Nettles, M. T., & Cole, J. J. K. (1999b). State higher education assessment policy: Findings from second and third years. Stanford, CA: National Center for Postsecondary Improvement.
 Nettles, M. T., & Cole, J. J. K. (1999a). States and public higher education: Review of prior research and implications for case studies. NCPI Deliverable #5130. Palo Alto, CA: National Center for Postsecondary Improvement.
 Dubnick, M., & Bardes, B. (1983). Thinking about public policy: A problem-solving approach. New York: Wiley.
 Lowi, T. (1964). American business, public policy, case studies, and political theory. World Politics, 16, 677-715.
 Palumbo, D. J. (1988). Public policy in America: Government in action. San Diego, CA: Harcourt Brace Jovanovich, Publishers.
 Anderson, J. E., Brady, D. W., Bullock, C. S., III, & Stewart, J., Jr. (1984). Public policy and politics in America (2nd ed.). Monterey, CA: Brooks/Cole.
 Dubnick & Bardes, 1983.
© 2003, National Center for Postsecondary Improvement, headquartered at the
Stanford Institute for Higher Education Research