Evaluation in Education and Human Services

Latest release: December 1, 2013
Series · 45 Books

About this ebook series

When George Bernard Shaw wrote his play, Pygmalion, he could hardly have foreseen the use of the concept of the self-fulfilling prophecy in debates about standardized testing in schools. Still less could he have foreseen that the validity of the concept would be examined many years later in Irish schools. While the primary purpose of the experimental study reported in this book was not to investigate the Pygmalion effect, it is inconceivable that a study of the effects of standardized testing, conceived in the 1960s and planned and executed in the 1970s, would not have been influenced by thinking about teachers' expectations and the influence of test information on the formation of those expectations. While our study did pay special attention to teacher expectations, its scope was much wider. It was planned and carried out in a much broader framework, one in which we set out to examine the impact of a standardized testing program, not just on teachers, but also on school practices, students, and students' parents.
The Effects of Standardized Testing
Book 1 · Dec 2012
Program Evaluation: A Practitioner’s Guide for Trainers and Educators
Book 2 · Dec 2012
Please glance over the questions that follow and read the answers to those that are of interest. Q: What does this manual do? A: This manual guides the user through designing an evaluation. Q: Who can use it? A: Anyone interested or involved in evaluating professional training or inservice education programs. The primary users will be staff members who are doing their own program evaluation, maybe for the first time. (Experienced evaluators or other professional educators can find useful guides and worksheets in it.) Q: If I work through this manual, what will I accomplish? A: You will develop one or more evaluation designs, and perhaps you'll also use the designs to evaluate something to make it better or to document its current value. Q: What is an evaluation design? A: An evaluation design is a conceptual and procedural map for getting important information about training efforts to people who can use it, as shown in the graphic below.
Program Evaluation: A Practitioner’s Guide for Trainers and Educators
Book 3 · Dec 2012
Program Evaluation: A Practitioner’s Guide for Trainers and Educators
Book 4 · Dec 2012
Evaluation Models: Viewpoints on Educational and Human Services Evaluation
Book 6 · Dec 2012
Attempting formally to evaluate something involves the evaluator coming to grips with a number of abstract concepts such as value, merit, worth, growth, criteria, standards, objectives, needs, norms, client, audience, validity, reliability, objectivity, practical significance, accountability, improvement, process, product, formative, summative, costs, impact, information, credibility, and, of course, with the term evaluation itself. To communicate with colleagues and clients, evaluators need to clarify what they mean when they use such terms to denote important concepts central to their work. Moreover, evaluators need to integrate these concepts and their meanings into a coherent framework that guides all aspects of their work. If evaluation is to lay claim to the mantle of a profession, then these conceptualizations of evaluation must lead to the conduct of defensible evaluations. The conceptualization of evaluation can never be a one-time activity, nor can any conceptualization be static. Conceptualizations that guide evaluation work must keep pace with the growth of theory and practice in the field. Further, the design and conduct of any particular study involves a good deal of localized conceptualization.
Systematic Evaluation: A Self-Instructional Guide to Theory and Practice
Book 8 · Dec 2012
Conducting Educational Needs Assessments
Book 10 · Dec 2012
What goals should be addressed by educational programs? What priorities should be assigned to the different goals? What funds should be allocated to each goal? How can quality services be maintained with declining school enrollments and shrinking revenues? What programs could be cut if necessary? The ebb and flow of the student population, the changing needs of our society, and the fluctuation of resources constantly impinge on the education system. Educators must deal with students, communities, and social institutions that are dynamic, resulting in changing needs. It is in the context of attempting to be responsive to these changes, and to the many wishes and needs that schools are asked to address, that needs assessment can be useful. Needs assessment is a process that helps one to identify and examine both values and information. It provides direction for making decisions about programs and resources. It can include such relatively objective procedures as the statistical description and analysis of standardized test data and such subjective procedures as public testimony and values clarification activities. Needs assessment can be a part of community relations, facilities planning and consolidation, program development and evaluation, and resource allocation. Needs assessment thus addresses a broad array of purposes and requires that many different kinds of procedures be available for gathering and analyzing information. This book was written with this wide variation of practices in mind.
Decision-Oriented Educational Research
Book 11 · Dec 2012
Decision-Oriented Educational Research considers a form of educational research that is designed to be directly relevant to the current information requirements of those who are shaping educational policy or managing educational systems. It was written for those who plan to conduct such research, as well as for policy makers and educational administrators who might have such research conducted for them. The book is divided into three main parts. Part I is background. Chapter 1 describes some of the basic themes that are woven throughout subsequent chapters on decision-oriented research. These themes include the importance of taking a systems view of educational research; of understanding the nature of decision and policy processes and how these influence system research; of integrating research activities into the larger system's processes; of the role of management in the research process; of researchers and managers sharing a sense of educational purposes; and of emphasizing system improvement as a basic goal of the research process. Chapter 2 is a discussion of the background of the research activities that form the bases of this book. Our collaboration with the Pittsburgh public school system is described, as are the methods and structure we used to build the case histories of our work with the district. Part II, encompassing chapters 3 through 9, addresses basic generalizations about decision-oriented educational research that we have derived from our experiences.
Instrument Development in the Affective Domain
Book 12 · Jun 2013
Critical Perspectives on the Organization and Improvement of Schooling
Book 13 · Dec 2012
Major "paradigm shifts", replacing one "world view" with another regarding what constitutes appropriate knowledge, do not happen overnight. Centuries usually intervene in the process. Even minor shifts, admitting alternative world views into the domain of legitimate knowledge-producing theory and practice, require decades of controversy, especially, it seems to us, in the field of education. It has only been in the last 20 years or so that the educational research community has begun to accept the "scientific" credibility of the qualitative approaches to inquiry such as participant observation, case study, ethnography, and the like. In fact, these methods, with their long and distinguished philosophical traditions in phenomenology, have really only come into their own within the last decade. The critical perspective on generating and evaluating knowledge and practice, what this book is mostly about, is in many ways a radical departure from both the more traditional quantitative and qualitative perspectives. The traditional approaches, in fact, are far more similar to one another than they are to the critical perspective. This is the case, in our view, for one crucial reason: both the more quantitative, empirical-analytic and qualitative, interpretive traditions share a fundamental epistemological commitment: they both eschew ideology and human interests as explicit components in their paradigms of inquiry. Ideology and human interests, however, are the "bread and butter" of a critical approach to inquiry.
School-Based Evaluation: A Guide for Board Members, Superintendents, Principals, Department Heads, and Teachers
Book 14 · Dec 2012
Evaluating Educational and Social Programs: Guidelines for Proposal Review, Onsite Evaluation, Evaluation Contracts, and Technical Assistance
Book 15 · Dec 2012
During the past two decades, evaluation has come to play an increasingly important role in the operation of educational and social programs by national, state, and local agencies. Mandates by federal funding agencies that the programs they sponsored be evaluated gave impetus to the use of evaluation. Realization that evaluation plays a pivotal role in assuring program quality and effectiveness has maintained the use of evaluation even where mandates have been relaxed. With increased use, indeed institutionalization, of evaluation in many community, state, and national agencies, evaluation has matured as a profession, and new evaluation approaches have been developed to aid in program planning, implementation, monitoring, and improvement. Much has been written about various philosophical and theoretical orientations to evaluation, its relationship to program management, appropriate roles evaluation might play, new and sometimes esoteric evaluation methods, and particular evaluation techniques. Useful as these writings are, relatively little has been written about the simple but enormously important activities which comprise much of the day-to-day work of the program evaluator. This book is focused on some of these more practical aspects that largely determine the extent to which evaluation will prove helpful.
Alternative Approaches to the Assessment of Achievement
Book 16 · Dec 2012
Ingrained for many years in the science of educational assessment were a large number of "truths" about how to make sense out of testing results, artful wisdoms that appear to have held sway largely by force of habit alone. Practitioners and researchers only occasionally agreed about how tests should be designed, and were even further apart when they came to interpreting test responses by any means other than categorically "right" or "wrong." Even the best innovations were painfully slow to be incorporated into practice. The traditional approach to testing was developed to accomplish only two tasks: to provide a ranking of students, or to select relatively small proportions of students for special treatment. In these tasks it was fairly effective, but it is increasingly seen as inadequate for the broader spectrum of issues that educational measurement is now called upon to address. Today the range of questions being asked of educational test data is itself growing by leaps and bounds. Fortunately, to meet this challenge we have available a wide panoply of resource tools for assessment which deserve serious attention. Many of them have exceptionally sophisticated mathematical foundations, and succeed well where older and less versatile techniques fail dismally. Yet no single new tool can conceivably cover the entire arena.
Evaluating Business and Industry Training
Book 17 · Dec 2012
In the abstract, training is seen as valuable by most people in business and industry. However, in the rush of providing training programs "on time" and "within budget," evaluation of training is frequently left behind as a "nice to have" addition, if practical. In addition, the training function itself is left with the dilemma of proving its worth to management without a substantive history of evaluation. This book is designed to provide managers, educators, and trainers alike the opportunity to explore the issues and benefits of evaluating business and industry training. The purpose is to motivate more effective decisions for training investments based on information about the value of training in attaining business goals. Without evaluation, the value of specific training efforts cannot be adequately measured, the value of training investments overall cannot be fully assessed, and the contributions of the training function to the corporation's goals cannot be duly recognized. Articles are grouped into three sections, although many themes appear across sections. The first section establishes the context of training evaluation in a business organization. The second section emphasizes evaluation of training products and services; and the third section discusses costs and benefits of evaluation, and communication and use of evaluation results in decision making. In Section I, the context of training evaluation is established from a variety of perspectives. First, training and training evaluation are discussed in the context of corporate strategic goals.
Evaluation of Continuing Education in the Health Professions
Book 18 · Dec 2012
Phil R. Manning "Can you prove that continuing education really makes any difference?" Over the years, educators concerned with continuing education (CE) for health professionals have either heard or voiced that question in one form or another more than once. But because of the difficulty in measuring the specific effects of a given course, program, or conference, the question has not been answered satisfactorily. Since CE is costly, since CE is now mandated in some states for re-registration, and since its worth has not been proven in formal evaluation research, the pressure to evaluate remains strong. The question can be partially answered by a more careful definition of continuing education, particularly the goals to be achieved by CE. Another part of the answer depends on the development of a stronger commitment to evaluation of CE by its providers. But a significant part of the answer might be provided through the improvement of methods used in evaluation of continuing education for health professionals. To address this last concern, the Development and Demonstration Center in Continuing Education for the Health Professions of the University of Southern California organized and conducted a meeting of academicians and practitioners in evaluation of continuing education. During a three-day period, participants heard formal presentations by five invited speakers and then discussed the application of the state of the art of educational evaluation to problems of evaluation of continuing education for health professionals.
Evaluation in Decision Making: The case of school administration
Book 19 · Dec 2012
This book is about the practice of decision making by school principals and about ways to improve this practice by capitalizing on evaluation dimensions. Much has been written on decision making, but surprisingly little on decision making in the school principalship. Much has also been written on evaluation, as well as on evaluation and decision making, but not much has been written on evaluation in decision making, especially decision making in the principalship. This book presents two messages. One is that decision making in the principalship can be studied and improved, and not only talked about in abstract terms. The other is that evaluation can contribute to the understanding of decision making in the principalship and to the improvement of its practice. In this book we call for the conception of an evaluation-minded principal: a principal who has a wide perspective on the nature of evaluation and its potential benefits, and who is also inclined to use evaluation perceptions and techniques as part of his/her decision-making process. This book was conceived in 1985 with the idea of combining thoughts about educational administration with thoughts about educational evaluation. Studies of decision making in the principalship were already under way. We decided to await the findings, and in the meantime we wrote a first conceptual version of evaluation in decision making. As the studies were completed, we wrote a first empirical version of the same.
Test Policy and the Politics of Opportunity Allocation: The Workplace and the Law
Book 22 · Dec 2012
Bernard R. Gifford In the United States, the standardized test has become one of the major sources of information for reducing uncertainty in the determination of individual merit and in the allocation of merit-based educational, training, and employment opportunities. Most major institutions of higher education require applicants to supplement their records of academic achievements with scores on standardized tests. Similarly, in the workplace, as a condition of employment or assignment to training programs, more and more employers are requiring prospective employees to sit for standardized tests. In short, with increasing frequency and intensity, individual members of the political economy are required to transmit to the opportunity marketplace scores on standardized examinations that purport to be objective measures of their abilities, talents, and potential. In many instances, these test scores are the only signals about their skills that job applicants are permitted to send to prospective employers. THE NATIONAL COMMISSION ON TESTING AND PUBLIC POLICY In view of the importance of these issues to our current national agenda, it was proposed that the Human Rights and Governance and the Education and Culture Programs of the Ford Foundation support the establishment of a "blue ribbon" National Commission on Testing and Public Policy to investigate some of the major problems as well as the untapped opportunities created by recent trends in the use of standardized tests, particularly in the workplace and in schools.
Test Policy and Test Performance: Education, Language, and Culture
Book 23 · Dec 2012
Bernard R. Gifford In the United States, the standardized test has become one of the major sources of information for reducing uncertainty in the determination of individual merit and in the allocation of merit-based educational, training, and employment opportunities. Most major institutions of higher education require applicants to supplement their records of academic achievements with scores on standardized tests. Similarly, in the workplace, as a condition of employment or assignment to training programs, more and more employers are requiring prospective employees to sit for standardized tests. In short, with increasing frequency and intensity, individual members of the political economy are required to transmit to the opportunity marketplace scores on standardized examinations that purport to be objective measures of their abilities, talents, and potential. In many instances, these test scores are the only signals about their skills that job applicants are permitted to send to prospective employers. THE NATIONAL COMMISSION ON TESTING AND PUBLIC POLICY In view of the importance of these issues to our current national agenda, it was proposed that the Human Rights and Governance and the Education and Culture Programs of the Ford Foundation support the establishment of a "blue ribbon" National Commission on Testing and Public Policy to investigate some of the major problems, as well as the untapped opportunities, created by recent trends in the use of standardized tests, particularly in the workplace and in schools.
Creative Ideas For Teaching Evaluation: Activities, Assignments and Resources
Book 24 · Apr 2013
In 1976, the first session on the teaching of evaluation was held at an annual meeting of evaluators. A few hardy souls gathered to exchange ideas on improving the teaching of evaluation. At subsequent annual meetings, these informal sessions attracted more and more participants, eager to talk about common teaching interests and to exchange reading lists, syllabuses, assignments, and paper topics. The sessions were irreverent, innovative, lively, and unpredictable. Eventually the group formalized itself within the American Evaluation Association as the Topical Interest Group in the Teaching of Evaluation (TIG:TOE). As word of TIG:TOE's activities spread, instructors from all over the country clamored for assistance and advice. It became apparent that a handbook was needed, a practical interdisciplinary guide to the teaching of evaluation. Donna M. Mertens, a long-standing member of TIG:TOE and an accomplished teacher of evaluation, volunteered to edit the book, and her skills, sensitivity, and experience in the craft of teaching are apparent throughout.
Constructing Test Items
Book 25 · Dec 2012