As soon as the Center was notified of the grant award for the project described in Chapter 2, staff met with the evaluation specialist to discuss the focus, timing, and tentative cost allocation for the two evaluations. They agreed that although the summative evaluation was 2 years away, plans for both evaluations had to be drawn up at this time because of the need to identify stakeholders and to determine evaluation questions. The evaluation specialist requested that the faculty members named in the proposal as technical and subject matter resource persons be included in the planning stage. Following the procedure outlined in Chapter 5, the team specified the evaluation questions using Worksheets 1 through 5.
As the staff went through the process of developing the evaluation questions, they realized that they needed to become more knowledgeable about the characteristics of the participating institutions, especially the way courses in elementary preservice education were organized. For example, how many levels and sections were there for each course, and how were students and faculty assigned? How much autonomy did faculty have with respect to course content, examinations, and the like? What library and other material resources were available? The evaluator and her staff spent time reviewing catalogues and other available documents to learn more about each campus and to identify knowledgeable informants familiar with issues of interest in planning the evaluation. The evaluator also visited three of the seven branch campuses and held informal conversations with department chairs and faculty to understand the institutional context and issues that the evaluation questions and the data collection needed to take into account.
During these campus visits, the evaluator discovered that interest and participation in the project varied considerably, as did the extent to which deans and department chairs encouraged and facilitated faculty participation. Questions to explore these issues systematically were therefore added to the formative evaluation.
The questions initially selected by the evaluation team for the formative and summative evaluations are shown in Exhibits 13 and 14.
Exhibit 13. Goals, stakeholders, and evaluation questions for a formative evaluation
Goal | Evaluation questions (implementation-related) | Stakeholders |
1. To attract faculty and administrator interest and support for project participation by eligible faculty members | Did all campuses participate? If not, what were the reasons? How was the program publicized? In what way did local administrators encourage (or discourage) participation by eligible faculty members? Were there incentives or rewards for participation? Did applicants and nonapplicants, and program completers and dropouts, differ with respect to personal and work-related characteristics (age, highest degree obtained, ethnicity, years of teaching experience, etc.)? | granting agency, center administrators, project staff |
2. To offer a state-of-the-art faculty development program to improve the preparation of future teachers for elementary mathematics instruction | Were the workshops organized and staffed as planned? Were needed materials available? Were the workshops of high quality (accuracy of information, depth of coverage, etc.)? | granting agency, project sponsor (center administrators), other administrators, project staff |
3. To provide participants with knowledge concerning new concepts, methods, and standards in elementary math education | Was the full range of topics included in the design actually covered? Was there evidence of an increase in knowledge as a result of project participation? | center administrators, project staff |
4. To provide followup and encourage networking through frequent contact among participants during the academic year | Did participants exchange information about their use of new instructional approaches? By e-mail or in other ways? | project staff |
5. To identify problems in carrying out the project during year 1 for the purpose of making changes during year 2 | Did problems arise? Were there too few or too many workshops? Should the workshop format, content, or staffing be modified? Was communication adequate? Was the summer session useful? | granting agency, center administrators, campus administrators, project staff, participants |
Exhibit 14. Goals, stakeholders, and evaluation questions for a summative evaluation
Goal | Evaluation questions | Stakeholders |
1. Changes in instructional practices by participating faculty members | Did faculty who experienced the professional development change their instructional practices? Did this vary by teacher or student characteristics? Did faculty members use the information regarding new standards, materials, and practices? What obstacles prevented implementing changes? What factors facilitated change? | granting agency, project sponsor (center), campus administrators, project staff |
2. Acquisition of knowledge and changes in instructional practices by other (nonparticipating) faculty members | Did participants share knowledge acquired through the project with other faculty? Was it done formally (e.g., at faculty meetings) or informally? | granting agency, project sponsor (center), campus administrators, project staff |
3. Institution-wide change in curriculum and administrative practices | Were changes made in curriculum? Examinations and other requirements? Expenditures for library and other resource materials (computers)? | granting agency, project sponsor (center), campus administrators, project staff, and campus faculty participants |
4. Positive effects on career plans of students taught by participating teachers | Did students become more interested in classwork? More active participants? Did they express interest in teaching math after graduation? Did they plan to use new concepts and techniques? | granting agency, project sponsor (center), campus administrators, project staff, and campus faculty participants |
This step consisted of grouping the questions that survived the prioritizing process in step 1, defining measurable objectives, and determining the best source for obtaining the information needed and the best method for collecting it. For some questions, the choice was simple. If the project reimburses participants for travel and other attendance-related expenses, reimbursement records kept in the project office would yield information about how many participants attended each of the workshops. For most questions, however, there might be more choices and more opportunity to take advantage of the mixed method approach. To ascertain the extent of participants' learning and skill enhancement, the source might be participants, workshop observers, or workshop instructors and other staff. If the choice is made to rely on information provided by the participants themselves, data could be obtained in many different ways: through tests (possibly before and after the completion of the workshop series), work samples, narratives supplied by participants, self-administered questionnaires, in-depth interviews, or focus group sessions. The choice should be made on the basis of methodological considerations (which method will give us the "best" data?) and pragmatic ones (which method will strengthen the evaluation's credibility with stakeholders? which method can the budget accommodate?).
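For teams that keep this planning matrix in electronic form, the mapping from each question to candidate sources and methods can be recorded in a simple data structure. The sketch below is only illustrative: the question, sources, and methods are drawn from Exhibits 13 and 15, but the `QuestionPlan` structure and its field names are bookkeeping assumptions, not part of the handbook's worksheets.

```python
from dataclasses import dataclass

@dataclass
class QuestionPlan:
    """One evaluation question with its candidate data sources and methods."""
    question: str
    sources: list   # where the information could come from
    methods: list   # how the information could be collected
    notes: str = "" # methodological and pragmatic considerations

# Illustrative entry based on formative question 3 (Exhibits 13 and 15).
plan = [
    QuestionPlan(
        question=("Was there evidence of an increase in knowledge "
                  "as a result of project participation?"),
        sources=["participants", "workshop observers", "project director and staff"],
        methods=["participant questionnaire", "observer notes",
                 "participant focus group", "work samples"],
        notes=("Weigh which method gives the best data against credibility "
               "with stakeholders and what the budget can accommodate."),
    ),
]

# Listing questions under each source shows where a single data
# collection activity could answer several questions at once.
for entry in plan:
    for source in entry.sources:
        print(f"{source}: {entry.question}")
```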
Source and method choices for obtaining the answers to all questions in Exhibits 13 and 14 are shown in Exhibits 15 and 16. Examination of these exhibits makes clear that data collected from one source can answer a number of questions. The evaluation design begins to take shape; technical issues, such as sampling decisions, the number of times data should be collected, and the timing of the data collections, need to be addressed at this point. Exhibit 17 summarizes the data collection plan created by the evaluation specialist and her staff for both evaluations.
The formative evaluation must be completed before the end of the first year to provide useful inputs for the year 2 activities. Data to be collected for this evaluation are summarized in the first section of Exhibit 17.
In addition, the 25 year 1 participants will be assigned to one of three focus groups to be convened twice (during month 5 and after the year 1 summer session) to assess the program experience, suggest program modifications, and discuss interest in instructional innovation on their home campus.
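Dividing the 25 year 1 participants into three focus groups of roughly equal size is a small logistical task that can be done by hand or with a few lines of code. The sketch below is a minimal illustration using hypothetical participant names; the seeded shuffle followed by round-robin assignment (yielding groups of 9, 8, and 8) is one reasonable approach, not a procedure prescribed by the project.

```python
import random

def assign_focus_groups(participants, n_groups=3, seed=2024):
    """Shuffle the roster reproducibly, then deal participants out round-robin."""
    roster = list(participants)
    random.Random(seed).shuffle(roster)
    return [roster[i::n_groups] for i in range(n_groups)]

# Hypothetical roster standing in for the 25 year 1 participants.
year1_participants = [f"Participant {i:02d}" for i in range(1, 26)]

for g, members in enumerate(assign_focus_groups(year1_participants), start=1):
    # Each group is convened twice: during month 5 and after the year 1 summer session.
    print(f"Focus group {g} ({len(members)} members): {', '.join(members)}")
```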
Exhibit 15. Evaluation questions, data sources, and data collection methods for a formative evaluation
Evaluation questions | Data sources | Data collection methods |
1. Did all campuses participate? If not, what were the reasons? How was the program publicized? In what way did local administrators encourage (or discourage) participation by eligible faculty members? Were there incentives or rewards for participation? Did applicants and nonapplicants, and program completers and dropouts, differ with respect to personal and work-related characteristics (age, highest degree obtained, ethnicity, years of teaching experience, etc.)? | project records, project director, rosters of eligible applicants on each campus, campus participants | record review, interview with project director, review of rosters of eligible applicants on each campus (including personal characteristics, length of service, etc.), participant focus groups |
2. Were the workshops organized and staffed as planned? Were needed materials available? Were the workshops of high quality (accuracy of information, depth of coverage, etc.)? | project records, correspondence, grant proposal and workshop agendas, project director and other staff | document review (including comparison of the grant proposal with workshop agendas), interviews with project director and other staff |
3. Was the full range of topics included in the design actually covered? Was there evidence of an increase in knowledge as a result of project participation? | project director and staff, participants, observers | participant questionnaire, observer notes, observer focus group, participant focus group, work samples |
4. Did participants exchange information about their use of new instructional approaches? By e-mail or in other ways? | participants, listserv messages | participant focus group, analysis of listserv messages |
5. Did problems arise? Are workshops too few, too many? Should workshop format, content, staffing be modified? Is communication adequate? Was summer session useful? | project director, staff, observers, participants | interview with project director and staff, focus group interview with observers, focus group with participants |
Exhibit 16. Evaluation questions, data sources, and data collection methods for a summative evaluation
Evaluation questions | Data sources | Data collection methods |
1. Did faculty who experienced the professional development change their instructional practices? Did this vary by teacher or student characteristics? Did they use the information regarding new standards, materials, and practices? What obstacles prevented implementing changes? What factors facilitated change? | participants, classroom observers, department chair | focus group with participants, reports of classroom observers, interview with department chair |
2. Did participants share knowledge acquired through the project with other faculty? Was it done formally (e.g., at faculty meetings) or informally? | participants, other faculty, classroom observers, department chair | focus groups with participants, interviews with nonparticipants, reports of classroom observers (nonparticipants' classrooms), interview with department chair |
3. Were changes made in curriculum? Examinations and other requirements? Expenditures for library and other resource materials (computers)? | participants, department chair, dean, budgets and other documents | focus groups with participants, interview with department chair and dean, document review |
4. Did students become more interested in classwork? More active participants? Did they express interest in teaching math after graduation? Did they plan to use new concepts and techniques? | students, participants | self-administered questionnaire to be completed by students, focus group with participants |
Exhibit 17. First data collection plan
Type of data collection | Schedule and amount |
For the formative evaluation:
Interviews with project director | Once a month during year 1 |
Record review | During month 1; update if necessary |
Interviews with project staff | At the end of months 3, 6, 10 |
Workshop observations | Two observers at each workshop and summer session |
Participants' evaluation of workshops | Brief questionnaire to be completed at the end of every workshop |
Participant focus groups | The year 1 participants (n=25) will be assigned to one of three focus groups that meet during month 5 of the school year and after summer session |
Workshop observer focus group | One meeting for all workshop observers during month 11 |
For the summative evaluation:
Classroom observations (participant and nonparticipant classrooms) | Two observations for participants each year (months 4 and 8); one observation for nonparticipants; for the 2-year project, a total of 96 observations (two observers at all times) |
Participant focus groups | The year 2 participants (n=25) will be assigned to one of three focus groups that meet during month 5 of the school year and after summer session |
Classroom observer focus group | One focus group with all classroom observers (4-8) |
Interviews (department chairs, deans, year 1 participants, nonparticipating faculty members) | One personal interview with each respondent during year 2, in most cases towards the end of the year |
Student questionnaires | To be completed during years 1 and 2 |
Document review | During year 1 and year 2 |
The summative evaluation will use relevant data from the formative evaluation; in addition, data will be collected as shown in the second section of Exhibit 17.
The evaluation specialist converted the data collection plan (Exhibit 17) into a timeline showing, for each month of the project's 2 1/2-year life, the data collection, data analysis, and report-writing activities. Staff requirements and costs for these activities were also computed. She also contacted the chairperson of the department of elementary education at each campus to obtain clearance for the planned classroom observations and data collection from students (undergraduates) during years 1 and 2. This exercise showed a need to fine-tune data collection during year 2 so that data analysis could begin by month 18; it also suggested that the scheduled data collection activities and associated data reduction and analysis costs would exceed the evaluation budget by $10,000. Conversations with campus administrators had raised questions about the feasibility of on-campus data collection from students, and the administrators also questioned the need for the large number of scheduled classroom observations. The evaluation staff felt that these observations were an essential component of the evaluation and retained them, but they decided to survey students only once (at the end of year 2). They planned instead to incorporate questions about impact on students in the focus group discussions with participating faculty members after the summer session at the end of year 1. Exhibit 18 shows the final data collection plan for this hypothetical project. It also illustrates how quantitative and qualitative data have been mixed.
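A spreadsheet or a short script can support this kind of timeline-and-budget check. The sketch below is purely illustrative: the activity names and months follow a subset of the rows in Exhibit 18, but the per-activity costs and the budget total are hypothetical placeholders, not the project's actual figures.

```python
from collections import defaultdict

# (activity, months in which it occurs, assumed cost per occurrence in dollars)
# All dollar amounts are hypothetical; the months follow Exhibit 18.
activities = [
    ("Interview with project director", list(range(1, 13)) + [18, 23], 150),
    ("Interview with project staff",    [3, 6, 10, 23],                150),
    ("Participant focus groups",        [5, 10, 17, 22],               500),
    ("Classroom observations",          [4, 8, 16, 20],                800),
    ("Student survey",                  [20],                         1500),
    ("Document review",                 [3, 22],                       250),
]

EVALUATION_BUDGET = 40_000  # hypothetical figure

timeline = defaultdict(list)
total_cost = 0
for name, months, unit_cost in activities:
    for month in months:
        timeline[month].append(name)
        total_cost += unit_cost

# Month-by-month view of scheduled data collection.
for month in sorted(timeline):
    print(f"Month {month:2d}: {', '.join(timeline[month])}")

# Compare the estimated cost with the available evaluation budget.
overrun = total_cost - EVALUATION_BUDGET
status = "over" if overrun > 0 else "under"
print(f"Estimated cost ${total_cost:,}; {status} budget by ${abs(overrun):,}")
```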
Exhibit 18. Final data collection plan
Type of data collection | Schedule | Amount |
1. Interview with project director | Once a month during year 1; twice during year 2 (months 18 and 23) | |
2. Interview with project staff | At the end of months 3, 6, 10 (year 1); at the end of month 23 (year 2) | 16 interviews |
3. Record review | Month 1 plus updates as needed | |
4. Workshop observations | Each workshop, including summer | 22 observations |
5. Participants' evaluation of each workshop | At the end of each workshop and summer | |
6. Participant focus groups | Months 5, 10, 17, 22 | |
7. Workshop observer focus groups | Month 10 | |
8. Classroom observations | Months 4, 8, 16, 20 | |
9. Classroom observations (nonparticipant classrooms) | Months 8 and 16 | |
10. Classroom observers focus group | Months 10 and 22 | observers (4-8) |
11. Interviews with department chairs at 8 branch campuses | Months 9 and 21 | |
12. Interviews with all year 1 participants | Month 21 | |
13. Interviews with deans at 7 branch campuses | Month 21 | |
14. Interviews with 2 nonparticipant faculty members at each campus | Month 21 | |
15. Student survey | Month 20 | |
16. Document review | Months 3 and 22 | |
It should be noted that, due chiefly to budgetary constraints and the priorities set during the planning process, the final evaluation plan did not provide for the systematic collection of some information that might have been important for the overall assessment of the project and for recommendations for replication. For example, there is no provision to examine systematically (by using trained workshop observers, as is done during year 1) the extent to which the year 2 workshops were modified as a result of the formative evaluation. This does not mean, however, that an evaluation question that did not survive the prioritization process cannot be explored in conjunction with the data collection tools specified in Exhibit 18. Thus, the question of workshop modifications and their effectiveness can be explored in the interviews scheduled with project staff and in the self-administered questionnaires and focus groups for year 2 participants. Furthermore, informal interaction among the evaluation staff, the project staff, participants, and others involved in the project can yield valuable information to enrich the evaluation.
Experienced evaluators know that, in hindsight, the prioritization process is often imperfect. Moreover, during the life of any project, unanticipated events are likely to affect project outcomes. Given the flexible nature of qualitative data collection tools, mixed method designs can to some extent accommodate the need for additional information by including narrative and anecdotal material. Some of the ways in which such material can be incorporated in reaching conclusions and recommendations will be discussed in Chapter 7 of this handbook.