
The Professional Counselor, Volume 2, Issue 2

Procedures

The method used to select participants was a nonrandomized cluster sampling of two districts from among the 24 public school districts located in Maryland. Once IRB approval was received, letters were mailed over the summer and early in the academic year to school counselors of elementary, middle and high schools within each of the two school districts selected for participation. Inclusion of school counselor supervisors assisted in the distribution and administration of this study and increased return rates of completed program audits. The school counselors of each participating school were provided with the program audit from the ASCA National Model (2005), a statement of rationale for the study and a consent form. The school counselors completed the program audit during the months of June through February, with instructions to retrospectively evaluate implementation of the school counseling program components at the end of the previous (2009-2010) academic year.

Demographic data, graduation rates, attendance and scores from the Maryland State Assessment (MSA) for grades 5, 8 and 10 were obtained from 2009-2010 Maryland Report Cards as retrieved from the Maryland State Department of Education website (http://mdreportcard.org/). The dependent variable of achievement was measured using MSA math and reading scores and was defined operationally as the percentage of students in a given grade not meeting the criterion for passing (i.e., the percentage of students receiving only basic scores), separately for the reading and math components. The MSA is administered to students in grades 3–5 at the elementary level, grades 6–8 at the middle school level and during the 10th grade in high school.
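The operational definition of achievement above (the percentage of students receiving only basic scores on an MSA component) can be sketched as a simple calculation. The category labels and student tallies below are hypothetical illustrations, not values from the MSA data:

```python
# Percentage of students not meeting the passing criterion, i.e. scoring
# only "basic" on an MSA reading or math component.
def percent_basic(score_counts):
    """score_counts: dict mapping score category -> number of students."""
    total = sum(score_counts.values())
    return 100.0 * score_counts.get("basic", 0) / total

# Hypothetical tallies for one grade's reading component (150 students).
reading = {"basic": 30, "proficient": 90, "advanced": 30}
print(percent_basic(reading))  # 20.0
```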
Fifth-grade, 8th-grade and 10th-grade scores (English and algebra) were used for these analyses, reasoning that these scores reflected the cumulative intervention of prolonged exposure to the school's curricular experience. The dependent variable of attendance was defined as the percent of average daily attendance, including ungraded students in special education programs (Maryland State Department of Education, 2010). The dependent variable of graduation rate was defined by MSDE as the percentage of students who received a Maryland high school diploma during the school year. More specifically, the graduation rate is calculated by "dividing the number of high school graduates by the sum of the dropouts for grades 9 through 12, respectively, in consecutive years, plus the number of high school graduates" (MSDE, 2010, para. 1). Since graduation rate and dropout rate in this sample were highly correlated (r = -.752, p < .001, n = 18), graduation rate was used in the analysis, while dropout rate was excluded as redundant.

Analysis

The data from the demographic and program audit forms were coded into an SPSS database. The total audit score was used to determine the level of program implementation. Data marked as "N/A" or "none" were coded as 0 to reflect no attempt at implementation, even though the actual audit reported them separately. "In progress" was coded as 1, "completed" was coded as 2, and "implemented" was coded as 3. The total audit score was the simple sum of scores for the 115 responses. Appropriate Pearson-family correlation coefficients were applied to analyze relationships between the total audit score (program implementation), student-to-counselor ratio and school outcome measures.
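The MSDE graduation-rate formula and the audit coding scheme described above can be sketched in a few lines. The response labels mirror the coding described in this section; the graduate and dropout counts and the example responses are hypothetical:

```python
# Graduation rate per the MSDE definition quoted above:
# graduates / (dropouts in grades 9-12 in consecutive years + graduates).
def graduation_rate(graduates, dropouts_9_to_12):
    return 100.0 * graduates / (graduates + dropouts_9_to_12)

# Audit coding scheme: "N/A"/"none" -> 0, "in progress" -> 1,
# "completed" -> 2, "implemented" -> 3; the total score is the simple sum.
AUDIT_CODES = {"n/a": 0, "none": 0, "in progress": 1,
               "completed": 2, "implemented": 3}

def total_audit_score(responses):
    return sum(AUDIT_CODES[r.lower()] for r in responses)

print(graduation_rate(380, 20))                                   # 95.0
print(total_audit_score(["implemented", "in progress", "none"]))  # 4
```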
Simple linear regression analyses were conducted to determine whether degree of model program implementation was a significant predictor of the student outcomes of achievement scores, attendance and graduation rate at each level: elementary, middle and high school.

Results

Of the 164 schools in the two participating school districts, 115 (70%) returned completed consent, demographic and program audit forms for analysis. Two high schools were eliminated because they were designated alternative schools. Thus, a total participation rate of 113 schools (69%) was obtained. Type I error (α) was set at the .05 level of probability for all analyses. Trends were indicated by probability levels of p < .10. Effect sizes for r or R were interpreted as follows (Cohen, 1988): .10 indicated a small effect, .30 indicated a medium effect, and .50 indicated a large effect. This study provides the first reported analysis of internal consistency of a program audit (ASCA, 2005). Internal consistency was measured using Cronbach's coefficient alpha. Alphas were calculated to determine the level of internal consistency of the total scale and of each of the 17 sections of the program audit on the current total sample (n = 113), and separately for the elementary (n = 78), middle (n = 17) and high school (n = 18) samples. Table 1 provides a summary of
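Cronbach's coefficient alpha, used above to gauge internal consistency, can be computed directly from item-level scores. This is a minimal standard-library sketch with made-up rows, not the study's audit data:

```python
from statistics import variance

def cronbach_alpha(item_scores):
    """item_scores: one row per school, each row a list of item scores.
    alpha = k/(k-1) * (1 - sum of item variances / variance of total scores)."""
    k = len(item_scores[0])              # number of items
    items = list(zip(*item_scores))      # transpose: one column per item
    item_vars = sum(variance(col) for col in items)
    total_var = variance([sum(row) for row in item_scores])
    return (k / (k - 1)) * (1 - item_vars / total_var)

# Three perfectly consistent items (every school answers all items alike)
# yield the maximum alpha of 1.0.
rows = [[1, 1, 1], [2, 2, 2], [3, 3, 3]]
print(round(cronbach_alpha(rows), 3))  # 1.0
```

Sample (n-1) variances are used throughout; alpha is unchanged as long as the same variance convention is applied to items and totals.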
