
used interchangeably without the researcher having to worry about the categorization being affected by a significant rater factor. Interchangeability of raters is what justifies the importance of inter-rater reliability" (p. 4). Consistency ensures that the data collected are dependable for practical use. When interrater reliability is poor, interviews scored by overly critical raters (hawks) carry a negative bias against applicants when their scores are compared, within the same applicant pool, with scores from interviews rated by less critical raters (doves). Epstein and Synhorst (2008) described interrater reliability as the degree to which different people rate the same behavior in the same way. Thus, interrater reliability can also be understood as rater consensus.

Purpose of the Present Study

Effectively screening and selecting new entrants is one of the hallmarks that distinguishes a profession. Unfortunately, there is a dearth of available literature on assessment tools for rating admissions interviews. Further, the lack of information on the reliability of the tools that do exist represents a significant deficiency in the professional literature (Johnson & Campbell, 2002). The Professional Disposition Competence Assessment—Revised Admission (PDCA-RA; Freeman & Garner, 2017; Garner et al., 2016) is a global rubric designed to assess applicant dispositions in brief graduate program interviews. The PDCA-RA includes a video training protocol developed to facilitate consistency across raters in scoring admissions interviews on dispositional domains. The purpose of the study was to examine the internal stability and the interrater reliability of the PDCA-RA. The rationale for the study was that no similar rubrics assessing dispositions at admission using training videos were found in published research, suggesting a gap in the literature. Interrater reliability was the key focus of this study because of its importance for rubrics used in settings with multiple raters, a typical scenario in counselor education admissions processes.

Method

Sample

Raters for the study included 70 counselor educators, counseling doctoral students, adjunct faculty, and site supervisors. Counselor educators, doctoral students, and adjunct faculty at two universities were asked to participate in trainings on the new admissions screening tool. Site supervisors providing supervision for practicum and internship students at the two universities were offered training in the PDCA-RA as a component of continued professional development to maintain their supervision status. Training in both instances was free and included professional development credits. Informed consent for participation was obtained from all participants in accordance with ACA ethical codes (ACA, 2014) and IRB oversight at both universities. All participants in the study fully completed the PDCA-RA video-based training.

The mean age of the raters was 43.9 (SD = 11.4, range 24–72). Sixty-four percent identified as female and 36% identified as male. Mean years of experience as a faculty member or field supervisor was 12.2 (SD = 9.7, range 1–50). Ninety-three percent identified as White/Caucasian, 6% as Latino/a, and 1% as other ethnicity. The counselor educators (27% of the sample) were primarily from two CACREP-accredited counseling programs in the Western United States.
Participating universities included one private university and a state research university, both with CACREP-accredited programs. Counselor education doctoral students and adjunct faculty participants comprised 7% of the sample. The doctoral students participated in the training because they were involved as raters of master’s-level
