
solely those contributed in response to the open-ended questions posed to the experts in Round 1. The experts evaluated the revised questionnaire in the same manner as in Round 1.

Stage 4: Finalize competencies. The authors compiled the final list of competencies based on expert consensus. In accordance with other Delphi study practices (Keeney, Hasson, & McKenna, 2011; Weise et al., 2016), consensus was achieved when at least 70% of the experts either agreed or strongly agreed with a statement and the statement's median score was 2.5 or lower. To further strengthen the consensus results, the authors also required that a given statement achieve an interquartile range (IQR) of less than or equal to 1 (Wester & Borders, 2014). (A brief computational sketch of this three-part decision rule appears at the end of this section.) Following Ross et al.'s (2014) suggestion, the authors sent a follow-up email with a final draft of the competencies to each participant. The email contained each of the final 153 statements (Appendix). The authors asked the participants to offer their final remarks about the statements and requested that they respond within one week; no modifications were received.

Data Analysis

Descriptive quantitative analysis. The review of the Delphi process began upon the experts' completion of Round 1 and concluded following Round 2. One part of the analysis involved quantitative feedback: SPSS was used to measure expert consensus. The data included frequency outputs on the percentage of overall responses to each statement, the median, and the IQR. According to Dalkey and Helmer (1963), the median response for each statement is a central statistic in Delphi processes. The IQR is a measure of variability that is less susceptible to outliers than the range. The IQR allowed the authors to further increase objectivity and rigor in the validation process used to determine the final expert statements (Wester & Borders, 2014), and it also allowed the researchers to assess the variability in responses. An IQR of less than or equal to 1 on a 5-point Likert scale indicates low variability in responses, whereas a score greater than 1 signifies a higher range of variability.

Content analysis. Participants' contributed statements were used with the follow-up questionnaire to enhance the level of expert consensus. The researchers conducted a qualitative content analysis (QCA) of these contributions (Weise et al., 2016). The QCA systematically categorized the statements under the study's nine CBPR principles. Using NVivo, the authors coded the experts' statements using the domains of the theoretical coding framework (Schreier, 2012): knowledge, attitudes, skills, and activities. The authors then assigned each frame-coded statement to one of the nine CBPR principles.

Results

The results from Round 1 and Round 2 are presented in the Appendix. A total of 64 statements were omitted between Rounds 1 and 2 because they either did not reach consensus (i.e., did not meet all three criteria) or represented a repeated item. Of the final 153 competencies, 49 relate to the knowledge domain, 43 relate to the attitudes domain, 31 relate to the skills domain, and 25 relate to the activities domain.
These statements were further subcategorized according to the nine CBPR principles (P1–P9) or themes that emerged from the content analysis: 15 statements were related to P1, 12 to P2, 25 to P3, 28 to P4, 18 to P5, 12 to P6 and P7, seven to P8, and 14 to P9. Certain statements did not fit within the nine CBPR principles, and some statements seemed to fit within multiple categories. Some themes that the authors did not expect also emerged.
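To make the consensus decision rule concrete, the following is a minimal Python sketch of the three criteria described above, assuming responses are coded on a 5-point Likert scale where 1 = strongly agree and 5 = strongly disagree. It is illustrative only and is not the authors' actual SPSS procedure; the function name and example ratings are hypothetical, and NumPy's default percentile interpolation may differ slightly from SPSS's.

```python
# Sketch of the three Delphi consensus criteria (assumption: 1 = strongly
# agree ... 5 = strongly disagree). Not the authors' SPSS procedure.
import numpy as np

def meets_consensus(responses, agree_cutoff=2, pct_threshold=0.70,
                    median_cutoff=2.5, iqr_cutoff=1.0):
    """Return True if a statement satisfies all three consensus criteria."""
    r = np.asarray(responses, dtype=float)

    # Criterion 1: at least 70% of experts rated the statement
    # "strongly agree" (1) or "agree" (2).
    pct_agree = np.mean(r <= agree_cutoff)

    # Criterion 2: the median rating is 2.5 or lower.
    median = np.median(r)

    # Criterion 3: the interquartile range (Q3 - Q1) is 1 or less,
    # indicating low variability across experts. NumPy's default linear
    # interpolation may differ slightly from SPSS's percentile method.
    q1, q3 = np.percentile(r, [25, 75])
    iqr = q3 - q1

    return (pct_agree >= pct_threshold
            and median <= median_cutoff
            and iqr <= iqr_cutoff)

# Example: 10 hypothetical expert ratings for one statement.
# 90% agreement, median = 2, IQR = 1, so all three criteria are met.
print(meets_consensus([1, 2, 2, 1, 2, 3, 1, 2, 2, 1]))  # True
```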
