& Lee, 2014). We followed a series of steps suggested by Downe-Wamboldt (1992), which included selecting the unit of analysis, developing and modifying categories, and coding the data. We used several methods to ensure the trustworthiness of this content analysis study (Kyngäs et al., 2020). For credibility, Um and Woodbridge conducted multiple rounds of review to determine an adequate unit of analysis and tracked all discussions and modifications in detail. For dependability, we calculated interrater reliability coefficients, and Wood provided feedback on the results. Um also ensured confirmability by maintaining audit trails that documented the specific steps and reflections of the project. Finally, to support transferability, we carefully examined other content analysis articles, reflected their core practices in the current study, and described the research process transparently.

Coding Protocol

After completing the quantitative content analysis, we conducted the qualitative content analysis as Downe-Wamboldt (1992) suggested. In doing so, we applied the inductive category development process suggested by Mayring (2000), which features a systematic categorization process of identifying tentative categories, coding units, and extracting themes from the established categories. Specifically, after discussing the research question and the levels of abstraction for categories, Um and Woodbridge determined preliminary categories based on the text of the 18 ICS articles. We practiced coding the data using two articles and then independently coded the remaining articles. Using a constructivist approach, we agreed to add categories as needed. Subsequently, the categories were revised until we reached consensus. In the final step, the established categories were sorted into three themes to identify the latent meaning of the qualitative materials (Cho & Lee, 2014; Forman & Damschroder, 2007). Regarding validity, the congruence between existing conceptual themes and the results of data coding supports external validity, which is regarded as the purpose of content analysis (Downe-Wamboldt, 1992).

Interrater Reliability

We used various indices of interrater reliability to assess the overall congruence between the researchers who performed the qualitative analysis and to ensure trustworthiness. In this study, we used the kappa statistic (κ) suggested by Cohen (1960), which shows the extent of consensus among raters in selecting an article or coding texts (Stemler, 2001); it is defined as κ = (p_o − p_e) / (1 − p_e), where p_o is the observed proportion of agreement and p_e is the proportion of agreement expected by chance. Cohen's kappa has been used extensively across academic fields to measure the degree of agreement between raters. More specifically, the kappa statistic was calculated in two phases: (a) after screening articles and (b) after coding the texts according to the categories. The kappa results between Um and Woodbridge were .68 for screening articles and .71 for coding the texts, both of which are considered substantial (.61–.80; Stemler, 2004).

Results

Results of Quantitative Content Analysis

Based on our electronic search, we identified a total of 18 ICS articles published between 2006 and 2021 in seven selected counseling journals: three ACA division journals, one ACA state-branch journal, one ACES regional journal, and two journals of professional counseling associations (see Table 1). Specifically, two articles were published in CES, three in JMCD, one in JCPS, one in JSGW, three in JPC, seven in IJAC, and one in JCLA.
Across the 18 ICS articles, a total of 35 researchers were identified as authors or co-authors, with six authoring more than one article. According to the researchers' positionality statements in the qualitative articles, eight researchers reported that they were former or current ICSs in the United States. The researchers' institutional affiliations included 22 U.S. universities and two international universities, with three affiliations appearing more than once across the studies.
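For readers who wish to see how the agreement indices reported above can be computed, the following is a minimal sketch of the Cohen's (1960) kappa calculation. The coder labels and variable names are illustrative placeholders, not the study's actual screening decisions or coded categories.

```python
# Minimal sketch of Cohen's (1960) kappa for two raters.
# The screening decisions below are hypothetical, for illustration only.
from collections import Counter

def cohens_kappa(rater_a, rater_b):
    """Return Cohen's kappa for two raters' parallel category labels."""
    assert len(rater_a) == len(rater_b)
    n = len(rater_a)
    # Observed proportion of agreement (p_o).
    p_o = sum(a == b for a, b in zip(rater_a, rater_b)) / n
    # Chance-expected agreement (p_e) from each rater's marginal distribution.
    freq_a, freq_b = Counter(rater_a), Counter(rater_b)
    categories = set(rater_a) | set(rater_b)
    p_e = sum((freq_a[c] / n) * (freq_b[c] / n) for c in categories)
    return (p_o - p_e) / (1 - p_e)

# Hypothetical include/exclude screening decisions by two coders.
coder_1 = ["in", "in", "out", "in", "out", "in", "out", "out", "in", "in"]
coder_2 = ["in", "out", "out", "in", "out", "in", "in", "out", "in", "in"]
print(round(cohens_kappa(coder_1, coder_2), 2))  # 0.58 for this toy data
```

Equivalent results can be obtained with standard statistical packages (e.g., scikit-learn's cohen_kappa_score); the manual version above simply makes the p_o and p_e terms of the formula explicit.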