
When analyzing the second-year and third-year focus group data, if a difference of opinion arose concerning the meaning of a response, we added subjective insight from the group facilitator's perspective on how they interpreted the participants' response in the greater context of the focus group. Additionally, if consensus was not met, we reviewed the original transcript to clarify the meaning of the data and reach consensus (Hill et al., 2005).

Core ideas and categories. After categorizing the data by cohort year under domains, we continued to work in dyads and triads to formulate core ideas from the raw data (grouped by domain and not by cohort year). We then determined the category under which each core idea fell. Each core idea was assigned a category, and similar core ideas were collapsed under a category to best represent the data. The categories served the overall purpose of classifying and determining frequency in the responses from participants within each domain. Categories were clarified through a cross-analysis (Hill et al., 2005) based on the participants' year in the program and whether more than one member of the cohorts identified a domain as an important task or experience. Figure 1 shows the process of categorizing the data from domains to core ideas and then to categories.

In CQR, the number of cases in each category determines the frequency label (Hill et al., 2005). A general label constitutes all, or almost all, cases; typical constitutes more than half of the cases; and variant constitutes less than half of the cases. For this investigation, each focus group represented one case, so frequency labels were defined as general if the category was present in all three cohort groups, typical if the category was present in two groups, and variant if the category emerged in only one cohort's focus group. We reevaluated the domains based on the frequency of the categories within each domain and their relevance to the research question, resulting in the final domain list (see Table 1), which was agreed upon through consensus by the entire research team. Finally, we asked the participants to review the preliminary findings (member check), supporting trustworthiness. Participants supported the findings and did not dispute them.

Results

The findings of the investigation are described using domains (i.e., topics used to group data; Hill et al., 2005) and categories, which were used to conduct a cross-analysis to support the findings and connect them to the research question. Originally, we developed a start list (Miles & Huberman, 1994) of 12 domains based on previous literature and personal experiences (bolded in Table 1). The start list was expanded to 19 domains through the data analysis (not bolded in column 2 of Table 1). Through further examination, a cross-analysis of the core ideas and categories within each domain was conducted (as presented in Figure 1). After the cross-analysis, the auditors reviewed the data and provided feedback. We revisited the 19 domains and identified eight domains as strong due to their relevance to the research question and their support through the clarification of categories (as described in the methods section). The eight domains identified as strong were: (a) teaching, (b) supervision of students, (c) conducting research, (d) attending or presenting at a conference, (e) cohort membership, (f) program design, (g) mentoring, and (h) being perceived as a counselor educator by faculty (column 3 of Table 1; Bell et al., 2012).
The eight domains and cross-analysis results are presented in Table 2, which displays the frequencies of the categories created to increase the level of abstraction in the data analysis (Hill et al., 2005). Not all domains had categories (e.g., cohort membership, program design, and being perceived as a counselor educator by others) because the data within these domains were not diverse enough to create separate categories.

Domain I: Teaching

The teaching domain generated a significant amount of data from the CEDS groups regarding creating and strengthening their identities as counselor educators. Additionally, the teaching domain produced three contributing categories. Teaching is a core component of a doctoral counselor education program (CACREP, 2009). For this particular university's program, teaching begins in the first year and peaks in the second year. Teaching was defined as didactic instruction of master's-level students by doctoral students and occurred in the form of teaching classes, facilitating psycho-educational groups, and providing clinical instruction. The teaching domain was present throughout the first-year, second-year and third-year focus groups with varying emphasis in the categories of (a) teaching experience, (b) contributing factors, and (c) critical interactions with students (see Table 2 for the breakdown and frequencies of the categories within the teaching domain). Therefore, teaching experience during the doctoral program assisted the CEDS in developing their professional identity, specifically their role as counselor educators.
