The Pipeline Problem in Doctoral Counselor Education and Supervision

Thomas A. Field, William H. Snow, J. Scott Hinkle

 

The hiring of new faculty members in counselor education programs can be complicated by the available pool of qualified graduates with doctoral degrees in counselor education and supervision, as required by the Council for Accreditation of Counseling and Related Educational Programs (CACREP) for core faculty status. A pipeline problem for faculty hiring may exist in regions with fewer doctoral programs. In this study, the researchers examined whether the number of doctoral programs accredited by CACREP is regionally imbalanced. The researchers used an ex post facto study to analyze differences in the number of doctoral programs among the five regions commonly defined by national counselor education associations and organizations. A large and significant difference was found in the number of CACREP-accredited doctoral programs by region, even when population size was statistically controlled. The Western region had by far the fewest doctoral programs. The number of CACREP-accredited master’s programs in a state was a large and significant predictor of the number of CACREP-accredited doctoral programs in a state. State population size, state population density, the number of universities per state, and the number of American Psychological Association–accredited counseling psychology programs were not predictors. Demand may surpass supply of doctoral counselor educators in certain regions, resulting in difficulties with hiring new faculty for some CACREP-accredited programs. An analysis of programs currently in the process of applying for CACREP accreditation suggests that this pipeline problem is likely to continue or even worsen in the near future. Implications for counselor education and supervision are discussed.

Keywords: doctoral programs, master’s programs, counselor education and supervision, CACREP, pipeline problem

 

     Counselor education has experienced substantial growth over the past decade. The number of students enrolled in master’s and doctoral programs accredited by the Council for Accreditation of Counseling and Related Educational Programs (CACREP) has risen sharply. In 2012, there were 36,977 master’s-level students and 2,028 doctoral students in CACREP-accredited programs (CACREP, 2013). By 2018, those numbers had risen to 52,861 master’s students (a 43% increase) and 2,917 doctoral students (a 44% increase; CACREP, 2019b). Counselor education programs have also expanded across the United States, following the merger between CACREP and the Council for Rehabilitation Education (CORE) in 2017 (CACREP, 2017). All 50 states and the District of Columbia now contain counselor education programs accredited by CACREP (CACREP, n.d.), though the number of programs varies substantially across states (see Appendix).

This enrollment growth in CACREP-accredited master’s programs may be influenced by events that generated a greater need for graduates of CACREP-accredited master’s counselor education programs. In 2010, the U.S. Department of Veterans Affairs (VA) published standards that permitted licensed counselors to work independently within its system (T. A. Field, 2017). Subsequently, in 2013, TRICARE, the health insurance program for active-duty and retired military personnel, created a new rule that would permit licensed counselors to join TRICARE panels and independently bill for services (U.S. Department of Defense, 2014). Both rules required candidates to graduate from a CACREP-accredited program as a basis for eligibility. The VA and TRICARE requirement for licensed counselors to graduate from CACREP-accredited programs to qualify for independent practice status was in response to a 2010 report issued by the Institute of Medicine, now the Health and Medicine Division of the National Academies of Sciences, Engineering, and Medicine. The report recommended that professional counselors have “a master’s or higher-level degree in counseling from a program in mental health counseling or clinical mental health counseling that is accredited by CACREP” (p. 10). The additional legitimization of CACREP by the VA and TRICARE increased interest among counselor education programs in seeking and maintaining CACREP accreditation, especially for the master’s specialty of clinical mental health counseling (T. A. Field, 2017). In addition, graduation from a CACREP-accredited program has become a requirement for licensure in certain states (e.g., Ohio) within the past few years, following advocacy efforts by counselor leaders (Lawson et al., 2017). Lawson et al.
(2017) and Mascari and Webber (2013) have proposed that establishing CACREP as the educational standard for licensure would strengthen the profession’s identity and place counseling on par with other master’s-level mental health professions that require graduation from an accredited program for licensure. Graduation from a CACREP-accredited program will also become a requirement for certification by the National Board for Certified Counselors (NBCC) as of 2024 (NBCC, 2018). These changes will likely bolster the value that prospective students place on CACREP accreditation and result in ever-increasing numbers of counseling programs that seek and maintain CACREP accreditation.

The growth in doctoral student enrollment (44%; CACREP, 2019b) may in part reflect the need for individuals with doctoral degrees to serve as counselor educators for these growing master’s programs. It is also likely due to a major change in faculty qualifications. To advance the professionalization of counseling (Lawson, 2016), the 2009 CACREP standards (CACREP, 2008) required all core faculty hired after 2013 to possess doctoral degrees in counselor education and supervision (CES), preferably from CACREP-accredited programs. From 2013 onward, newly appointed faculty with doctorates in counseling psychology or other non-counseling disciplines could no longer qualify for core faculty positions in CACREP-accredited programs. Lawson (2016) noted that prior to this standard, an inequity existed whereby psychologists could be recruited for counselor education faculty positions, whereas counselor educators could not be hired for full-time psychology faculty positions. As a result, the psychology doctorate had a distinct advantage over the CES doctorate when candidates were hired for counseling and psychology faculty positions (Lawson, 2016).

In light of these requirements for new faculty members in counselor education programs to possess doctorates in CES to qualify as core faculty, the hiring of new faculty members may be complicated by the available pool of qualified graduates. Although counselor education programs routinely hire faculty from outside their region, programs in regions with fewer counselor education doctoral programs may have greater difficulty hiring counselor educators than programs in regions with numerous CES doctoral programs. The extent of regional differences in the number of CES doctoral programs has not been quantitatively examined in the extant literature.

Regional Representation of Counselor Education Programs
     Despite the national representation of CACREP-accredited programs and enrollment growth for both master’s and doctoral programs, the number of CACREP-accredited master’s and doctoral programs is not equally distributed and varies substantially by state and by region. Table 1 depicts that the national ratio of CACREP-accredited master’s-to-doctoral counselor education programs is roughly 9:1 (CACREP, n.d.). As seen in Table 1, these ratios vary by region as defined by national counselor education associations and organizations (i.e., North Atlantic, North Central, Rocky Mountain, Southern, Western regions). The North Central, Rocky Mountain, and Southern regions currently have a ratio of master’s-to-doctoral programs that ranges from 3:1 to 5:1. In comparison, the North Atlantic and Western regions have a 9:1 and 18:1 ratio of CACREP-accredited master’s-to-doctoral programs, respectively.

 

Table 1

Regional Representation of CACREP-Accredited Programs (December 2018)

Region            Population     CACREP Doctoral Programs    CACREP Master’s Programs    Ratio of Master’s to Doctoral    % States with Doctoral Programs    Ratio of Population to Master’s Programs    Ratio of Population to Doctoral Programs
North Atlantic    57,780,705      8     75     9:1    36.4      770,409:1      7,222,588:1
North Central     72,251,823     23    104     5:1    69.2      694,729:1      3,141,384:1
Rocky Mountain    14,346,347      8     24     3:1    83.3      597,764:1      1,793,293:1
Southern         119,141,243     45    162     4:1    93.3      735,440:1      2,647,583:1
Western           63,647,316      2     35    18:1    28.6    1,818,495:1     31,823,658:1
Total            327,167,434     86    783     9:1              417,838:1      3,804,272:1

Note. Ratios rounded to closest whole number. Source of CACREP data: https://www.cacrep.org/directory/. Source of U.S. Census data: https://www.census.gov/data/tables/time-series/demo/popest/2010s-national-total.html#par_textimage_2011805803

 

This overall ratio of master’s-to-doctoral programs is likely to increase in the coming years: a total of 63 master’s programs are in the process of applying for CACREP accreditation compared to only five doctoral programs (i.e., a 13:1 ratio), as depicted in the Appendix. This 13:1 ratio exceeds the current 9:1 ratio. As seen in the Appendix, the regions with the highest current ratios (the North Atlantic and Western regions) have equal or greater imbalances among programs currently in the CACREP accreditation process (10:1, and 8 master’s programs to 0 doctoral programs, respectively), meaning that these unequal ratios will likely remain stable for some time to come. Although population size in states and regions may play some role in this unequal distribution, other factors likely contribute to this phenomenon. No previous literature has examined factors contributing to regional differences in the number of CACREP-accredited doctoral programs.

The confluence of (a) greater numbers of CACREP-accredited master’s programs, (b) greater student enrollment numbers in CACREP-accredited master’s programs, (c) CACREP requirements for hiring faculty to meet faculty–student ratios, and (d) the 2013 CACREP requirement for core faculty to possess doctorates in CES may together result in increased demand for hiring doctoral CES graduates to maintain CACREP accreditation. A pipeline problem may result from demand surpassing supply, with programs struggling to hire qualified doctoral graduates. This imbalance of supply and demand appears most exaggerated for faculty with expertise in school counseling (Bernard, 2006; Bodenhorn et al., 2014). Bodenhorn et al. (2014) expressed concern that the 2013 CACREP requirement for core faculty could limit enrollment in master’s programs. Although enrollment continues to climb in CACREP-accredited programs nationally, it is possible that regions with fewer doctoral programs may limit master’s enrollment because of difficulties with hiring additional core faculty. Programs in regions with fewer doctoral programs may struggle to convince candidates from other regions to relocate to their locale.

In the higher education literature, multiple studies have noted that location and proximity to home appear to be fairly consistent reasons why prospective doctoral students, and later assistant professors, choose their doctoral programs and faculty positions, making recruitment from outside a region difficult. Geographic location and proximity to home have been identified as the top-ranked reason for program selection by master’s and doctoral students in counselor education programs (Honderich & Lloyd-Hazlett, 2015) and in higher education doctoral programs (Poock & Love, 2001), and as the second-ranked reason in marriage and family therapy doctoral programs (Hertlein & Lambert-Shute, 2007). Prospective students from underrepresented minority backgrounds also appear to consider community and geographic factors in doctoral program selection (Bersola et al., 2014). In a qualitative study by Linder and Winston Simmons (2015), proximity to family was an important factor in students’ choice of doctoral programs in student affairs. A qualitative study by Ramirez (2013) also found that proximity to home was a strong predictor of Latinx students’ choice of doctoral programs.

Few studies have examined how candidates select faculty positions at the completion of a doctoral CES program. The published studies that do exist have similarly found that location is a primary consideration for new assistant professors when selecting their first faculty position. Magnuson et al. (2001) surveyed new assistant professors in counselor education and found that location was a primary factor for more than half of participants. New assistant professors considered proximity to family, geographical features, and opportunities for a spouse when selecting their first faculty position (Magnuson et al., 2001). In more recent studies in other academic disciplines, geographic location remained a strong factor (though not the most important one) in why academic job seekers chose faculty positions in hospitality (Millar et al., 2009) and accounting (Hunt & Jones, 2015). In academic medicine, geographic location was again a key reason why candidates from underrepresented minority backgrounds selected faculty positions (Peek et al., 2013). It is worth noting that in the Millar et al. (2009) study, international students ranked geographic location as less important than their U.S. counterparts did, though they ranked family ties to region as more important. It is possible that the rise of online positions may make location less of a factor in candidate job selection today than in years past. Follow-up studies are needed to examine the role of geographic location in candidate selection of in-person and online faculty positions.

Although relatively few studies of faculty role selection exist, location appears to be a consistent reason why prospective doctoral students, and later assistant professors, choose their doctoral programs and faculty positions. Programs in regions with few doctoral programs may experience multiple layered challenges when hiring faculty. Master’s students in those regions have fewer options for doctoral study close to home and therefore may need to consider leaving home and family to attend a doctoral program in a different region, or attending a program with online or hybrid delivery options. Although online options are becoming more numerous, studies are needed to evaluate the frequency with which online doctoral graduates secure faculty positions compared with in-person graduates, as this is currently unknown. It is possible that students may elect not to pursue doctoral study if they are unwilling to relocate, which potentially limits the pipeline of future faculty members who are originally from regions with fewer doctoral programs. Furthermore, doctoral graduates from other regions may have originally chosen their doctoral program in part because of its geographical location, which may limit their openness to taking a faculty position in a region with few doctoral programs. Thus, although counselor education programs in regions with fewer doctoral programs may need to hire candidates from outside the region, those candidates may be less willing to move there. This may create difficulties for such programs when seeking to fill open core faculty positions.

Purpose of the Study
     The purpose of this study was to begin to address the gap in what is known regarding the extent of regional differences for the number of CACREP-accredited doctoral programs in CES. To date, regional differences in the number of CACREP-accredited doctoral programs have not been studied. The researchers believed that gaining information about regional differences in the number of doctoral programs would be helpful in understanding the nature and extent of the pipeline problem in CES.

Methodology

The guiding research question was as follows: To what extent do regional differences exist in the number of CACREP-accredited doctoral programs in CES? The researchers identified two hypotheses: (a) differences exist in the number of doctoral programs by region even when controlling for population size, and (b) the number of CACREP-accredited master’s programs is a strong predictor of the number of CACREP-accredited doctoral programs by state. Because counselor education programs must already have held master’s-level CACREP accreditation for a full 8 years in order to apply for doctoral CACREP accreditation (CACREP, 2019a), the researchers hypothesized that the number of doctoral programs in a region would be directly related to the number of CACREP-accredited master’s programs in that region.

For the purposes of this study, the word program refers to a counseling academic unit housed within an academic institution offering one or more CACREP-accredited master’s counseling specialties that include addiction counseling; career counseling; clinical mental health counseling; clinical rehabilitation counseling; college counseling and student affairs; marriage, couple, and family counseling; rehabilitation counseling; or school counseling. These programs also may offer a doctorate in CES. In this study, master’s programs were tallied by program unit rather than specialization tracks within programs to avoid counting multiples for the same master’s program.

The researchers selected an ex post facto quantitative design to compare doctoral programs by region and state. Data were gathered through four sources: (a) CACREP-accredited master’s and doctoral counselor education programs on the CACREP (n.d.) website; (b) listing of population demographics and population density on the U.S. Census Bureau (2020) website; (c) listing of public and private colleges by state from the National Center for Education Statistics (n.d.) website; and (d) listing of counseling psychology doctoral programs accredited by the American Psychological Association (APA; 2019). Data for variables (b) through (d) were collected to ascertain whether the prediction of the number of CACREP-accredited master’s programs within states was complicated by extraneous variables such as state population size, state population density, number of colleges and universities in the state, and number of APA-accredited counseling psychology programs within states. Counseling psychology doctoral programs were identified as a potential predictor variable because doctoral programs in counseling psychology and CES are often considered competitor programs for resources such as faculty lines, as core faculty cannot be shared between APA- and CACREP-accredited programs (CACREP, 2015). Thus, a preponderance of counseling psychology doctoral programs within a state could potentially limit the number of CES doctoral programs within the same state.

The researchers limited the search to CACREP-accredited programs because of the 2013 requirement for CACREP-accredited programs to specifically hire doctoral CES graduates. Programs that are not accredited by CACREP may circumvent a regional pipeline problem by hiring faculty from related disciplines, such as psychology. For this reason, non–CACREP-accredited programs were excluded from the study. A 2018 CACREP report indicated that 405 programs in the United States were CACREP accredited (CACREP, 2019b). The percentage of counselor education programs in the United States that are CACREP accredited is unknown and most likely differs among states and regions. For example, in Ohio, 98% of master’s counselor education programs (52 of 53) were CACREP accredited, with the only non–CACREP-accredited program working toward accreditation. In comparison, only 24% of master’s counselor education programs in California (23 of 96) were CACREP accredited. The large difference in CACREP representation between California and Ohio can partially be attributed to state regulatory issues. In Ohio, candidates for counseling licensure are required to graduate from CACREP-accredited programs. In contrast, California does not require CACREP accreditation and became the last state to license counselors, in 2010 (T. A. Field, 2017). Specialized accreditation appears less common across professions in California. Despite the state having the most licensed marriage and family therapists (LMFTs) in the nation, only 10% (8 of 82) of LMFT preparation programs in California are accredited by the Commission on Accreditation for Marriage and Family Therapy Education (COAMFTE; n.d.). California is an outlier in the Western region, as 95% (38 of 40) of programs in the region’s other states (Alaska, Arizona, Hawai’i, Nevada, Oregon, Washington) were CACREP accredited.

Data Analysis
     Data were entered into a Microsoft Excel worksheet and organized by the following columns: state, number of CACREP-accredited doctoral programs per state, number of CACREP-accredited master’s programs per state, state population size, state population density, number of colleges and universities per state, number of APA-accredited counseling psychology doctoral programs per state, and region. States were organized by regions defined by national counselor education associations and organizations (e.g., North Atlantic region, North Central region). Data from all 50 U.S. states and the District of Columbia were entered into the database.

To test the first and second hypotheses, data were analyzed using SPSS (Meyers et al., 2013). For the first hypothesis, a one-way analysis of covariance (ANCOVA) for independent samples was selected to compare the number of doctoral programs by region, controlling for population size. The required significance level for the one-way ANCOVA was set to .05. The researchers determined the required sample size for .80 power, per Cohen’s (1992) guidelines. Per G*Power 3 (Faul et al., 2007), a one-way independent-samples ANCOVA requires a sample size of 42 states for .80 power at the .05 alpha level.

To test the second hypothesis, a linear multiple regression analysis (random model) was computed to identify predictor variables for the number of CACREP-accredited doctoral programs by state. Five predictor (i.e., independent) variables were entered into the regression equation. These predictor variables were as follows: (a) the number of CACREP-accredited master’s programs per state, (b) state population size, (c) state population density, (d) number of colleges and universities by state, and (e) number of APA-accredited counseling psychology programs per state. As described above, the presence of an APA-accredited counseling psychology program could potentially reduce the likelihood of a university also offering a CACREP-accredited counselor education program at the same institution. Per G*Power 3 (Faul et al., 2007), a linear multiple regression analysis (random model) requires a sample size of 39 states for .80 power at the .05 alpha level.

To further understand trends in the data regarding the regional representations of CACREP-accredited doctoral programs and CACREP-accredited master’s programs, data were also organized graphically via a data visualization platform (Tableau). These data for the number of programs by state are presented in Figures 1 and 2.

Figure 1

Geographical Representation of CACREP-Accredited Doctoral Programs in the United States

Note. To fit in image, Alaska was scaled down and the geographical locations of Alaska and Hawai’i were moved.

Figure 2

Geographical Representation of CACREP-Accredited Master’s Programs in the United States

Note. Data reflect number of total programs rather than number of specialized tracks per state. To fit in image, Alaska was scaled down and the geographical locations of Alaska and Hawai’i were moved.

 

Results

Table 1 and the Appendix display the number of CACREP-accredited doctoral and master’s programs by both region and state. The researchers used these data to test the hypotheses using inferential statistics.

Differences in CACREP-Accredited Doctoral Programs by Region
     The researchers tested the hypothesis that significant differences existed for the number of CACREP-accredited doctoral programs among the five regions, even when the confounding variable of population size was controlled. The sample size of 51 exceeded the requirement for 80% power at the .05 alpha level (i.e., n = 42). Levene’s test for equality of error variances was not significant, indicating that parametric statistics could be performed without adjustments (A. Field, 2013). A one-way independent-samples ANCOVA for differences in number of programs by region was significant—F(4, 45) = 4.64, p < .05, η2 = .38—and represented a large effect size (Cohen, 1988).

The Southern region had the largest number of CACREP-accredited doctoral programs (n = 45). This was nearly twice the number of CACREP-accredited doctoral programs of the second-ranked region (North Central, n = 23), and more CACREP-accredited doctoral programs than the other four regions combined (n = 41). Compared to the Southern and North Central regions, the other three regions—namely the North Atlantic, Rocky Mountain, and Western regions—had substantially fewer CACREP-accredited doctoral programs. The North Atlantic and Rocky Mountain regions had eight CACREP-accredited doctoral programs each, and the Western region had two. The Southern region had the highest percentage of states with CACREP-accredited doctoral programs at 93% (14 of 15 states).

The number of CACREP-accredited doctoral programs per state was not equally distributed by region. Figure 1 and the Appendix show that in the Southern region, 14 of 15 states had CACREP-accredited doctoral programs, with two states having an especially high number of doctoral programs (i.e., Virginia = 9, Texas = 8). Other Southern region states (i.e., Maryland and South Carolina) had only a single doctoral program. In the North Atlantic region, counselor education programs were concentrated within specific geographic locations. The eight doctoral programs in the region were located within three states (i.e., New Jersey, New York, Pennsylvania) and the District of Columbia. The remaining seven states, including the entirety of New England (i.e., Connecticut, Maine, Massachusetts, New Hampshire, Rhode Island, Vermont), had no CACREP-accredited doctoral programs.

To better understand the relationship between doctoral programs and population size, ratios were computed comparing the population to doctoral and master’s programs by region. Table 1 depicts the ratio of population to doctoral programs by region. Upon further inspection of the data, it appears that population size could partially explain the number of doctoral programs in a region. For example, the Southern region had by far the greatest number of CACREP-accredited doctoral programs at 45, yet the proportion of programs was roughly equivalent for four of the five regions when considering the population size of those regions. As seen in Table 1, the population of the Southern region was 119 million people, 1.65 times the size of the next largest region, the North Central region (72 million). Accordingly, the number of doctoral programs in the Southern region was nearly double the number in the North Central region (45 vs. 23). When examining the ratio of population to CACREP-accredited doctoral programs, the Southern region appears to have roughly equivalent representation (2.6 million per doctoral program) to two other regions, the Rocky Mountain (1.8 million) and North Central (3.1 million) regions.

The Western region had the largest ratio of population to doctoral programs, at 31.8 million people per doctoral program. This ratio was more than four times the next largest ratio (North Atlantic, 7.2 million per doctoral program) and at least 10 times the ratios of the other three regions (North Central, 3.1 million; Southern, 2.6 million; Rocky Mountain, 1.8 million). The Western region was therefore the most underrepresented in the number of CES doctoral programs per inhabitant.
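The regional ratios discussed above follow directly from the counts in Table 1. As a quick check, the short Python sketch below recomputes the population-to-program ratios from the Table 1 figures (the published ratios are rounded to the nearest whole number; truncated integer division happens to reproduce them here):

```python
# Regional figures from Table 1: population, CACREP-accredited doctoral
# programs, and CACREP-accredited master's programs (December 2018).
regions = {
    "North Atlantic": (57_780_705, 8, 75),
    "North Central": (72_251_823, 23, 104),
    "Rocky Mountain": (14_346_347, 8, 24),
    "Southern": (119_141_243, 45, 162),
    "Western": (63_647_316, 2, 35),
}

# Population-to-program ratios, truncated to whole inhabitants.
ratios = {
    name: (pop // doctoral, pop // masters)
    for name, (pop, doctoral, masters) in regions.items()
}

for name, (per_doctoral, per_masters) in ratios.items():
    print(f"{name}: {per_doctoral:,} per doctoral program, "
          f"{per_masters:,} per master's program")
```

The Western figure of 31,823,658 inhabitants per doctoral program falls out immediately from 63,647,316 people divided by two programs.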

The Relationship Between CACREP-Accredited Doctoral and Master’s Programs
     A linear multiple regression (random model) was computed to better understand the relationship between the number of CACREP-accredited master’s and doctoral programs per state. Other predictor variables included state population size, state population density, number of colleges and universities per state, and number of APA-accredited counseling psychology programs per state. The sample size of 51 exceeded the requirement for 80% power at the .05 alpha level (i.e., n = 39). Data conformed to homoscedasticity and did not show multicollinearity (A. Field, 2013). Residuals (errors) were equally distributed, and no significant outliers were found (A. Field, 2013). Because these assumptions were met, parametric statistics could be performed without adjustments (A. Field, 2013). The linear multiple regression model significantly predicted the number of CACREP-accredited doctoral programs: F(5, 44) = 18.55, p < .05, R2 = .68. This represented a large effect size. Notably, the number of CACREP-accredited master’s programs was the only significant predictor, with a standardized β coefficient of .85 (p < .05). The other variables were not significant predictors and did not contribute to the multiple regression model. Thus, the model, driven by the number of CACREP-accredited master’s programs, accounted for 68% of the variance in doctoral programs by state.

Data in Table 1 help to elucidate the relationship between CACREP-accredited doctoral and master’s programs. The Southern region had by far the largest number of CACREP-accredited master’s programs (n = 162) and doctoral programs (n = 45). The second largest number of master’s programs was in the region with the second largest number of doctoral programs (North Central; 104 and 23, respectively). Some differences between doctoral and master’s program representation were found; the Rocky Mountain region had the smallest number of master’s programs at 24, roughly one third the number in the North Atlantic region, despite the two regions having the same number of doctoral programs (n = 8).

Figures 1 and 2 further clarify that although a relationship exists between the number of CACREP-accredited doctoral and master’s programs, there are important regional differences. In the West, several states had a relatively high number of master’s programs (e.g., California, Oregon, Washington) despite having one or even zero doctoral programs. In the North Atlantic region, New York and Pennsylvania had among the highest numbers of master’s programs by state, though relatively few doctoral programs. There were no CACREP-accredited doctoral programs and relatively few CACREP-accredited master’s programs in the entirety of New England (i.e., Connecticut, Maine, Massachusetts, New Hampshire, Rhode Island, Vermont), which is noteworthy because the area is known for its high number of colleges and universities as well as its high population density.

When reviewing the ratios of population to master’s programs in Table 1, the Western region showed far smaller representation of master’s programs than other regions, with 1.8 million inhabitants per master’s program. This ratio was more than double those of the other four regions, which themselves had fairly equivalent ratios of inhabitants per master’s program, ranging from roughly 598,000 to 770,000.

Discussion

The results indicate a large and significant difference (p < .05, η2 = .38) in the number of CACREP-accredited doctoral programs by region when controlling for the confounding variable of population size. The number of CACREP-accredited master’s programs per state is also a large and significant predictor (standardized β = .85, p < .05) for the number of CACREP-accredited doctoral programs in a state. Other variables, such as state population size, state population density, number of colleges and universities per state, and number of APA-accredited counseling psychology programs, did not predict the number of CACREP-accredited doctoral programs in a state.

The Western region had by far the fewest CACREP-accredited doctoral programs, the smallest percentage of states with CACREP-accredited doctoral programs, the largest ratio of CACREP-accredited master’s-to-doctoral programs, and the largest ratio of population size to both master’s and doctoral CACREP-accredited programs. With only two CACREP-accredited doctoral programs across seven states, the Western region may experience a significant pipeline problem. It is worth noting that the number of CACREP-accredited master’s programs in the Western region has doubled since 2009, from 16 to 35 programs (CACREP, n.d.). During the same period, the region has not gained any new CACREP-accredited doctoral programs. From an analysis of in-process programs, the Western region stands to gain further CACREP-accredited master’s programs but no CACREP-accredited doctoral programs in the near future, exacerbating any existing pipeline problem. In addition, the North Atlantic region has a relative lack of doctoral programs as compared to master’s programs. In the ensuing section, potential reasons for the lack of CACREP-accredited doctoral programs in the Western and North Atlantic regions, along with the potential impact of this problem, are discussed.

CES Doctoral Programs in the Western Region
     The Western state of California was an early developer and adopter of counselor education accreditation standards, yet today it has few CACREP-accredited master’s programs relative to its population size and has never had a CACREP-accredited doctoral program. The California story is worth exploring in greater depth because it illustrates a further barrier to establishing doctoral CACREP programs in the Western region.

California is a major outlier in this study in that only 24% (n = 23) of 96 master’s degree programs in counseling (i.e., clinical mental health counseling; marriage, couple, and family counseling; school counseling) were CACREP accredited. One explanation for this low number is that California did not grant licenses to professional counselors until 2010 (T. A. Field, 2017). As mentioned earlier, licensure requirements (especially those that require CACREP accreditation) can increase the number of CACREP-accredited programs in a state, with Ohio being a notable example. It is also noteworthy that despite California’s long history of granting licenses to marriage and family therapists, COAMFTE (n.d.) was not a strong accreditation competitor to CACREP. As of 2019, only 10% (n = 8) of 82 MFT licensable programs were COAMFTE accredited.

CES Doctoral Programs in the North Atlantic Region
     The North Atlantic region had only eight CACREP-accredited doctoral programs, which were concentrated in three states (i.e., New Jersey, New York, District of Columbia). No CACREP-accredited doctoral programs were located in New England (i.e., Connecticut, Maine, Massachusetts, New Hampshire, Rhode Island, Vermont). The North Atlantic region has several densely populated states, with New York and Pennsylvania being the fourth and fifth most populated states in the United States. The North Atlantic region also had a fairly large number of CACREP-accredited master’s programs (n = 75). As seen in Table 1, the North Atlantic region had roughly the same ratio of CACREP-accredited master’s programs to population size as the Southern region, yet had roughly three times as many inhabitants per CACREP-accredited doctoral program as the Southern region. The North Atlantic region also had more than double the number of master’s programs as the Western region, despite having a smaller population overall. Considering this larger presence of CACREP-accredited master’s programs, the North Atlantic’s lack of doctoral programs is somewhat surprising.

The low number of CACREP-accredited doctoral programs in the North Atlantic region may be explained in part by the historical presence of APA-accredited counseling psychology doctoral programs in the region. Although not a predictor for the number of CES doctoral programs nationally, APA-accredited counseling psychology programs appear to be a potential barrier to CES doctoral program establishment in New England especially. Massachusetts had the second largest number of APA-accredited counseling psychology doctoral programs (n = 6), behind only Texas (n = 7; APA, 2019). As stated previously, university administrators may perceive doctoral programs in counseling psychology and CES as competitor programs for faculty lines, as core faculty cannot be shared between APA- and CACREP-accredited programs (CACREP, 2015). The large number of counseling psychology doctoral programs in Massachusetts may help explain why there are no CES doctoral programs in New England.

CES Doctoral Programs Across Regions
     Although the Western and North Atlantic regions had the greatest degree of pipeline problem, it is possible that all five regions will be impacted by the pipeline problem in the near future. An analysis of programs currently in the process of applying for CACREP accreditation (designated “in process”) is presented in the Appendix. Across regions, a total of 63 master’s programs were in process, compared to only five doctoral programs. This 12.6:1 ratio is far above the current ratios of the Southern, North Central, and Rocky Mountain regions and is similar to the current ratio for the North Atlantic region. All regions except the Rocky Mountain region appear to be impacted. The Southern region had 31 in-process master’s programs and three in-process doctoral programs (10:1 ratio). The North Central region had 13 in-process master’s programs and one in-process doctoral program (13:1). The North Atlantic region had 10 in-process master’s programs and one in-process doctoral program (10:1). The Western region had eight in-process master’s programs and zero in-process doctoral programs (8:0). The Rocky Mountain region seemed least impacted, with only one in-process master’s program and zero in-process doctoral programs (1:0). Any existing pipeline problem for doctoral-level counselor education faculty therefore seems likely to continue if not worsen in the coming years.
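The in-process ratios reported above follow directly from the Appendix counts. As an illustrative arithmetic check (not part of the study’s methodology), the overall 12.6:1 figure and the per-region ratios can be reproduced as follows:

```python
# Illustrative check of the in-process master's-to-doctoral ratios reported
# in the text, using the counts from the Appendix (December 2018).
in_process = {  # region: (master's programs in process, doctoral programs in process)
    "Southern": (31, 3),
    "North Central": (13, 1),
    "North Atlantic": (10, 1),
    "Western": (8, 0),
    "Rocky Mountain": (1, 0),
}

total_masters = sum(m for m, d in in_process.values())   # 63 in-process master's programs
total_doctoral = sum(d for m, d in in_process.values())  # 5 in-process doctoral programs
overall_ratio = total_masters / total_doctoral           # 12.6

print(f"Overall in-process ratio: {overall_ratio:.1f}:1")
for region, (m, d) in in_process.items():
    # A numeric ratio is undefined when a region has zero in-process
    # doctoral programs, so those regions are reported as m:0.
    label = f"{m / d:.0f}:1" if d else f"{m}:0"
    print(f"{region}: {label}")
```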

State Laws and Rules Prohibiting Doctoral Programs
     In this study, the number of CACREP-accredited master’s programs was a strong predictor of the number of CACREP-accredited doctoral programs within a state. This relationship is far weaker in the Western region, however, because of state laws and rules that restrict doctoral study at public universities. The California and Washington state university systems limit doctoral programs to their research-intensive universities. The California Master Plan (California State Department of Education, 1960; Douglass, 2000) restricts doctoral programs to the University of California system and specifically does not permit Doctor of Philosophy degrees to be offered at California State University campuses. This is important because all of California’s counselor education programs at public universities are operated within the California State University system, with no programs offered within the research-intensive University of California system.

A similar dynamic exists within Washington’s state educational system, whereby only the research-intensive universities (i.e., University of Washington, Washington State University) may offer doctoral degrees. As in California, master’s counselor education programs within Washington’s public universities are operated only at the teaching institutions (e.g., Central Washington University, Eastern Washington University, Western Washington University), and no programs are offered at the research-intensive state universities. Notably, one of the first CACREP-accredited doctoral programs was housed at the University of Washington; the program closed and lost its CACREP accreditation in 1988 (CACREP, n.d.).

State political dynamics are a significant barrier to starting new doctoral programs within the Western states’ public university systems. Because of state laws and regulations, the real need generated by the large number of master’s counseling programs at teaching-focused, less research-intensive state universities in California and Washington has little influence on doctoral program development. No new state university doctoral programs are on the horizon or even under consideration. Instead, new doctoral programs in Western states will likely only start at private universities. Unfortunately, these institutions tend to have higher tuition without the advantage of the graduate student funding that their state counterparts generally offer.

Pace (2016) found that institution type (i.e., public vs. private) and institutional enrollment numbers were predictors of whether an institution had a CACREP-accredited doctoral program. As of 2018, the majority of doctoral programs were housed in public institutions (n = 64), with 19 programs at private institutions (CACREP, n.d.). Of these 19 programs at private institutions, 12 (63%) were at professional or master’s-level universities according to Carnegie classification (The Carnegie Classification of Institutions of Higher Education, 2019). Programs within private colleges and universities represented more than half of all programs (12 of 21 programs; 57%) at non–research-intensive universities (i.e., professional or master’s-level classifications). Private universities with professional and master’s-level classifications that develop doctoral CES programs seem less likely than research institutions to have the financial support to offer scholarships and tuition waivers to students.

Student funding has historically been valued as a core principle of doctoral education. It often provides doctoral students with full-time opportunities to shadow faculty members and develop research self-efficacy (Lambie & Vaccaro, 2011), which is considered the primary focus of doctoral-level counselor education (Adkison-Bradley, 2013). Program faculty in these new private doctoral programs may face heavier workloads given the lack of student funding (e.g., increased teaching and advising loads) and limited support for faculty research and scholarship. This could limit the research training available to doctoral students at these new institutions, which may hinder their preparation for the scholarly work required of future faculty members. If unaddressed, these programs would not contribute to meeting the growing need for qualified doctoral counselor educators in the Western region, and the pipeline problem would continue.

For example, in Washington, several private universities with CACREP-accredited master’s programs (i.e., Antioch University-Seattle, City University of Seattle, Seattle Pacific University) have recently established doctoral programs in CES. At these three institutions, all new faculty hired after 2013 completed doctoral degrees in CES at institutions outside of the Western region, with the majority of those doctorates completed in the Southern region. Although not CACREP accredited at the time of writing, these new doctoral programs appear to be a potential solution to the pipeline problem in the Western region. However, it is worth noting that these three private universities are teaching institutions rather than research institutions, and such programs may need guidance regarding how to include sufficient research training in the doctoral curriculum if the program cannot offer funding to doctoral students and the faculty are not given support to generate faculty-led research and scholarship.

Impact of Doctoral Programs on Regional Professional Identity
     Authors such as Lawson (2016) and Mascari and Webber (2013) have argued that CACREP accreditation strengthens the professional identity of a program and of students within the program. It is unknown whether the number of CACREP-accredited master’s and doctoral programs within a region also contributes to professional identity at the regional level; no published studies have comprehensively examined this question. Anecdotally, a lack of CACREP-accredited doctoral programs within a region appears to have several potential effects. CACREP-accredited master’s counseling programs must recruit new faculty hires from outside of the region if there is an insufficient number of candidates available from established doctoral programs within the region. Because the Western region and New England states have a dearth of CACREP-accredited doctoral programs, counselor education programs in those states may need to recruit from outside of their region to find suitable candidates. As mentioned previously, this pipeline problem can make recruiting difficult, as candidates strongly weigh location and closeness to home when selecting doctoral programs (Hertlein & Lambert-Shute, 2007; Honderich & Lloyd-Hazlett, 2015; Poock & Love, 2001) and faculty positions (Hunt & Jones, 2015; Magnuson et al., 2001; Millar et al., 2009). Location appears to be a particularly important consideration for candidates from underrepresented minority backgrounds (Bersola et al., 2014; Linder & Winston Simmons, 2015; Peek et al., 2013; Ramirez, 2013). As a result, prospective doctoral students and faculty members may be unwilling to study or work at a program outside of their home region.

Online CACREP-accredited doctoral programs may create pathways for more students in regions lacking doctoral programs to pursue and attain a doctorate in counselor education, which may reduce any existing pipeline problem. Studies are needed to compare hiring rates of online versus in-person program graduates to ascertain whether graduates of online programs are filling needed faculty positions. Hiring school counselor educators is particularly challenging (Bernard, 2006), and studies are needed that examine the proportion of school counselor educators who graduate from online counseling programs.

Counselor education programs are continually seeking to increase the diversity of their faculty (Cartwright et al., 2018; Holcomb-McCoy & Bradley, 2003; Shin et al., 2011; Stadler et al., 2006). Because prospective doctoral students from minority backgrounds may be more inclined to restrict their applications to doctoral programs within close proximity to their current location (Bersola et al., 2014; Linder & Winston Simmons, 2015; Ramirez, 2013), online doctoral programs appear to be a viable option for students from culturally diverse backgrounds who live in regions with few in-person doctoral programs. Data are needed to determine whether online graduates are (a) filling open faculty vacancies in the Western region and New England states, (b) filling school counselor educator positions, and (c) contributing to faculty diversity.

This study represents the first analysis of regional differences in the number of CACREP-accredited doctoral CES programs. Because this was an ex post facto study, the results are nonexperimental and thus subject to error from the lack of experimental control and randomization. To mitigate the potential for error, the confounding variable of population size was included in our inferential statistical analyses. Variables such as the demand for counselor education program entry are also important to examine in future research to ascertain whether programs are turning away students because of capacity issues related to faculty hiring. Such studies could appraise application numbers, enrollment numbers, and the program’s ideal yield should capacity not be an issue. Furthermore, a more detailed analysis of the relationship between a state’s educational requirements for licensure (i.e., whether graduates must complete a CACREP-accredited program) and the demand for doctoral counselor educators within a state is important. Lawson et al. (2017) have proposed that advocating for CACREP accreditation as the educational requirement for counselor licensure is important to the advancement of professionalization and professional identity. It is possible that the lack of CACREP-accredited doctoral programs in a state may be a barrier to establishing CACREP as the educational standard for licensure.

Conclusion

A large and statistically significant difference exists in the number of CACREP-accredited doctoral programs by region, even when controlling for population size. The Western region has by far the fewest doctoral programs and thus the greatest need for new doctoral programs. The lack of doctoral programs in the Western region and New England states may present a pipeline problem. The number of CACREP-accredited master’s programs has doubled in the Western region since 2009 while the number of doctoral programs has remained the same. As a result, CACREP-accredited master’s programs in the Western region and New England states may struggle to recruit qualified core faculty from in-region doctoral programs. The ratio of in-process master’s versus doctoral programs suggests that any existing pipeline issue will continue if not worsen in the coming years.

Even though the number of CACREP-accredited master’s programs within a state appears to be a strong independent predictor of CACREP-accredited doctoral programs, new doctoral programs may be difficult to establish because of state regulatory issues, the existence of competing doctoral programs (e.g., counseling psychology), or the lack of research support infrastructure (e.g., smaller teaching loads, funding for doctoral students).

In addition to small, private, teaching-focused institutions that seem to be developing doctoral programs in regions with few CACREP-accredited doctoral programs (e.g., Antioch University-Seattle, City University of Seattle, and Seattle Pacific University in the Western region), online CACREP-accredited doctoral CES programs are a potential solution for training prospective doctoral students in regions with few in-person doctoral programs. Online programs may also help to address any existing specific pipeline issues regarding faculty with school counseling specialties and faculty from culturally diverse backgrounds. Future studies are needed to determine whether online CACREP-accredited doctoral programs are helping master’s programs to address these recruitment needs. Additional follow-up studies are also needed to examine the role of geographic location in candidate selection of in-person and online faculty positions, as it is possible that geographic location has less prominence in candidate selection of faculty roles today compared to several decades ago when prior studies in counselor education were conducted (e.g., Magnuson et al., 2001).

 

Conflict of Interest and Funding Disclosure
The authors reported no conflict of interest
or funding contributions for the development
of this manuscript.

 

References

Adkison-Bradley, C. (2013). Counselor education and supervision: The development of the CACREP doctoral standards. Journal of Counseling and Development, 91(1), 44–49. https://doi.org/10.1002/j.1556-6676.2013.00069.x

Altekruse, M. K., & Wittmer, J. (1991). Accreditation in counselor education. In F. O. Bradley (Ed.), Credentialing in counseling (pp. 53–67). Association for Counselor Education and Supervision.

American Psychological Association. (2019). APA-accredited programs. https://www.apa.org/ed/accreditation/programs/using-database

Bernard, J. M. (2006). Counselor education and counseling psychology: Where are the jobs? Counselor Education and Supervision, 46(1), 68–80. https://doi.org/10.1002/j.1556-6978.2006.tb00013.x

Bersola, S. H., Stolzenberg, E. B., Love, J., & Fosnacht, K. (2014). Understanding admitted doctoral students’ institutional choices: Student experiences versus faculty and staff perceptions. American Journal of Education, 120(4), 515–543. https://doi.org/10.1086/676923

Bodenhorn, N., Hartig, N., Ghoston, M. R., Graham, J., Lile, J. J., Sackett, C., & Farmer, L. B. (2014). Counselor education faculty positions: Requirements and preferences in CESNET announcements 2005-2009. Journal of Counselor Preparation and Supervision, 6(1), 1–16.

California State Department of Education. (1960). A master plan for higher education in California: 1960-1975. https://www.ucop.edu/acadinit/mastplan/MasterPlan1960.pdf

The Carnegie Classification of Institutions of Higher Education. (2019). Basic classification description. http://carnegieclassifications.iu.edu/classification_descriptions/basic.php

Cartwright, A. D., Avent Harris, J. R., Munsey, R. B., & Lloyd-Hazlett, J. (2018). Interview experiences and diversity concerns of counselor education faculty from underrepresented groups. Counselor Education and Supervision, 57(2), 132–146. https://doi.org/10.1002/ceas.12098

Cohen, J. (1988). Statistical power analysis for the behavioral sciences (2nd ed.). Erlbaum.

Cohen, J. (1992). A power primer. Psychological Bulletin, 112(1), 155–159. https://doi.org/10.1037/0033-2909.112.1.155

Commission on Accreditation for Marriage and Family Therapy Education. (n.d.). Directory of COAMFTE accredited programs. Retrieved January 20, 2020, from https://coamfte.org/COAMFTE/Directory_of_Accredited_Programs/MFT_Training_Programs.aspx

Council for Accreditation of Counseling and Related Educational Programs. (n.d.). Find a program. Retrieved January 20, 2020, from https://www.cacrep.org/directory

Council for Accreditation of Counseling and Related Educational Programs. (2008). CACREP 2009 standards. http://www.cacrep.org/wp-content/uploads/2017/07/2009-Standards.pdf

Council for Accreditation of Counseling and Related Educational Programs. (2013). 2012 annual report. http://www.cacrep.org/wp-content/uploads/2019/05/CACREP-2012-Annual-Report.pdf

Council for Accreditation of Counseling and Related Educational Programs. (2015). CACREP 2016 standards. http://www.cacrep.org/wp-content/uploads/2017/08/2016-Standards-with-citations.pdf

Council for Accreditation of Counseling and Related Educational Programs. (2017). CACREP/CORE merger information. https://www.cacrep.org/home/cacrepcore-updates

Council for Accreditation of Counseling and Related Educational Programs. (2019a). CACREP policy document. http://www.cacrep.org/wp-content/uploads/2019/05/2016-Policy-Document-January-2019-revision.pdf

Council for Accreditation of Counseling and Related Educational Programs. (2019b). Annual report 2018. http://www.cacrep.org/wp-content/uploads/2019/05/CACREP-2018-Annual-Report.pdf

Douglass, J. A. (2000). The California idea and American higher education: 1850 to the 1960 master plan. Stanford University Press.

Faul, F., Erdfelder, E., Lang, A.-G., & Buchner, A. (2007). G*Power 3: A flexible statistical power analysis program for the social, behavioral, and biomedical sciences. Behavior Research Methods, 39, 175–191. https://doi.org/10.3758/BF03193146

Field, A. (2013). Discovering statistics (4th ed.). SAGE.

Field, T. A. (2017). Clinical mental health counseling: A 40-year retrospective. Journal of Mental Health Counseling, 39(1), 1–11. https://doi.org/10.17744/mehc.39.1.01

Hertlein, K. M., & Lambert-Shute, J. (2007). Factors influencing student selection of marriage and family therapy graduate programs. Journal of Marital and Family Therapy, 33(1), 18–34.
https://doi.org/10.1111/j.1752-0606.2007.00002.x

Holcomb-McCoy, C., & Bradley, C. (2003). Recruitment and retention of ethnic minority counselor educators: An exploratory study of CACREP-accredited counseling programs. Counselor Education and Supervision, 42(3), 231–243. https://doi.org/10.1002/j.1556-6978.2003.tb01814.x

Honderich, E. M., & Lloyd-Hazlett, J. (2015). Factors influencing counseling students’ enrollment decisions: A focus on CACREP. The Professional Counselor, 5(1), 124–136. https://doi.org/10.15241/emh.5.1.124

Hunt, S. C., & Jones, K. T. (2015). Recruitment and selection of accounting faculty in a difficult market. Global Perspectives on Accounting Education, 12, 23–51.

Institute of Medicine. (2010). Provision of mental health counseling services under TRICARE. The National Academies Press. https://doi.org/10.17226/12813

Lambie, G. W., & Vaccaro, N. (2011). Doctoral counselor education students’ levels of research self-efficacy, perceptions of the research training environment, and interest in research. Counselor Education and Supervision, 50(4), 243–258. https://doi.org/10.1002/j.1556-6978.2011.tb00122.x

Lawson, G. (2016). On being a profession: A historical perspective on counselor licensure and accreditation. Journal of Counselor Leadership and Advocacy, 3(2), 71–84. https://doi.org/10.1080/2326716X.2016.1169955

Lawson, G., Trepal, H. C., Lee, R. W., & Kress, V. (2017). Advocating for educational standards in counselor licensure laws. Counselor Education and Supervision, 56(3), 162–176. https://doi.org/10.1002/ceas.12070

Linder, C., & Winston Simmons, C. (2015). Career and program choice of students of color in student affairs programs. Journal of Student Affairs Research and Practice, 52(4), 414–426.
https://doi.org/10.1080/19496591.2015.1081601

Magnuson, S., Norem, K., & Haberstroh, S. (2001). New assistant professors of counselor education: Their preparation and their induction. Counselor Education and Supervision, 40(3), 220–229.
https://doi.org/10.1002/j.1556-6978.2001.tb01254.x

Mascari, J. B., & Webber, J. (2013). CACREP accreditation: A solution to license portability and counselor identity problems. Journal of Counseling & Development, 91(1), 15–25.
https://doi.org/10.1002/j.1556-6676.2013.00066.x

Meyers, L. S., Gamst, G. C., & Guarino, A. J. (2013). Performing data analysis using IBM SPSS. Wiley.

Millar, M., Kincaid, C., & Baloglu, S. (2009). Hospitality doctoral students’ job selection criteria for choosing a career in academia. Hospitality Management, 7. https://repository.usfca.edu/hosp/7

National Board for Certified Counselors. (2018, August). NBCC delays start of new standard to obtain certification. NBCC Visions. https://nbcc.informz.net/NBCC/pages/August2018CACREPdelay?_zs=z9bqb&_zmi=?

National Center for Education Statistics. (n.d.). College navigator. Retrieved January 20, 2020, from https://nces.ed.gov/collegenavigator

Pace, R. L., Jr. (2016). Relationship of institutional characteristics to CACREP accreditation of doctoral counselor education programs [Doctoral dissertation, Walden University]. ScholarWorks. https://scholarworks.waldenu.edu/dissertations/2097

Peek, M. E., Kim, K. E., Johnson, J. K., & Vela, M. B. (2013). “URM candidates are encouraged to apply”: A national study to identify effective strategies to enhance racial and ethnic faculty diversity in academic departments of medicine. Academic Medicine, 88(3), 405–412. https://doi.org/10.1097/ACM.0b013e318280d9f9

Poock, M. C., & Love, P. G. (2001). Factors influencing the program choice of doctoral students in higher education administration. NASPA Journal, 38(2), 203–223. https://doi.org/10.2202/1949-6605.1136

Ramirez, E. (2013). Examining Latinos/as’ graduate school choice process: An intersectionality perspective. Journal of Hispanic Higher Education, 12(1), 23–36. https://doi.org/10.1177/1538192712452147

Shin, R. Q., Smith, L. C., Goodrich, K. M., & LaRosa, N. D. (2011). Attending to diversity representation among Council for Accreditation of Counseling and Related Educational Programs (CACREP) master’s programs: A pilot study. International Journal for the Advancement of Counselling, 33(2), 113–126.
https://doi.org/10.1007/s10447-011-9116-6

Stadler, H. A., Suh, S., Cobia, D. C., Middleton, R. A., & Carney, J. S. (2006). Reimagining counselor education with diversity as a core value. Counselor Education and Supervision, 45(3), 193–206.
https://doi.org/10.1002/j.1556-6978.2006.tb00142.x

U.S. Census Bureau. (2020). National population totals and components of change. https://www.census.gov/data/tables/time-series/demo/popest/2010s-national-total.html#par_textimage_2011805803

U.S. Department of Defense. (2014, July 17). TRICARE certified mental health counselors. 32 CFR Part 199. https://www.federalregister.gov/documents/2014/07/17/2014-16702/tricare-certified-mental-health-counselors

Wittmer, J. (2010). Evolution of the CACREP standards. In A. K. Mobley & J. E. Myers (Eds.), Developing and maintaining counselor education laboratories (2nd ed.). Association for Counselor Education and Supervision. https://acesonline.net/wp-content/uploads/2018/11/Developing-and-Maintaining-Counselor-Education-Laboratories-Full-Text.pdf

 

Appendix

CACREP-Accredited and In-Process Programs by State and Region (December 2018)

State | Region | Population | Doctoral Programs | Master’s Programs | In-Process Doctoral | In-Process Master’s
Connecticut | North Atlantic | 3,572,665 | 0 | 6 | 0 | 1
Delaware | North Atlantic | 967,171 | 0 | 1 | 0 | 0
District of Columbia | North Atlantic | 702,455 | 1 | 4 | 0 | 3
Maine | North Atlantic | 1,338,404 | 0 | 2 | 0 | 0
Massachusetts | North Atlantic | 6,902,149 | 0 | 5 | 0 | 1
New Hampshire | North Atlantic | 1,356,458 | 0 | 2 | 0 | 1
New Jersey | North Atlantic | 8,908,520 | 1 | 12 | 0 | 0
New York | North Atlantic | 19,542,209 | 3 | 19 | 0 | 4
Pennsylvania | North Atlantic | 12,807,060 | 3 | 21 | 1 | 0
Rhode Island | North Atlantic | 1,057,315 | 0 | 2 | 0 | 0
Vermont | North Atlantic | 626,299 | 0 | 1 | 0 | 0
Total | North Atlantic | 57,780,705 | 8 | 75 | 1 | 10
Illinois | North Central | 12,741,080 | 5 | 22 | 0 | 3
Indiana | North Central | 6,691,878 | 0 | 9 | 0 | 2
Iowa | North Central | 3,156,145 | 1 | 3 | 0 | 0
Kansas | North Central | 2,911,505 | 1 | 3 | 0 | 0
Michigan | North Central | 9,995,915 | 4 | 8 | 0 | 1
Minnesota | North Central | 5,611,179 | 3 | 6 | 0 | 1
Missouri | North Central | 6,126,452 | 1 | 7 | 0 | 3
Nebraska | North Central | 1,929,268 | 0 | 4 | 0 | 0
North Dakota | North Central | 760,077 | 1 | 2 | 0 | 0
Ohio | North Central | 11,689,442 | 6 | 24 | 0 | 2
Oklahoma | North Central | 3,943,079 | 0 | 5 | 0 | 0
South Dakota | North Central | 882,235 | 1 | 3 | 0 | 0
Wisconsin | North Central | 5,813,568 | 0 | 8 | 1 | 1
Total | North Central | 72,251,823 | 23 | 104 | 1 | 13
Colorado | Rocky Mountain | 5,695,564 | 3 | 9 | 0 | 0
Idaho | Rocky Mountain | 1,754,208 | 2 | 4 | 0 | 0
Montana | Rocky Mountain | 1,062,305 | 1 | 4 | 0 | 0
New Mexico | Rocky Mountain | 2,095,428 | 1 | 3 | 0 | 1
Utah | Rocky Mountain | 3,161,105 | 0 | 3 | 0 | 0
Wyoming | Rocky Mountain | 577,737 | 1 | 1 | 0 | 0
Total | Rocky Mountain | 14,346,347 | 8 | 24 | 0 | 1
Alabama | Southern | 4,887,871 | 2 | 11 | 0 | 1
Arkansas | Southern | 3,013,825 | 1 | 4 | 0 | 1
Florida | Southern | 21,299,325 | 5 | 14 | 0 | 3
Georgia | Southern | 10,519,475 | 2 | 15 | 2 | 3
Kentucky | Southern | 4,468,402 | 3 | 9 | 1 | 0
Louisiana | Southern | 4,659,978 | 2 | 15 | 0 | 1
Maryland | Southern | 6,042,718 | 1 | 6 | 0 | 2
Mississippi | Southern | 2,986,530 | 2 | 5 | 0 | 0
North Carolina | Southern | 10,383,620 | 5 | 18 | 0 | 1
South Carolina | Southern | 5,084,127 | 1 | 7 | 0 | 0
Tennessee | Southern | 6,770,010 | 4 | 14 | 0 | 7
Texas | Southern | 28,701,845 | 8 | 26 | 0 | 8
Virginia | Southern | 8,517,685 | 9 | 16 | 0 | 4
West Virginia | Southern | 1,805,832 | 0 | 2 | 0 | 0
Total | Southern | 119,141,243 | 45 | 162 | 3 | 31
Alaska | Western | 737,438 | 0 | 1 | 0 | 0
Arizona | Western | 7,171,646 | 0 | 4 | 0 | 0
California | Western | 39,557,045 | 0 | 11 | 0 | 6
Hawaii | Western | 1,420,491 | 0 | 1 | 0 | 0
Nevada | Western | 3,034,392 | 1 | 1 | 0 | 2
Oregon | Western | 4,190,713 | 1 | 9 | 0 | 0
Washington | Western | 7,535,591 | 0 | 8 | 0 | 0
Total | Western | 63,647,316 | 2 | 35 | 0 | 8
Grand Total | All regions | 327,167,434 | 86 | 400 | 5 | 63

*Ratios rounded to closest whole number. Source of CACREP data: https://www.cacrep.org/directory/. Source of U.S. Census data: https://www.census.gov/data/tables/time-series/demo/popest/2010s-national-total.html#par_textimage_2011805
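As a rough illustration of the regional imbalance shown in the table, the following Python sketch computes CACREP-accredited doctoral programs per 10 million residents by region. The regional totals are transcribed from the table above; the per-capita rate itself is an illustrative metric for this sketch, not a statistic reported in the article.

```python
# Regional totals transcribed from the table above:
# region -> (population, CACREP-accredited doctoral programs)
regions = {
    "North Atlantic": (57_780_705, 8),
    "North Central": (72_251_823, 23),
    "Rocky Mountain": (14_346_347, 8),
    "Southern": (119_141_243, 45),
    "Western": (63_647_316, 2),
}

# Doctoral programs per 10 million residents -- an illustrative
# rate, not a figure reported in the article itself.
rates = {name: progs / pop * 10_000_000
         for name, (pop, progs) in regions.items()}

for name, rate in sorted(rates.items(), key=lambda kv: kv[1]):
    print(f"{name:15s} {rate:.2f} doctoral programs per 10M residents")
```

Even after adjusting for population, the Western region trails every other region by a wide margin (roughly 0.31 programs per 10 million residents, versus about 5.58 in the Rocky Mountain region).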

 

Thomas A. Field, PhD, NCC, CCMHC, ACS, LPC, LMHC, is an assistant professor at the Boston University School of Medicine. William H. Snow, PhD, is a professor at Palo Alto University. J. Scott Hinkle, PhD, ACS, BCC, HS-BCP, is a core faculty member at Palo Alto University. Correspondence may be addressed to Thomas Field, 72 E Concord St., Suite B-210, Boston, MA 02118, tfield@bu.edu.

Components of a High-Quality Doctoral Program in Counselor Education and Supervision

Jennifer Preston, Heather Trepal, Ashley Morgan, Justin Jacques, Joshua D. Smith, Thomas A. Field

The doctoral degree in counselor education and supervision is increasingly sought after by students, with the Council for Accreditation of Counseling and Related Educational Programs (CACREP) reporting a 27% enrollment increase in just a 4-year span. As new programs are started and existing programs sustained, administrators and faculty may seek guidance on how to build a high-quality program. Yet no literature currently exists on how doctoral counseling faculty define a high-quality program. This study used a basic qualitative research design to examine faculty perceptions of high-quality doctoral programs (N = 15). The authors analyzed data from in-depth interviews with core faculty members at CACREP-accredited doctoral programs. Five themes emerged from the data: relationships, mission alignment, development of a counselor educator identity, inclusiveness of diversity, and Carnegie classification. The findings of this study can be important for faculty and administrators to consider when establishing and maintaining a counselor education and supervision doctoral program.

Keywords: doctoral programs, counselor education and supervision, CACREP, faculty perceptions, high-quality

 

Doctoral education in counselor education and supervision (CES) is surging, with both the number of programs and enrollment head count increasing over the past few years. According to the most recent annual report from the Council for Accreditation of Counseling and Related Educational Programs (CACREP), there are currently 85 CACREP-accredited CES doctoral programs (CACREP, 2019b) compared to 63 in 2014 (CACREP, 2017). This constitutes a 35% increase over a 4-year span. In addition, enrollment in CACREP-accredited doctoral programs has increased from 2,291 in 2014 to 2,917 in 2018, a 27% increase (CACREP, 2017, 2019a). The number of doctoral graduates in CES also increased by 35% between 2017 and 2019, from 355 to 479 (CACREP, 2017, 2019a). A registry does not exist for non–CACREP-accredited programs, and thus the exact number of doctoral programs in CES (i.e., CACREP- and non–CACREP-accredited programs) is unknown.
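The growth figures cited above follow from simple percent-change arithmetic; the short sketch below reproduces the reported rounded percentages from the raw counts given in the paragraph (the counts are from the CACREP annual reports cited there).

```python
def pct_change(old: int, new: int) -> int:
    """Percent change from old to new, rounded to the nearest whole percent."""
    return round(100 * (new - old) / old)

# Counts reported in the paragraph above (CACREP annual reports):
print(pct_change(63, 85))       # accredited doctoral programs, 2014 vs. most recent report -> 35
print(pct_change(2291, 2917))   # doctoral enrollment, 2014 to 2018 -> 27
print(pct_change(355, 479))     # doctoral graduates, 2017 to 2019 -> 35
```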

According to Hinkle et al. (2014), students' motivations to pursue a doctorate in CES include (a) to become a professor, (b) to be a respected professional with job security, (c) to become a clinical leader, and (d) to succeed for family and community amid obstacles. Student motivations appear tempered by CES departmental culture, mentoring, academics, support systems, and personal and related issues that impact the doctoral experience (Protivnak & Foss, 2009).

While students enter CES programs with one set of motivations, the programs themselves have their own goals for whom they admit, how they train, and what they perceive as a desired outcome to doctoral training. Doctoral programs in CES are considered training grounds for shaping students’ professional (Dollarhide et al., 2013; Limberg et al., 2013) and research identities (Perera-Diltz & Sauerheber, 2017). In addition, mentoring and advising relationships are viewed as important to supporting research motivation and productivity (Kuo et al., 2017).

Given students’ motivations and expectations for career preparation and advancement, it would make sense that they would want to choose a doctoral program that fits their needs. In addition to matching academic needs, it can also be assumed that as consumers of doctoral education, students would want to choose a high-quality doctoral program in CES. Bersola et al. (2014) conducted a study of factors that influenced admitted doctoral students’ (N = 540) choice of program. The students in the study were all from programs and departments located within one university. Both underrepresented minority and majority students cited program reputation, institutional reputation, faculty quality, research quality, and faculty access/availability as primary reasons for their choice of doctoral program. Participants reported these factors as more important to their choice of doctoral program than non–quality-related factors such as cost of living, housing, location, and urbanity (Bersola et al., 2014).

There are many program options for CES doctoral study, but little is known about what constitutes a high-quality program in counselor education apart from CACREP accreditation. Although the perceptions of CES doctoral graduates remain unknown, researchers have utilized data from doctoral graduates across disciplines regarding their satisfaction with their programs (Barnes & Randall, 2012; Morrison et al., 2011). Graduates identified aspects such as academic rigor, funding opportunities, mentoring in meeting program requirements, research skill training, and developing a sense of community as contributing to their satisfaction and perceptions of the doctoral programs (Barnes & Randall, 2012; Morrison et al., 2011).

Despite considerable knowledge of doctoral graduates’ perceptions, little is known about faculty perspectives on these issues (Kim et al., 2015). There is evidence that faculty perceptions of doctoral program quality can differ from alumni perceptions. Morrison et al. (2011) examined program faculty and alumni perceptions of quality doctoral education in the social sciences. Both faculty and alumni considered training in research skills and diversity characteristics of the program as important to quality. However, alumni also tended to place greater emphasis on the importance of faculty support in meeting program requirements and fostering belonging, whereas program faculty placed greater emphasis on the scholarly reputation of faculty when defining doctoral program quality.

Purpose of the Present Study
     Very few studies have explored program faculty perceptions of high-quality doctoral education, and no studies exist in CES specifically. As educators and mentors, faculty who teach in CES programs should be both interested and invested in enhancing educational environments that meet students’ career aspirations as well as advancing the profession. Although industry standards for quality exist (e.g., CACREP standards), there is a need to better understand which components CES faculty believe constitute a high-quality doctoral program in CES. The purpose of this study was to address this gap in knowledge.

Methodology

This study was conducted as part of a larger comprehensive qualitative study of CES doctoral programs organized by the last author that followed the basic qualitative research design described by Merriam and Tisdell (2016). In the basic qualitative research paradigm, the research team collects, codes, and categorizes qualitative data using the constant comparative method from grounded theory methodology (Corbin & Strauss, 2015; Merriam & Tisdell, 2016). The researchers first use open coding, followed by categorization using axial coding to identify themes in the data (Merriam & Tisdell, 2016). Data collection continues until data reach saturation and redundancy. Unlike other qualitative traditions, this qualitative design is not employed to develop theory (i.e., grounded theory), to capture the essence of a lived experience (i.e., phenomenology), or to describe cultural and environmental observations (i.e., ethnography; Merriam & Tisdell, 2016). Instead, researchers using basic qualitative designs seek to collect and analyze qualitative data for the purpose of answering research questions outside other specialized qualitative focus areas. A qualitative design was selected because the authors shared an underlying philosophical belief in the constructivist position that participants’ reality was socially co-constructed and that all responses should be given importance regardless of frequency (Lincoln & Guba, 2013).

The basic qualitative design was selected because it best fit the purpose of this larger qualitative project. The purpose of the larger qualitative study was to identify current perceptions of doctoral-level counselor educators regarding four major issues pertinent to doctoral counselor education: (a) components of high-quality programs, (b) strategies to recruit and retain underrepresented students, (c) strategies for working with administrators, and (d) strategies for successful dissertation advising. Our study collected and analyzed in-depth interviews with doctoral-level counselor educators to answer a series of research questions that addressed the issues above pertaining to doctoral-level counselor education.

Interview questions were designed to directly answer each research question. The research questions explored in the larger project were as follows: 1) What are the components of high-quality doctoral programs in CES, and what are the most and least important components? 2) Which strategies are doctoral programs using to recruit, support, and retain underrepresented doctoral students from diverse backgrounds, and how successful are those? 3) Which strategies are helpful in gaining initial and ongoing support from administrators when seeking to start a new doctoral program in CES, and how successful are those? and 4) Which strategies help students navigate the dissertation process, and how successful are those?

     This manuscript represents the first of four articles from the larger qualitative project that each addressed one of the research questions listed above. This study therefore examined the first research question and sought to identify the components of high-quality doctoral programs in CES. The interview questions directly addressed this research question and were as follows: 1) How might you define a high-quality doctoral program in CES? and 2) What do you believe to be the most and least important components?

Participants
     Purposeful sampling was used for an initial identification of eligible volunteers (Merriam & Tisdell, 2016) from the limited number of CACREP-accredited doctoral CES programs in the United States. At the time of writing, 85 CACREP-accredited doctoral CES programs existed (CACREP, 2019b). Information-rich cases were sought to give visibility to the perspectives of CES faculty. The sampling method was thus designed to identify and recruit participants who had experience working in doctoral-level counselor education. Inclusion criteria were as follows: Participants had to 1) be current full-time core faculty members in CES and 2) be currently working in a doctoral-level counselor education program with CACREP accreditation. The last author created a database of CES doctoral faculty from the 85 CACREP-accredited programs and solicited interest in the study via email. Faculty initially provided demographic information during a pre-registration phase. The last author reviewed this information to select participants from the pool of eligible volunteers for entry into the study utilizing maximum variation sampling. This sampling technique was employed to gather the perspectives of counselor educators from diverse backgrounds with regard to demographic characteristics and program characteristics. Maximum variation sampling also assisted with avoiding premature saturation (Merriam & Tisdell, 2016). The research team believed that counselor educator perspectives may differ by background.
Thus, the following criteria were used for selecting participants from among the eligible volunteers: (a) racial and ethnic self-identification, (b) gender self-identification, (c) length of time working in doctoral-level counselor education programs, (d) Carnegie classification of the university where the participant was currently working (The Carnegie Classification of Institutions of Higher Education, 2019), (e) region of the counselor education program where the participant was currently working, and (f) delivery mode of the counselor education program where the participant was currently working (e.g., in-person, online).

These six characteristics were selected because of indications in the extant literature of the influence of the above factors on CES faculty experiences and/or trends in doctoral program delivery, which may impact perceptions of what constitutes a high-quality doctoral program. Prior studies have identified the influence of racial and ethnic identity (Cartwright et al., 2018), gender identity (Hill et al., 2005), years of experience in doctoral counselor education (Lambie et al., 2014; Magnuson et al., 2009), Carnegie classification (Lambie et al., 2014), and delivery mode (Smith et al., 2015) on faculty perceptions and experiences.

Once participants responded regarding their interest in the study, the last author purposively selected participants one at a time to ensure adequate variation by these characteristics. Participant selection was predicated on meeting variability requirements between participants regarding the six criteria identified above. For example, the first and second participants were selected because of their differences in gender, years of experience, and Carnegie classification. Subsequent participant selection decisions were made on the basis of variant ethnicity and region. Interviews were conducted until data seemed to reach saturation and redundancy, which occurred after 15 interviews. Faculty members who provided demographic information during pre-registration but were not selected were informed that they had not been invited to participate in the interview portion of the study and were thanked for their participation during pre-registration.

A total of 15 participants were interviewed for the study. All 15 participants were from separate and unique doctoral-level CES programs, with no program represented by more than one participant. With regard to self-identified gender, the sample consisted of seven female participants (46.7%) and eight male participants (53.3%). No participants identified as non-binary or transgender. The majority of participants identified as heterosexual (n = 14, 93.3%), with one participant identifying as bisexual (6.7%). Eleven participants (73.3%) self-identified as Caucasian, with multiracial/multiethnic (n = 1, 6.7%), African American (n = 1, 6.7%), Asian (n = 1, 6.7%), and Latinx (n = 1, 6.7%) ethnic backgrounds also represented.

The sample was experienced, working as full-time faculty members for an average of 19.7 years (SD = 9.0 years) and a median of 17 years, ranging from 4 to 34 years. Participants spent most of those years working in doctoral-level CES programs (M = 17.3 years, SD = 9.2 years, Mdn = 16 years), ranging from 3 to 33 years. More than half of participants (n = 9, 60%) spent their entire careers working in doctoral-level CES programs. Eight of the participants (53.3%) currently worked at programs in the Southern region, with two participants (13.3%) each from the North Atlantic, North Central, and Western regions. One participant (6.7%) currently worked in the Rocky Mountain region. Five participants (33.3%) had worked in multiple doctoral programs in two or more regions. Twelve participants (80%) currently worked in face-to-face or brick-and-mortar programs, and three participants (20%) currently worked in online or hybrid programs. Regarding Carnegie classification, nine participants (60%) currently worked at Doctoral Universities – Very High Research Activity (i.e., R1) institutions, two participants (13.3%) currently worked at Doctoral Universities – High Research Activity (i.e., R2) institutions, and four participants (26.7%) currently worked at universities with the Master’s Colleges and Universities: Larger Programs designation (The Carnegie Classification of Institutions of Higher Education, 2019).

Positioning
     The last author conducted all interviews with the selected participants. The author had etic status, in that they had not worked in a doctoral-level CES program previously. Because the author was a member of the counselor education community, etic status around the topic of doctoral-level CES was important to bracketing biases during the interview process. The interviewer followed the interview protocol included in the Appendix for all interviews to ensure that data were gathered for each research question to the highest extent possible.

Procedure
     After receiving approval from their IRB, the last author created a database of doctoral-level counselor educator contacts who worked at the CES programs accredited by CACREP. The last author used the CACREP (2019b) website directory for recruitment purposes. Recruitment emails were sent to one faculty member at each of the 85 accredited programs. A total of 34 faculty responded with an interest in being interviewed (40% response rate). Of those 34 faculty, 15 were selected for interviews on the basis of maximal variation.

Interview Protocol
     At the beginning of each interview, participants were asked a series of demographic questions that addressed the characteristics mentioned above (i.e., self-identified race and ethnicity, gender, sexual/affective orientation, years as a faculty member, years working in doctoral-level counselor education programs, number of doctoral programs the participant had worked in, and regions of the programs in which the counselor educator had worked). Participants were asked to self-identify their demographic information at the beginning of the interview to clarify demographic information that had been previously collected during pre-registration, and to ensure that participants were able to adequately self-identify.

Following the demographic section, the interview protocol featured a series of eight in-depth interview questions that addressed the research questions of the larger qualitative study. Interview questions were developed in accordance with Patton’s (2015) recommendations. Per Patton (2015), the interview questions were open-ended, as neutral as possible, avoided “why” questions, and were asked one at a time. The interview protocol was piloted with a faculty member in a doctoral-level CES program prior to the study commencing. Several double-barreled questions were split into two separate questions to ensure that only one question was asked at a time. The interview protocol followed conventions of semi-structured interviewing, with sparse follow-up questions permitted to the main interview questions to ensure understanding of participant responses (Patton, 2015).

Prior to each interview, participants reviewed and signed the informed consent agreement approved by the last author’s IRB. Participants were sent the interview questions ahead of time. Each interview lasted for approximately 60 minutes. All but one interview (i.e., 14 interviews) were recorded using the Zoom online platform built-in recording feature. One interview was recorded via a Sony audio digital recorder instead of the Zoom platform, as the interview occurred in person during a professional conference. All demographic information and recordings were assigned an alphabetical identifier (e.g., A, B, C). The last author was the sole individual who knew the identity of participants attributed to alphabetical identifiers. Participant identity was thus blinded to subsequent transcribers and coders.

Transcription
     All interviews were transcribed verbatim by graduate students at the last author’s university, who had no familiarity with participants. Transcribers received transcription training prior to the study and received further training and direction by the last author prior to and during the transcription process. Once each transcript had been completed in full, the last author reviewed transcripts to ensure accuracy and sent the transcripts to the interviewees to conduct a member check. After member checks had been conducted, sections of transcripts were cut and pasted into separate documents for each of four research teams to code and analyze. The research teams were organized by research question (i.e., components of high quality; recruitment, support, and retention of underrepresented students; working with administrators; successful dissertation advising). Transcribed interviews for each research team were uploaded to separate secure folders in a secure encrypted online data management software system.

Data Analysis
     The last author met with members of all four research teams collectively to ensure consistency in the coding approach. The last author developed several guidance documents for the research teams to use and created instructions for coding the data, which included guidance such as each research team meeting to bracket biases and identify any a priori codes prior to initial coding of the data, following Merriam and Tisdell’s (2016) guidelines. Research teams were instructed to identify emergent in vivo codes using verbatim line-by-line open coding when possible to avoid interpreting data too early during the coding process (Merriam & Tisdell, 2016). The focus of coding was to identify themes within and between participants. The four research teams were instructed to meet weekly over a period of several months to code and analyze data specific to their research question.

Research teams coded each of the first three transcripts together as a team during weekly live coding sessions using the Zoom online platform, prior to individual team members coding the remaining transcripts separately. Codes were noted on the transcripts themselves, and then the lead team member compiled the codes into the code book. From there, the categories were developed and reviewed by all team members. Discrepancies in coding were resolved using coding consensus, with the research team documenting how they resolved any discrepancies. Weekly meetings were required even when individual team members were coding separately, to facilitate sharing coding experiences, clarifying questions about codes, establishing consensus on any complicated sections of transcript, and following the coding approach with consistency across coders. The last author created a coding database template that each research team was required to use, to ensure consistency in how coding was documented and categorized. These approaches were designed to improve consistency in coding within and between the four research teams. Each of the four research teams only coded and analyzed data pertinent to their assigned research question.

A coding team chair was identified for each of the four research teams to ensure that the coding and analysis approach was followed consistently and to organize the work of the team. Each research team organized codes into categories and eventually collapsed codes into themes using axial coding after all 15 transcripts had been coded. Themes also were analyzed by demographic and program characteristics of interviewees to assess the potential influence of background characteristics on responses. Each research team recorded memos during collective team meetings and during individual coding of transcripts. The last author also created memos during collective meetings with all four research teams. The last author created memos immediately following interviews, though they refrained from sharing the memos with the research teams to avoid biasing the coding and analysis process. Several research teams used software platforms to analyze the data, and were permitted to select their own software platforms for data analysis.

Researcher Positioning for the Current Study
     For this study, the first five authors comprised the coding team that examined the research question pertinent to the components of high-quality programs. The sixth and last author conducted the interviews and did not code data for the reasons cited above. Among the five coding team members, both etic and emic perspectives were represented. Two of the authors had an emic perspective, as they had previously worked at a doctoral CES program during their faculty career. Three of the authors held an etic perspective as doctoral students who had not yet worked as full-time faculty members. Coding team members were from different counselor education programs to reduce bias.

With regard to other demographic characteristics, four members of the coding team identified as Caucasian, and one member identified as African American. Three team members identified as female, and two identified as male. The team members were from a wide range of programs. One doctoral student was from a very high research-intensive university; one faculty member and two doctoral students were from a research-intensive university; and one faculty member was from a private, nonprofit online university.

Trustworthiness
     Trustworthiness was enhanced through procedures identified in the literature (e.g., Merriam & Tisdell, 2016). Credibility was addressed through considering the positioning of the interviewer and research team members. Emic and etic perspectives were sought for each research team to reduce the potential for bias. The interviewer and research team each bracketed their biases prior to their involvement in the study and continued the process of bracketing throughout the study to reduce bias. One bias the researchers bracketed, for example, was their involvement and experiences as faculty and students in a CES program. All interviewees worked at separate CES programs to avoid overrepresentation of data. Research team members were also from different CES programs to reduce bias in coding and analysis. Emergent, in vivo, verbatim line-by-line open coding was used by each research team to avoid interpreting data too early during the coding process and thus to reduce interpretation bias. The interviewer did not participate in coding the data to minimize bias through being too close to the data. The last author also clearly identified and trained the research teams, with the goal of enhancing consistency. Member checks were used to enhance credibility, and the last author also kept an audit trail of the process. Purposive sampling and thick description were used to ensure adequate representation of perspectives and thus establish adequate transferability and dependability (Merriam & Tisdell, 2016).

Results

Through data analysis, five themes emerged that capture the components participants described as critical to ensuring a high-quality doctoral program: relationships, mission alignment, development of a counselor educator identity, inclusiveness of diversity, and Carnegie classification. Each theme is described below, with support provided for each theme via participants’ quotes.

Relationships
     The first major theme we identified from the data was the importance of relationships. This theme appears to be a critical component of a high-quality program. Participants reported that supportive faculty–student and student–student relationships are important to program quality.

Faculty–Student Relationships
     Participants emphasized the importance of close mentoring relationships between doctoral faculty and doctoral students. Several participants cited the quality of mentoring between faculty and students as the “most important factor” in a high-quality doctoral program. We identified several subthemes that appeared to influence the quality of faculty–student relationships. Smaller cohort sizes, close mentoring, faculty workload, and the match between the student and their dissertation chair all seemed to be important factors in faculty–student relationships. To support the faculty–student relationship, attention to cohort sizes and the overall size of the program is considered critical. One participant stated, “If you view your doctoral program as a cash cow, and you’re bringing in a lot of students, I think you’ve lost something.” They further clarified that advising and chairing dissertations for more than two doctoral students per year would lessen the quality of the mentoring experience. Some participants reported that consideration should be given to admitting students who value the close mentoring experience.

Faculty time and resources seem foundational to the establishment of high-quality faculty–student relationships. Faculty reported that they need time to focus on mentoring students. One participant stated that “the amount of time spent between faculty member and doctoral student” strongly influences the quality of the mentoring relationship. Consideration for faculty teaching loads and service expectations is therefore important within the context of having adequate time to devote to mentoring.

Participants also noted the importance of fit between the faculty mentor and their student mentee. High-quality mentoring relationships are predicated on the match between student goals, research interests, and experience levels with their assigned dissertation chair and/or advisor’s own goals, interests, and experiences. One participant reported that “there’s a lot to mentorship,” elaborating that faculty members must mentor students in “how to get involved in a profession; how to develop their voice as a counselor, as a teacher, as a clinical supervisor, as a researcher; and how to manage themselves professionally.”

Student–Student Relationships
     In addition to cohort size, the cohort model was identified as important to facilitating supportive student-to-student relationships during the program. Participants reported that the cohort model facilitated deep, lasting, and “familial” relationships. Strong relationships with other doctoral students in the cohort were crucial during stressful periods. As one participant noted, “In addition to school, life is out there and stuff happens and people go through difficult times, with divorce and deaths and job losses and things like that. And having that support system built in is incredibly important.”

Mission Alignment
     The next theme encompassed the importance of doctoral programs developing and following a mission statement with clearly defined doctoral student outcomes. As one participant stated, “A high-quality doctoral program in counselor ed and supervision has a clarity of purpose and focus. The program knows what its mission is, in terms of the product they want to produce with the doctoral students.” Another participant reported that “a high-quality doctoral program has a really clear mission, so the program knows who they’re trying to prepare and what they do well. And then the program works the mission.” This participant elaborated that although the mission of a doctoral program could vary, high-quality programs ensure execution of the mission regardless of mission type: “So if they’re preparing researchers, they work that mission. If they’re really focusing on preparing people just for teaching institutions, they work the mission.” This theme had several subthemes, including faculty buy-in, the importance of aligning the program’s mission with the university’s mission, and institutional support.

Faculty Buy-In
     Several participants noted that faculty buy-in is essential to executing the mission of the program. This concept was expressed as more than general faculty alignment with the program mission. Faculty buy-in was defined as input, ownership, and commitment to the mission of the program. As one participant reported, high-quality programs have developed a culture whereby “everybody feels like they have some ownership in the doc program, and that everybody has a voice.” A team approach to carrying out the program’s mission and purpose requires doctoral faculty members to “realize that ‘winning’ as a team is providing the best training experience for students” rather than “maximizing their vita for their own promotability or transferability to another institution.” Thus, high-quality programs require faculty members to align their personal goals with fulfilling the program’s mission.

Without this input, ownership, and commitment, the program is likely to “struggle” because of problematic faculty dynamics such as faculty working in isolation and program leaders (e.g., the program director) “doing all of the work.” Program faculty being aligned with the mission seemed to result in a faculty team that worked together well, could grow together, and supported students in a united way. In the participants’ experience, when faculty had strong relationships and worked together, the quality of student preparation and the overall program quality increased.

Some participants noted that faculty buy-in to a program mission that emphasizes the role of the doctoral program in leading the profession is important. Faculty involvement in professional leadership is thus a key component of the program’s leadership mission. One participant remarked, “[We] held a sense of pride in challenging ourselves to be leaders in the counseling profession,” and noted that “if we’re going to have a strong program, we need to be engaged and involved as faculty.”

Alignment With the University’s Mission
     Participants reported that the counseling department’s or program’s mission statement should be in alignment with that of the broader university. Participants described how critical it is for the department to feel a connection to the mission of the university and for students to share that connection. Mission alignment affects both faculty and student feelings of connectedness to the program and the broader university, along with university support and the resulting resources available to students.

Institutional Support
     Participants reported that the program’s alignment with the university’s mission is crucial to securing institutional support for the program. Funding faculty lines, reduced faculty course loads, student graduate assistantships, conference attendance, specialized accreditation, and other aspects of the program are more likely to occur when the university feels the program reflects its own mission and purpose. One participant stated that “you need to garner respect from your program administration.” They elaborated that in order to “resource” the program adequately, the program needs to justify its existence through alignment with the university’s mission and purpose so that the university sees value in the program even when the program is unlikely to be a “money maker.” This financial support is considered crucial to operating a high-quality program. Administration buy-in helps to ensure that faculty members have the necessary resources, which in turn ensures a quality experience. As another participant stated, “I think that capacity and resources are key.”

Development of a Counselor Educator Identity
     The next theme to emerge was the importance of doctoral students developing a strong identity as counselor educators. As one participant said, the mission of a high-quality program is to prepare students “to step into a role as an educator.” Some participants therefore equated high-quality programs with those that intentionally prepared counselor educators. Participants described a variety of curricular and extracurricular experiences within the program that assisted doctoral students to develop a strong professional identity as counselor educators.

Curricular Experiences
     Several participants emphasized the importance of having formal curricular experiences in all three areas of teaching, research, and service as part of the doctoral degree program. As one participant stated, “I think you define your program by how well prepared your students are as evidenced by their success in these areas . . . of faculty activities, which [are] teaching, scholarship, and service.” A sole focus on one of these areas was considered inadequate by several participants. For example, even participants working at research-intensive institutions suggested that an overemphasis on research at the expense of teaching and service (i.e., leadership and advocacy) may not assist students to develop broad knowledge and skills as counselor educators. In addition to training students broadly, some participants thought that curricular experiences needed to be rigorous. As one participant stated, “I assume that any high-quality doctoral program is rigorous—that you’re not letting students just do personal growth.”

Some participants also associated the program’s accreditation status (i.e., CACREP accreditation) with assisting students to develop their professional identity. One participant listed CACREP’s five core doctoral standards (i.e., counseling, leadership and advocacy, research, supervision, and teaching) as each being an essential part of the formal doctoral curriculum in counselor education: “I really believe in those five doctoral standards. I believe that those are the areas in which I expect to see scholar leaders at very high levels of competence.”

Extracurricular Experiences
     Participants reported providing a range of extracurricular experiences to engage students in professional identity development. Participants reported assisting students in attending conferences, sharing in publications, co-teaching classes, and providing opportunities for service. One participant stated that “doctoral study also involves writing with faculty. It involves presenting and publishing your own work. It involves being involved in program governance.” Graduate assistantships are also important when they help students to “gain practical experience and meaningful experience.”

These experiences were often part of the “informal curriculum” of the program and were conceptualized by participants as exceeding minimum standards and requirements. Within this theme, it was also recognized that CACREP accreditation standards should be considered the minimum standards and that students need to have experiences beyond the minimum requirements. One participant said that high-quality programs provide experiences beyond “the cookie-cutter bare minimum that CACREP requires” and gave students training that created “pathways towards something that makes you unique in this field, so that you can contribute above and beyond when you get in the classroom.” Another participant said that “it’s going beyond just the course work, it’s going beyond the CACREP standards, that makes a difference.” Participants reported that these extracurricular experiences are components of high-quality programs because they assist students with developing a counselor educator identity.

Graduate Outcomes
     Some participants also placed emphasis on the importance of graduate outcomes in determining a high-quality program. Consistent with the earlier subtheme of curricular experiences, participants felt that high-quality programs ensured that students were skilled in the three areas of research, service, and teaching: “I think if you take a look at your graduates and if, overall, they show strong evidence of success in all three of those areas, I think you have a high-quality doctoral program.” Participants believed that students would lack a “rounded doctoral experience” without these experiences and would not be adequately prepared for future employment as a core faculty member.

Some participants believed that high-quality programs had graduates who were securing faculty positions after graduation. One participant explained that a high-quality doctoral program has positive outcomes related to faculty employment and tenure: “Your students excel, by evidence of being employed in high-quality programs, by getting tenure, and by evidence of quality teaching.”

Inclusiveness of Diversity
     The next theme encompassed the importance of diversity in doctoral counselor education. Participants reported that high-quality programs create a diverse learning community, both in the cultural diversity of faculty and students and in the diversity of their experiences. Such programs have a broad range of faculty teaching courses and allow for a spectrum of viewpoints and perspectives. Participants proposed that students’ engagement with diverse faculty and students is critical to ensuring high quality.

Faculty Diversity
     Several participants reported that high-quality programs have a diverse faculty. This was perceived as central to the student experience. Within this theme, diversity was inclusive of cultural identity, as well as diversity of experiences. Participants indicated that doctoral students need to learn from faculty from diverse cultural backgrounds and diverse professional experiences. According to one participant, “I do think high-quality counselor education programs in particular should not only possess the demographic qualities, but the ideologic qualities of diversity and even professional pursuit of diversity.” This exposure to diversity in faculty backgrounds and experiences is vital to the growth of students, as it exposes them to different perspectives. One participant proposed that high-quality programs intentionally attend to diversity within the faculty and attempt to recruit lecturers and guest speakers from diverse backgrounds and perspectives to address any gaps in faculty diversity: “If you don’t have diversity in faculty, then you make sure to bring in diversity so that it’s not just a bunch of White faculty preparing students in Eurocentric viewpoints.”

Student Diversity
     Participants also indicated that diversity in the student body is critically important to high quality. Program faculty seemed especially responsible for successfully recruiting students from diverse backgrounds and experiences. As one participant indicated, “They should bring diversity of thought, and diversity of experience, and diversity of region. People who bring something to the table beyond your master’s program are critical.” Faculty need to ensure, through admissions, that there is ample representation of diverse backgrounds and experiences within a cohort group. Faculty therefore also need to avoid screening out qualified applicants from diverse backgrounds during admissions.

Carnegie Classification
     The final theme represented participant viewpoints regarding the role of Carnegie classification (i.e., The Carnegie Classification of Institutions of Higher Education) in doctoral program delivery. Participants held a range of views related to Carnegie classification, often stemming from their own institutional work. Participants believed that high-quality programs reflected the classification of their institution, as aligning with the institutional mission was associated with institutional support (similar to the mission alignment theme). There were two dimensions within this category: institutional type and Carnegie classification, and focus areas impacted by Carnegie classification.

Institution Type and Carnegie Classification
     Participants acknowledged that a variety of doctoral program types exist in CES. As one participant stated, “When you talk about a doctoral program in counseling, you can have a doctoral program in a heavy research university with a Research 1 Carnegie classification. You can also have a more practice-oriented PhD.” Participants perceived that doctoral program types often reflect the type of institution where the doctoral program resides. Doctoral programs that emphasize research primarily exist at research universities, whereas doctoral programs that emphasize teaching primarily exist within teaching institutions.

Carnegie classification seemed important in determining the type of doctoral program that was offered at the institution. Participants at high and very high research-intensive universities (i.e., R2 and R1 Carnegie classifications) typically reported that their institution offered research-oriented doctoral programs, whereas participants working at doctoral/professional universities and master’s-level universities reported that their institution typically offered teaching-oriented doctoral programs. Carnegie classification thus was a strong influence on the type of CES doctoral program offered at the institution. As one participant said, “I think the Carnegie classification is actually pretty critical. Because the Carnegie classification, alongside state politics, determine where the ship of the institution is heading. And the counseling program needs to mirror the ship.”

Participants reported that the university’s expectations for faculty promotion and tenure were influenced by institutional type and Carnegie classification. These expectations shaped faculty activities. One participant explained that “at a Research 1 university, there’s a huge expectation for securing grants and publishing and refereed journal articles. At a lower level there’s less pressure to do that. And then at a teaching university, there’s hardly any pressure.” University expectations for tenure and promotion thus shaped faculty activities, which in turn affected the program faculty’s approach to training doctoral students. For example, faculty members who were more involved in research seemed more likely to value research training in the doctoral program in which they worked: “So what we are good at is preparing students to be researchers. There’s a sense of trying to focus hard on helping students develop research competencies, because that is what the program faculty is focused on.”

This mirroring between the institution’s classification and the doctoral program type is important to securing institutional financial support in the form of faculty lines, student assistantships, and so forth. Without this mirroring, the program is at risk of lacking institutional support, which would have an impact on its quality. Thus, the quality of the program is predicated on the program’s alignment with the institutional mission (as mentioned in the earlier theme of mission alignment), and the institutional mission is itself associated with the institution’s Carnegie classification.

Focus Areas Impacted by Carnegie Classification
     As mentioned above, the degree to which doctoral programs focus on research during the program seems to vary by university classification. Participants from research-intensive universities (i.e., R1 or R2 designation) valued research training above other elements of the curriculum. In contrast, participants from teaching institutions (i.e., Master’s Colleges and Universities: Larger Programs designation) valued training in teaching and supervision and did not believe that research training should dwarf other aspects of training. Some participants proposed that research and publication should have a reduced emphasis in order for teaching and leadership to have a central focus in program delivery. Even though the emphasis on research varied by institution type, participants seemed to value the production of quality research regardless of institutional classification. Several participants reported that a high-quality doctoral program goes “above and beyond” CACREP minimum requirements in a manner that “expands counseling knowledge” and “allows for rigorous, quality research and really contributes uniquely to the profession.” Several participants at different types of institutions spoke to the importance of doctoral students publishing during their time in the program and early in their careers.

Leadership training was also cited as an important component of high-quality programs across participants regardless of their institution and thus seemed to be a common theme for both research- and teaching-oriented institutions. Participants who valued leadership training during doctoral study worked in both research-intensive and teaching-focused institutions. As one participant from an R1 institution stated, “Our graduates need to be able to build programs, to run them successfully, to teach and train students in a way that they also produce the best clinicians that can go into the field.” This participant added that high-quality programs therefore train students “beyond the publish-or-perish paradigm.”

Discussion

This study was part of a larger qualitative project that explored the perceptions of CACREP-accredited program faculty (N = 15) regarding topics pertinent to doctoral education. In this study, a research team composed of the first five authors analyzed faculty descriptions of perceived components of a high-quality doctoral program. The research team identified five categories that emerged from the data: relationships, mission alignment, development of a counselor educator identity, inclusiveness of diversity, and Carnegie classification. With regard to participant characteristics, differences in responses were related to the Carnegie classification of the participant’s current institution of employment. Contrary to previous research, no differences in participant perceptions were found by gender identity, racial/ethnic identity, length of time working at a doctoral program, region, or delivery mode.

Consistency and Divergence in Themes by Institutional Type and Classification
     Across these themes, consistencies and divergences were found regarding how participants perceived high quality. Divergences appeared to be influenced by institutional type and Carnegie classification.

Consistency in Themes by Institutional Type and Classification
     Regardless of institutional type and classification, participants broadly supported the importance of faculty–student mentoring relationships, student–student supportive relationships, having a clear mission statement that includes faculty buy-in and commitment, program and institutional mission alignment, securing university financial support for faculty lines and student assistantships among other costs, establishing a learning community with faculty and students who possess diversity in cultural background and ideological thought, helping students to develop a counselor educator identity, and producing high-quality research.

     These findings are consistent with the extant literature. Studies of doctoral student experiences, both in CES and across higher education, have reported that faculty–student mentoring, student–student support systems, departmental culture, and curricula impact the quality of the student experience (Protivnak & Foss, 2009). Kuo et al. (2017) found that mentoring and advising relationships were pivotal for research motivation and producing quality research during doctoral study. Similarly, Perera-Diltz and Sauerheber (2017) suggested that developing research competencies was an important component of doctoral study in counselor education. Professional identity development is another important component of doctoral training (Dollarhide et al., 2013; Limberg et al., 2013). The inclusiveness of varying aspects of diversity within the students, faculty, and curriculum is an important finding and one that is echoed within the counseling profession’s code of ethics and professional standards (e.g., American Counseling Association, 2014; CACREP, 2015).

There is scant literature in CES that focuses specifically on the clarity of the mission, mission alignment with the university, and faculty buy-in to the mission. Adkison-Bradley (2013) broached the idea of faculty buy-in through the concept of visionary thinking, proposing that faculty members possessing this type of thinking are more likely to advocate or “buy in” to the program’s mission and work to sustain a resource-rich and quality program.

Divergence in Themes by Institutional Type and Classification
     The main divergence involved the importance of research in relation to training in teaching. Participants from research-intensive programs placed more emphasis on research training at the expense of other focus areas. When considering the importance of mission alignment with the institution’s classification and mission, it seems possible that high quality can be defined somewhat differently, based on institution type. For example, a research-intensive university should have a greater emphasis on research training, as it needs to reflect the overall mission of the university (i.e., research focused). If a doctoral program at a research-intensive university does not have a strong research emphasis, it may not be of high quality because of the potential impact on university financial support. In contrast, a teaching university (e.g., Master’s Colleges and Universities: Larger Programs designation) can focus more on teaching than research training and still be of high quality because the institution does not have a research emphasis, and therefore the program’s mission of emphasizing teacher training is in alignment with the university’s mission. It therefore seems important that faculty members consider the institutional mission and the degree of institutional emphasis on research when seeking to start or sustain a doctoral program in counselor education.

Implications for Administrators and Program Faculty
     The resulting themes from this study move us closer to identifying the components that contribute to high-quality doctoral programs in CES. It appears that when programs can (a) facilitate supportive faculty–student and student–student relationships, (b) create a clear mission that faculty are committed to and that aligns with and supports the broader institution, (c) establish a diverse learning community, (d) assist students to develop a professional identity as counselor educators, (e) ensure the production of quality research, and (f) provide leadership training during doctoral study, they will be of high quality.

Results from this study highlight several key components of high-quality doctoral programs. Our findings mirror some of the essential elements of the CACREP standards. Thus, supporting and sustaining these quality elements through regular re-accreditation cycles is paramount. However, these findings could also support other areas of focus in program evaluation. For example, administrators and faculty members should be intentional when designing a mission statement that aligns with the broader institutional mission and has a clear plan for recruiting and retaining a diverse learning community, developing professional identity, and providing leadership opportunities. Recent research has found that program evaluation training for doctoral students is lacking in counselor education programs (Sink & Lemich, 2018), suggesting a need for increased attention in this area.

Implications for Prospective Doctoral Students
     Students seeking programs are advised to appraise whether programs provide supportive mentorship and formal and informal learning opportunities, have a curricular focus that fits their goals (especially with regard to research preparation), and prioritize both faculty and student diversity. Burkholder (2012) suggested that student persistence and retention was bolstered by faculty communicating a genuine personal interest in students. Students who perceive a humanistic atmosphere from counselor education faculty are more likely to persist in counseling programs (Burkholder, 2012). Students should therefore consider their own academic and personal interests and needs and whether the program meets these. Hoskins and Goldberg (2005) also reported previously that the match between student interests and program offerings was an important predictor of doctoral student persistence.

Consideration for institution type and classification also appears important to prospective doctoral student decision making. For example, a student who wishes to develop a research identity may be best suited for a doctoral program at a research-intensive university that prioritizes research, whereas a doctoral program at a teaching institution may be a better fit for a student who has less proclivity toward research and who is seeking to develop specialized teaching competencies. Hinkle et al. (2014) previously reported that students typically sought doctoral study to become a professor or clinical leader, which seems consistent with how participants in this study identified focus areas of high-quality doctoral programs.

Lastly, faculty members should be sensitive to the needs of doctoral students as they engage in multiple roles and relationships such as co-teaching, supervising master’s students, and the dissertation process (Baltrinic et al., 2016; Dickens et al., 2016; Dollarhide et al., 2013). This is especially important for students from diverse backgrounds (e.g., minority race/ethnicity and sexual/affective orientation), who are often engaged in their communities and have more roles to balance (Cartwright et al., 2018).

Limitations and Implications for Future Studies

There were several limitations to this study despite the research team’s intention to perform a rigorous inquiry. The researchers’ bias and reactivity, which are common threats to validity in qualitative research (Bickman & Rog, 2008), were potential influences at several stages of the study. Therefore, the research team, which consisted of two counselor educators and three doctoral students with doctoral program experience, attempted to establish trustworthiness and minimize threats to validity by bracketing biases, taking methodological notes, and using consensus coding.

Limitations may have also impacted the transferability of study findings. As with most qualitative studies, the sample was small (N = 15) and could even be considered small for the chosen method of inquiry according to some sources (Creswell & Poth, 2017; Morse, 1994). Therefore, the findings may not be fully transferable (i.e., generalizable) to other CES doctoral faculty. When using maximal variation sampling, a research team intentionally seeks to identify extreme differences in participant characteristics to avoid early redundancy (Suri, 2011). This can result in over- or underrepresentation of overall sample demographic characteristics compared to the population.

Intriguingly, the sample in this study was nonetheless adequately representative of faculty and program characteristics. For example, 73.3% of participants self-identified as White, which closely mirrored CACREP (2019a) data regarding faculty racial/ethnic composition across CACREP-accredited programs (73.6% White faculty). Regarding Carnegie classification, 73.3% of participants worked at research-intensive (i.e., R1 or R2) institutions, which was consistent with the institutional classification of CACREP-accredited doctoral programs: as of 2019, 71.8% of such programs were at R1 and R2 institutions. Another potential area of overrepresentation was participant experience as a faculty member. Participant experience ranged from 4 to 34 years, with an average of 19.7 years (SD = 9.0), which seems relatively high. Unfortunately, the exact number of years of experience of core faculty in CACREP-accredited programs is unknown, which limits analysis of how representative the sample was of the overall population of doctoral-level counselor educators on this dimension.

The current study examined faculty perceptions of components of high-quality doctoral programs in CES. It would be important for future studies to survey current students or recent graduates of these doctoral programs to ascertain their perspectives on these components. As consumers of this advanced degree, students may have important perspectives on this issue. In addition, the current study only interviewed faculty who worked in CACREP-accredited CES programs. As accreditation standards define curriculum, these faculty may have been largely influenced by program components that are required by the current iteration of the CACREP standards. Faculty who work in non–CACREP-accredited programs may have different perceptions about what constitutes a high-quality doctoral program in CES.

Conclusion

The number of CACREP-accredited CES doctoral programs, enrolled doctoral students, and doctoral graduates has increased substantially within a fairly short (i.e., 4-year) period (CACREP, 2017, 2019a). As doctoral programs are increasingly developed and maintained, administrators and faculty may benefit from insights about how to build a program that is of high quality. By attending to high quality, a counselor education doctoral program is likely to provide a more optimal experience for the students who choose to enter the program. The findings from this study therefore may be important for administrators and faculty to consider when creating or attempting to sustain a doctoral program in CES.

 

Conflict of Interest and Funding Disclosure
The authors reported no conflict of interest or funding contributions for the development of this manuscript.

References

Adkison-Bradley, C. (2013). Counselor education and supervision: The development of the CACREP doctoral standards. Journal of Counseling & Development, 91(1), 44–49. https://doi.org/10.1002/j.1556-6676.2013.00069.x

American Counseling Association. (2014). ACA code of ethics. https://www.counseling.org/Resources/aca-code-of-ethics.pdf

Baltrinic, E. R., Jencius, M., & McGlothlin, J. (2016). Coteaching in counselor education: Preparing doctoral students for future teaching. Counselor Education and Supervision, 55(1), 31–45. https://doi.org/10.1002/ceas.12031

Barnes, B. J., & Randall, J. (2012). Doctoral student satisfaction: An examination of disciplinary, enrollment, and institutional differences. Research in Higher Education, 53(1), 47–75. https://doi.org/10.1007/s11162-011-9225-4

Bersola, S. H., Stolzenberg, E. B., Love, J., & Fosnacht, K. (2014). Understanding admitted doctoral students’ institutional choices: Student experiences versus faculty and staff perceptions. American Journal of Education, 120(4), 515–543. https://doi.org/10.1086/676923

Bickman, L., & Rog, D. J. (Eds.). (2008). The SAGE handbook of applied social research methods (2nd ed.). SAGE.

Burkholder, D. (2012). Returning counselor education doctoral students: Issues of retention, attrition, and perceived experiences. The Journal of Counselor Preparation and Supervision, 4(2), 6–23.

The Carnegie Classification of Institutions of Higher Education. (2019). Basic classification description. http://carnegieclassifications.iu.edu/classification_descriptions/basic.php

Cartwright, A. D., Avent-Harris, J. R., Munsey, R. B., & Lloyd-Hazlett, J. (2018). Interview experiences and diversity concerns of counselor education faculty from underrepresented groups. Counselor Education and Supervision, 57(2), 132–146. https://doi.org/10.1002/ceas.12098

Corbin, J., & Strauss, A. (2015). Basics of qualitative research: Techniques and procedures for developing grounded theory (4th ed.). SAGE.

Council for Accreditation of Counseling and Related Educational Programs. (2015). 2016 CACREP standards. http://www.cacrep.org/wp-content/uploads/2017/08/2016-Standards-with-citations.pdf

Council for Accreditation of Counseling and Related Educational Programs. (2017). Annual report 2016. http://www.cacrep.org/wp-content/uploads/2019/05/CACREP-2016-Annual-Report.pdf

Council for Accreditation of Counseling and Related Educational Programs. (2019a). Annual report 2018. http://www.cacrep.org/wp-content/uploads/2019/05/CACREP-2018-Annual-Report.pdf

Council for Accreditation of Counseling and Related Educational Programs. (2019b). Find a program. https://www.cacrep.org/directory

Creswell, J. W., & Poth, C. N. (2017). Qualitative inquiry and research design: Choosing among five traditions (4th ed.). SAGE.

Dickens, K. N., Ebrahim, C. H., & Herlihy, B. (2016). Counselor education doctoral students’ experiences with multiple roles and relationships. Counselor Education and Supervision, 55(4), 234–249. https://doi.org/10.1002/ceas.12051

Dollarhide, C. T., Gibson, D. M., & Moss, J. M. (2013). Professional identity development of counselor education doctoral students. Counselor Education and Supervision, 52(2), 137–150. https://doi.org/10.1002/j.1556-6978.2013.00034.x

Hill, N. R., Leinbaugh, T., Bradley, C., & Hazler, R. (2005). Female counselor educators: Encouraging and discouraging factors in academia. Journal of Counseling & Development, 83(3), 374–380. https://doi.org/10.1002/j.1556-6678.2005.tb00358.x

Hinkle, M., Iarussi, M. M., Schermer, T. W., & Yensel, J. F. (2014). Motivations to pursue the doctoral degree in counselor education and supervision. The Journal of Counselor Preparation and Supervision, 6(1), 1–19. https://doi.org/10.7729/61.1069

Hoskins, C. M., & Goldberg, A. D. (2005). Doctoral student persistence in counselor education programs: Student–program match. Counselor Education and Supervision, 44(3), 175–188. https://doi.org/10.1002/j.1556-6978.2005.tb01745.x

Kim, B., Stallings, R. P., Merlo, A. V., & Lin, A. W.-C. (2015). Mentoring in criminology and criminal justice doctoral education: Doctoral program coordinators’ perspectives. Journal of Criminal Justice Education, 26(4), 390–407. https://doi.org/10.1080/10511253.2015.1049630

Kuo, P. B., Woo, H., & Bang, N. M. (2017). Advisory relationship as a moderator between research self-efficacy, motivation, and productivity among counselor education doctoral students. Counselor Education and Supervision, 56(2), 130–144. https://doi.org/10.1002/ceas.12067

Lambie, G. W., Ascher, D. L., Sivo, S. A., & Hayes, B. G. (2014). Counselor education doctoral program faculty members’ refereed article publications. Journal of Counseling & Development, 92(3), 338–346. https://doi.org/10.1002/j.1556-6676.2014.00161.x

Limberg, D., Bell, H., Super, J. T., Jacobson, L., Fox, J., DePue, M. K., Christmas, C., Young, M. E., & Lambie, G. W. (2013). Professional identity development of counselor education doctoral students: A qualitative investigation. The Professional Counselor, 3(1), 40–53. https://doi.org/10.15241/dll.3.1.40

Lincoln, Y. S., & Guba, E. G. (2013). The constructivist credo. Left Coast Press, Inc.

Magnuson, S., Norem, K., & Lonneman-Doroff, T. (2009). The 2000 cohort of new assistant professors of counselor education: Reflecting at the culmination of six years. Counselor Education and Supervision, 49(1), 54–71. https://doi.org/10.1002/j.1556-6978.2009.tb00086.x

Merriam, S. B., & Tisdell, E. J. (2016). Qualitative research: A guide to design and implementation (4th ed.). Wiley.

Morrison, E., Rudd, E., Zumeta, W., & Nerad, M. (2011). What matters for excellence in PhD programs? Latent constructs of doctoral program quality used by early career social scientists. The Journal of Higher Education, 82(5), 535–563. https://doi.org/10.1353/jhe.2011.0029

Morse, J. M. (1994). Designing funded qualitative research. In N. K. Denzin & Y. S. Lincoln (Eds.), Handbook of qualitative research (pp. 220–235). SAGE.

Patton, M. Q. (2015). Qualitative research and evaluation methods: Integrating theory and practice (4th ed.). SAGE.

Perera-Diltz, D., & Sauerheber, J. D. (2017). Mentoring and other valued components of counselor educator doctoral training: A Delphi study. International Journal of Mentoring and Coaching in Education, 6(2), 116–127. https://doi.org/10.1108/IJMCE-09-2016-0064

Protivnak, J. J., & Foss, L. L. (2009). An exploration of themes that influence the counselor education doctoral student experience. Counselor Education and Supervision, 48(4), 239–256. https://doi.org/10.1002/j.1556-6978.2009.tb00078.x

Sink, C. A., & Lemich, G. (2018). Program evaluation in doctoral-level counselor education preparation: Concerns and recommendations. American Journal of Evaluation, 39(4), 496–510. https://doi.org/10.1177/1098214018765693

Smith, R. L., Flamez, B., Vela, J. C., Schomaker, S. A., Fernandez, M. A., & Armstrong, S. N. (2015). An exploratory investigation of levels of learning and learning efficiency between online and face-to-face instruction. Counseling Outcome Research and Evaluation, 6(1), 47–57. https://doi.org/10.1177/2150137815572148

Suri, H. (2011). Purposeful sampling in qualitative research synthesis. Qualitative Research Journal, 11(2), 63–75. https://doi.org/10.3316/QRJ1102063

 

Appendix

Interview Protocol

  1. For context, please briefly describe how you self-identify and your background. This information will be aggregated; individual participant responses will not be associated with any quotes in subsequent manuscripts.
    Gender:
    Sexual/Affective Orientation:
    Race and Ethnicity:
    Years as a Faculty Member in a Counselor Education Program:
    Years as a Faculty Member in a Doctoral Counselor Education Program:
    Number of Doctoral Counselor Education Programs You Have Worked In:
    Regions of Doctoral Counselor Education Programs You’ve Worked In:
  2. How might you define a “high-quality” doctoral program?
  3. What do you believe to be the most important components? The least important?
  4. How have you helped students to successfully navigate the dissertation process?
  5. Which strategies has your program used to recruit underrepresented students from diverse backgrounds? How successful were those?
  6. Which strategies has your program used to support and retain underrepresented students from diverse backgrounds? How successful were those?
  7. What guidance might you provide to faculty who want to start a new doctoral program in counseling with regards to working with administrators and gaining buy-in?
  8. What guidance might you provide to faculty who want to sustain an existing doctoral program in counseling with regards to working with administrators and gaining ongoing support?
  9. Last question. What other pieces of information would you like to share about running a successful, high-quality doctoral program?

 

Jennifer Preston, PhD, NCC, LPC, is a program director and department chair at Saybrook University. Heather Trepal, PhD, LPC-S, is a professor at the University of Texas at San Antonio. Ashley Morgan is a doctoral candidate at the University of Texas at San Antonio. Justin Jacques, ACS, LPC, CAC II, is a counselor at Johns Hopkins University. Joshua D. Smith, PhD, LCMHCA, LCASA, is a counselor at the Center for Emotional Health in Concord, North Carolina. Thomas A. Field, PhD, NCC, CCMHC, ACS, LPC, LMHC, is an assistant professor at the Boston University School of Medicine. Correspondence may be addressed to Jennifer Preston, Saybrook University, 55 Eureka Street, Pasadena, CA 91103, jpreston@saybrook.edu.

A Q Methodology Study of a Doctoral Counselor Education Teaching Instruction Course

Eric R. Baltrinic, Eric G. Suddeath

Many counselor education and supervision (CES) doctoral programs offer doctoral-level teaching instruction courses as part of their curriculum to help prepare students for future teaching roles, yet little is known about the essential design, delivery, and evaluation components of these courses. Accordingly, the authors investigated instructor and student views on the essential design, delivery, and evaluation components of a doctoral counselor education teaching instruction (CETI) course using Q methodology. Eight first-year CES doctoral students and the course instructor from a large Midwestern university completed Q-sorts, which were factor analyzed. Three factors were revealed, which were named The Course Designer, The Future Educator, and The Empathic Instructor. The authors gathered post–Q-sort qualitative data from participants using a semi-structured questionnaire, and the results from the questionnaires were incorporated into the factor interpretations. Implications for incorporating the findings into CES pedagogy and for designing, delivering, and evaluating CETI courses are presented. Limitations and future research suggestions for CETI course design and delivery are discussed.

Keywords: teaching instruction course, Q methodology, pedagogy, counselor education, doctoral students

 

Counselor education doctoral students (CEDS) need teaching preparation as part of their doctoral training (Hall & Hulse, 2010; Orr et al., 2008), including the completion of formal courses in pedagogy, adult learning, or teaching (Barrio Minton & Price, 2015; Hunt & Weber Gilmore, 2011; Suddeath et al., 2020). Teaching instruction courses may occur within or outside of the counselor education curriculum. Within counselor education, counselor education teaching instruction (CETI) courses are those doctoral-level seminar or semester-long curricular experiences designed to provide CEDS with the basic foundational knowledge for effective teaching (Association for Counselor Education and Supervision [ACES], 2016). CETI courses are cited as an important foundational training component for preparing CEDS for success in fulfilling future teaching roles (ACES, 2016). Additionally, simply possessing expert knowledge in one’s field (e.g., counseling) is not sufficient to support student learning in the classroom (ACES, 2016; Waalkes et al., 2018), a reality recognized in counselor education some time ago by Lanning (1990).

To increase the attention to and strengthen the rigor of teaching preparation, the Council for Accreditation of Counseling and Related Educational Programs (CACREP) developed standards for fostering students’ knowledge and skills in teaching through curricular and/or experiential training (CACREP, 2015). Specifically, within the CACREP (2015) teaching standards, CEDS need to learn “instructional and curriculum design, delivery, and evaluation methods relevant to counselor education” (Section 6, Standard B.3.d.). Although programs may use teaching internships (Hunt & Weber Gilmore, 2011), structured teaching teams (Orr et al., 2008), coteaching (Baltrinic et al., 2016), and teaching mentorships (Baltrinic et al., 2018) to address standards and train CEDS for their future roles as educators, teaching coursework is cited as the most common preparation practice (Barrio Minton & Price, 2015; Suddeath et al., 2020; Waalkes et al., 2018). Despite our knowledge that teaching coursework is commonly used for teaching preparation (Barrio Minton & Price, 2015; Suddeath et al., 2020), little is known about how counselor educators design and deliver these courses within counselor education. The few studies in counselor education and supervision that address teaching coursework (e.g., Suddeath et al., 2020; Waalkes et al., 2018) do so only cursorily or as one part of a broader inquiry into teacher preparation processes.

Perceived Effectiveness of CETI Courses
     Ideally, teaching coursework, whether offered within counselor education specifically or not, should provide doctoral students with a basic framework for effective teaching. Unfortunately, as previously mentioned, little is known about what constitutes a CETI course. Moreover, the few studies that address this training component suggest inconsistency in its perceived value and effectiveness. For example, early research by Tollerud (1990) and Olguin (2004) found no difference in terms of teaching self-efficacy between those with and without coursework, regardless of the number of courses taken. Similarly, in Hall and Hulse’s (2010) study examining counselor educators’ doctoral teaching preparation and perceived preparedness to teach, participants found their teaching coursework least helpful for preparing them to teach. To improve the effectiveness of their coursework, participants in Hall and Hulse’s study indicated a desire for multiple courses with a greater focus on the practical aspects of teaching, approaches for teaching adult learners, and more opportunities to engage in actual teaching during the course.

In a recent study by Waalkes et al. (2018), participants expressed similar sentiments, reporting a general lack of emphasis and rigor in teacher preparation compared to other core areas of development, especially regarding teaching coursework. Specific deficiencies included a lack of emphasis on pedagogy and teaching strategies and a discrepancy between their teaching coursework and their actual teaching responsibilities as current counselor educators (Waalkes et al., 2018). Given their experience, participants indicated a desire for greater integration of doctoral-level teaching coursework throughout their programs as well as “philosophy and theory, pedagogy/teaching strategies, understanding developmental levels of students, course design, assessment, and setting classroom expectations” (Waalkes et al., 2018, p. 73).

Unlike Tollerud (1990) and Olguin (2004), Suddeath et al. (2020) found that formal teaching coursework significantly predicted increased self-efficacy toward teaching. Furthermore, participants indicated that formal coursework strengthened their self-efficacy toward teaching slightly more than their fieldwork in teaching experiences did. However, it is unclear from this study what aspects of the CEDS’ coursework contributed to increased self-efficacy. In a study by Hunt and Weber Gilmore (2011), CEDS identified elements such as creating syllabi, exams, rubrics, and a philosophy of teaching, and receiving support and feedback from instructors and peers, as most helpful in their coursework experiences. Those who did not find the course helpful expressed a desire for more opportunities to engage in actual teaching. Overall, the literature addressing the relative effectiveness of teaching coursework suggests the need to (a) improve teaching courses, (b) connect teaching courses to additional teaching experiences, and (c) make teaching coursework a meaningful and impactful experience for CEDS.

Instructor Qualities and Course Delivery
     Counselor education research also suggests that instructor qualities and course delivery influence the learning experiences of counseling students (Malott et al., 2014; Moate, Cox, et al., 2017; Moate, Holm, & West, 2017). Regarding instructor qualities, two recent studies examining novice counselors’ instructor preferences within their didactic (Moate, Cox, et al., 2017) and clinical courses (Moate, Holm, & West, 2017) found that, overall, participants preferred instructors who were kind, supportive, empathic, genuine, and passionate about the course. Likewise, Malott et al. (2014) reported that instructors who were caring, which included characteristics such as respect, interest, warmth, and availability, were “essential in motivating learning” (p. 295). Moate and Cox (2015) also emphasized the importance of cultivating a supportive and safe learning environment for increasing students’ active participation and engagement in their learning.

Regarding course delivery, participants in didactic and clinical courses overall preferred instructors who were pragmatic and connected course material to their actual work as counselors (Moate, Cox, et al., 2017; Moate, Holm, & West, 2017). Within didactic courses specifically—which included career counseling, theories, ethics, and diagnosis—Moate, Cox, et al. (2017) emphasized students’ lack of preference for instructors who primarily utilized lecture or PowerPoint for instruction. This relates to the distinction between teacher-centered and learner-centered approaches. Those who use teacher-centered approaches rely on lecture as the primary mode of delivery, focusing on the transmission of content from the experienced expert to the inexperienced novice, which may foster passive learning (Moate & Cox, 2015). In contrast, those who use learner-centered approaches emphasize shared responsibility for learning, encouraging active engagement with and application of course content through collaborative learning activities that tap into the collective knowledge of the group (Malott et al., 2014; Moate & Cox, 2015).

Although Moate, Cox, et al. (2017) and Moate, Holm, and West (2017) sampled master’s-level rather than doctoral-level students, their findings suggested the importance of instructor qualities and approaches, as well as student perspectives, within course design and delivery. Moate, Cox, et al. (2017) and Moate, Holm, and West (2017) did not link instructor qualities to the training instructors received within doctoral CETI coursework, but an understanding of these connections may aid doctoral instructors in designing and delivering CETI courses that better meet student needs.

Regarding instructor qualities and approaches to course delivery within doctoral CETI courses specifically, our literature search identified two studies that minimally addressed these components. Participants in the studies of both Waalkes et al. (2018) and Hunt and Weber Gilmore (2011) emphasized the importance of feedback from professors and classmates within CETI courses for strengthening their preparedness to teach. Neither study described exactly how this feedback supported their preparedness to teach, the type of feedback received, or the instructor’s approach to delivering feedback.

The Current Study
     Teaching preparation is an essential component of CEDS’ training (ACES, 2016), as teaching and related responsibilities (a) consume a greater proportion of time than any other responsibility of a counselor educator (Davis et al., 2006) and (b) impact CEDS’ confidence and feelings of preparedness to teach (Hall & Hulse, 2010; Suddeath et al., 2020). Still, some findings suggest a lack of rigor concerning teaching preparation compared to other core doctoral training areas (e.g., research and supervision; Waalkes et al., 2018). Although teaching preparation research in general is gaining momentum, there are no findings clarifying what components of formal coursework most support students’ development as teachers. In fact, findings are mixed regarding its effectiveness (e.g., Suddeath et al., 2020; Waalkes et al., 2018). Furthermore, no in-depth research exists on how counselor educators implement formal teaching courses within counselor education or how those teaching courses are designed and delivered by counselor educators and experienced by CEDS. Yet, our experience tells us and research confirms (e.g., Waalkes et al., 2018) that counselor education programs increasingly require CEDS to engage in CETI courses as one way to develop teaching competencies, with some citing it as the most widely utilized way in which programs train CEDS to teach (ACES, 2016; Barrio Minton & Price, 2015; Suddeath et al., 2020).

As variability exists in how respective programs deliver CETI courses (Hunt & Weber Gilmore, 2011), we studied a single CETI course as a way to illustrate an example of common issues and potential discrepancies faced by students and instructors engaged in a doctoral CETI course. We examined this course, taking into account both experienced instructor and novice student views, to (a) reveal common views on ideal course design, delivery, and evaluation components among participants navigating a common curriculum; (b) identify any similar or divergent views between the instructor and students; and (c) determine how to design course content and instruction to meet the future needs of students. The study was guided by the research question: What are instructor and student views on the essential design, delivery, and evaluation elements needed for a CETI course?

Method

     Q methodology is a unique research method that combines the depth of qualitative data reduction with the objective rigor of by-person factor analysis (Brown, 1993). Researchers have effectively utilized this method in the classroom setting to facilitate personal discovery and to increase subject matter understanding (Watts & Stenner, 2012). Specifically, students’ self-perspectives are investigated and then related to other students’ views, which are then related to nuances within their own views (Good, 2003). Q methodology has also been effectively used as a pedagogical exercise to examine subjectivity in intensive samples of participants (McKeown & Thomas, 2013). Focusing on intensive samples, and even single cases, allows researchers to retain participants’ frames of reference while concurrently revealing nuances within their views, which may be lost within larger samples (Brown, 2019). Findings from such intensive samples nonetheless retain the rigor afforded by Q factor analysis.

We selected Q methodology for the current study, rather than a qualitative or case study approach (Stake, 1995), to reveal common and divergent viewpoints in relation to common stimulus items (i.e., a Q sample composed of ideal design, delivery, and evaluation components of CETI courses drawn from the literature). We also wanted both the instructor and students participating in the sampled doctoral CETI course to provide their subjective views on the optimal design, delivery, and evaluation components of a doctoral CETI course, while incorporating the rigorous features of quantitative analysis (Brown, 1980).

Concourse and Q Sample
     Specific steps were taken to develop the Q sample, which is the set of statements used to assist participants with expressing their views during the Q-sorting process. The first step is selecting a concourse, which is a collection of opinion statements about any topic (Stephenson, 2014). Many routes of communication contribute to the form and content of a concourse (Brown, 1980). The concourse for this study was composed of statements taken by the authors from select teaching literature and documents (e.g., ACES, 2016; McAuliffe & Erickson, 2011; West et al., 2013).  After carefully searching within these sources, researchers selected statements specifically containing teaching experts’ views on essential components for teaching preparation, in general, and CETI courses in particular. The concourse selection process resulted in over 240 concourse statements, which was too many for the final Q sample (Brown, 1970, 1980).

Second, the concourse of statements was reduced by the first author using a structured deductive Q sample design shown in Table 1 (Brown, 1970). Data reduction using a structured design results in a reduction of concourse statements into a manageable Q sample (McKeown & Thomas, 2013). Accordingly, data reduction proceeded with the removal of unclear, fragmented, duplicate, or unrelated statements until there were eight items for each of the types, resulting in the structured 48-item sample shown in the Appendix.

 

Table 1

Structured Q Sample

Dimensions       Types                                                   N
1. Design        a. Materials (Items 4, 5, 10, 13, 14, 23, 28, 39)       2
                 b. Experiences (Items 3, 22, 24, 25, 36, 37, 43, 45)
2. Delivery      c. Content (Items 2, 15, 17, 18, 26, 27, 35, 38)        2
                 d. Process (Items 6, 8, 12, 30, 32, 41, 44, 46)
3. Evaluation    e. Formative (Items 7, 20, 21, 29, 33, 40, 42, 47)      2
                 f. Summative (Items 1, 9, 11, 16, 19, 31, 34, 48)

*Q-set = D (Criteria) (Replications); D ([1₂] [2₂] [3₂]) (n); D (2) (2) (2); D = 8 combinations;
D (2) (2) (2) (6 replications); D = 48 statements for the Q sample.
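The arithmetic in the table note can be checked directly. The sketch below simply follows the note's formula as written (2 × 2 × 2 = 8 combinations, 6 replications); it is illustrative only, and the variable names are ours:

```python
# Arithmetic check of the structured Q-sample design formula from the
# table note: D (2)(2)(2) = 8 combinations, replicated 6 times = 48.
levels = [2, 2, 2]          # two levels per criterion, per the note
combinations = 1
for n in levels:
    combinations *= n       # 2 x 2 x 2 = 8 combinations

replications = 6
q_sample_size = combinations * replications
print(combinations, q_sample_size)  # 8 48
```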

 

Third, the 48-item Q sample was then evaluated by three expert reviewers using a content validity index (Paige & Morin, 2016). The first author solicited expert reviewers who had a minimum of 10 years of experience as counselor educators, had designed and delivered doctoral CETI courses, had published frequently on teaching and learning, and were familiar with Q methodology. The expert reviewers rated each of the 48 items on a 4-point scale using three criterion questions: 1) Is the statement clear and unambiguous as read by a counselor educator? 2) Is the statement clear and unambiguous as read by CEDS? and 3) Is the statement distinct from the other statements listed here? Items receiving a score of 3 (“Mostly”) or 4 (“Completely”) were included; items receiving a score of 2 (“Somewhat”) were reviewed and modified by the authors for appropriateness; items receiving a score of 1 (“Not at all”) were discarded from the sample. After the three expert evaluators completed the content validity index, the authors refined the Q sample by rewriting two items to improve clarity, eliminating one duplicate item, and adding an item the reviewers thought important. For the final step, two of the experts completed Q-sorts to assure that the final Q sample facilitated the expression of views on essential CETI course components. The results of these two pilot Q-sorts were not included in the data analysis.
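The screening rule described above can be expressed as a small computation. This is a hedged sketch, not the authors' procedure: `item_cvi` and `screen` are hypothetical names, the sample ratings are invented, and `screen` encodes one plausible reading of the inclusion rule (any rating of 1 discards the item, otherwise any rating of 2 sends it for revision):

```python
def item_cvi(ratings):
    """Item-level content validity index: the proportion of expert
    reviewers rating the item 3 ('Mostly') or 4 ('Completely') on the
    4-point scale described in the text."""
    return sum(1 for r in ratings if r >= 3) / len(ratings)

def screen(ratings):
    """One plausible reading of the inclusion rule from the text."""
    if any(r == 1 for r in ratings):
        return "discard"            # any 'Not at all' rating
    if any(r == 2 for r in ratings):
        return "review and modify"  # any 'Somewhat' rating
    return "include"                # all ratings 3 or 4

print(item_cvi([4, 3, 4]))  # 1.0 (all three reviewers rated >= 3)
print(screen([4, 3, 2]))    # review and modify
```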

Participant Sample
     Researchers followed McKeown and Thomas’ (2013) recommendations for selecting an intensive participant sample (i.e., fewer than 20 participants), which included a combination of purposeful and convenience sampling strategies (Patton, 2015) to obtain participants for the study. We purposefully selected the doctoral CETI course and the instructor because it was offered within a reputable, CACREP-accredited doctoral program; developed by a counselor educator known for teaching excellence and professional contributions; and taught and refined in an on-campus, in-person program by that same instructor for over 16 years. Additionally, the participants engaged in the course at the time of investigation constituted a convenience sample of eight first-year CEDS. Participants collectively represented a group of individuals holding similar theoretical interests and the ability to provide insight into the topic of investigation (Brown, 1993).

All nine participants were from a large, top-ranked counselor education program located in the Midwest. Seven of the student participants identified as White cisgender females, and one identified as a cisgender Asian male. Four student participants were in the 25- to 30-year-old range, and four were in the 31- to 35-year-old range. The instructor, who identified as a White cisgender male, was in the 50- to 55-year-old range. None of the student participants reported having previous teaching experience.

Data Collection
     After obtaining IRB permission, the first author collected the initial consent, demographic, Q-sort, and post–Q-sort written data from the students and instructor using a semi-structured questionnaire. The nine participants (n = 8 students; n = 1 instructor) were each asked to rank-order the 48 items in the Q sample along a forced choice grid from most agree (+4) to most disagree (-4). The conditions of instruction used for the students’ and instructor’s Q-sorts stemmed directly from the research question. After completing this Q-sort, participants were asked by the first author to provide written responses, using a semi-structured questionnaire, for the top three items with which they most (+4) and least (-4) agreed and were asked to comment on any other items of significance.

The first author asked the course instructor to respond in writing to three questions, in addition to those prompts contained in the semi-structured questionnaire. This was done to add nuance and context to the results. The additional questions and highlights from the instructor’s responses are shown in Table 2.

Data Analysis

Nine Q-sorts completed by participants were each entered into the PQMethod software program V. 2.35 (Schmolck, 2014). A correlation matrix was then generated reflecting the “nature and extent of relationships” among all the participants’ Q-sorts in the data set (Watts & Stenner, 2012, p. 111). The correlation matrix served as the basis for factor analysis, which was completed using the centroid method (Brown, 1980). Essentially, factor analysis allows researchers to examine the correlation matrix for patterns of similarity among the participants’ Q-sorts. In the current study, we were interested in similar and divergent patterns among the instructor’s and students’ Q-sorts on essential doctoral CETI course components. In other words, data analysis in Q studies is possible because all participants rank-order the same Q sample of items, which allows researchers to intercorrelate those Q-sorts for subsequent factor analysis.

Given the low number of participants, we initially extracted five factors from the correlation matrix, which yielded few significant factor loadings (i.e., correlation coefficients reflecting the degree to which a participant’s Q-sort correlates with a factor). We therefore extracted three factors instead, which yielded a higher number of significant factor loadings. The three factors were rotated using the varimax method, which we selected because (a) we had no preconceived theoretical notions regarding the findings, (b) we were blind to participant identifying information in the data, and (c) we intended to obtain dominant views among participants within the same course (Watts & Stenner, 2012). The varimax factor rotation method helps researchers to identify individual factor loadings “whose positions closely approximate those of the factor” (Watts & Stenner, 2012, p. 142). In Q methodology, a factor is a composite or ideal Q-sort to which individual participants correlate (Watts & Stenner, 2012). Overall, the data analysis yielded a three-factor solution containing at least two significant factor loadings on each factor, which is the minimum suggested number for a factor to hold significance (Brown, 1980). Notably, the final three-factor solution contained significant factor loadings for all nine of the study participants, which supports the rigor of the collective viewpoints (i.e., factors) discussed in the results.
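The pipeline described above (intercorrelate Q-sorts, extract factors, varimax-rotate) can be sketched in code. This is not the PQMethod implementation: principal-axis extraction via eigendecomposition stands in for the centroid method, the Q-sort data are synthetic, and all function and variable names are ours:

```python
import numpy as np

def byperson_factors(sorts, n_factors=3, iters=100):
    """Sketch of by-person factor analysis: rows are Q-sample items,
    columns are participants' Q-sorts. Principal-axis extraction (via
    eigendecomposition of the participant correlation matrix) stands in
    for the centroid method, followed by varimax rotation."""
    r = np.corrcoef(sorts, rowvar=False)        # participant x participant correlations
    vals, vecs = np.linalg.eigh(r)
    order = np.argsort(vals)[::-1][:n_factors]  # largest eigenvalues first
    L = vecs[:, order] * np.sqrt(vals[order])   # unrotated loadings

    # Varimax rotation (standard iterative SVD formulation)
    p, k = L.shape
    R = np.eye(k)
    for _ in range(iters):
        lam = L @ R
        u, _, vt = np.linalg.svd(
            L.T @ (lam**3 - lam @ np.diag((lam**2).sum(axis=0)) / p))
        R = u @ vt
    return L @ R  # rotated participant-by-factor loadings

rng = np.random.default_rng(0)
sorts = rng.normal(size=(48, 9))  # 48 items x 9 participants (synthetic)
load = byperson_factors(sorts)
print(load.shape)  # (9, 3): one row of loadings per participant
```

Each row of the result plays the role of a participant's loadings on the three factors; in the study, loadings above the significance criterion determined factor membership.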

 

Table 2

Summary of Instructor Responses

Interview Questions and Instructor Responses (Factor A Exemplar)

1. What is important for planning, delivering, and evaluating doctoral-level counselor education teaching instruction courses?

     I think of the different elements that go into teaching and I think these are the things that students need to be exposed to, such as: developing a teaching philosophy, creating a syllabus, evaluating other instructors’ syllabi, making selections on textbooks, looking at equity in the classroom, backwards design of curriculum, having a small group teaching experience, having a large group teaching experience, using experiences in the classroom for developing reflective practice, and reviewing essential readings in the teaching field. I also think it is essential that we teach students how to use online platforms, so they have exposure and, to what degree we can, competency, to online platforms.

2. What are some significant lessons learned over the past 16 years as an instructor of a counselor education teaching instruction course?

     This course is a change in pace for most students in my program. For that reason, students generally seem excited about this course. Having them excited about taking the course makes teaching the course a pure joy. Along with the excitement, students bring a level of naïveté to the topic. They have been students, but they do not have a lot of exposure to being a teacher. In my field of counseling, students at the doctoral level have exposure to counseling, so they come in with a level of exposure and expertise in that area, but in teaching it seems all new to them. And that makes a course fun for me.

     I believe the hardest thing for students to learn is to set aside their own passions and misconceptions about what their students need to know in service of what they must know to be an effective counselor. What their passions are and what students need to know are not always the same thing. I notice students are generally apprehensive about their performance when it comes to teaching. I have to constantly remind myself that it doesn’t come automatically to them as it does to me, having taught many years. So I have to reintroduce myself to the idea of performance anxiety in the classroom. That’s where I think the in-class reflective practice piece fits in nicely for them. They get a chance to think and talk through their anxiety about teaching.

3. What role does a counselor education teaching instruction course serve for preparing doctoral students to teach?

     I can’t imagine a program that does not have a teaching instruction course, preferably taught within the program, that would be able to adequately prepare students for future faculty roles. Most of my career has been to emphasize the need for good faculty instruction on teaching in the counseling field.

 

Results

The data analysis revealed three significantly different viewpoints (i.e., Factors A, B, and C) on the essential design, delivery, and evaluation elements needed for a doctoral CETI course. All participants in the study were significantly associated with one of the three factors. Specifically, one student participant and the course instructor were significantly associated with Factor A (i.e., had factor loadings of .37 or higher; .50 and .84, respectively). Five of the eight student participants were significantly associated with Factor B (.72, .70, .66, .78, and .60, respectively). Two of the eight student participants were significantly associated with Factor C (.75 and .87, respectively). Select quotes from participants’ post-sort questionnaires were incorporated into the factor interpretations below to provide contextual detail for each factor.
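The .37 loading criterion is consistent with the conventional standard-error rule in Q methodology, under which a loading is significant at p < .01 when it exceeds 2.58/√N, where N is the number of Q-sample items (here, 48). The following sketch illustrates that check; it is offered as an illustrative assumption based on Brown (1980), not the authors' reported computation:

```python
from math import sqrt

def significance_threshold(n_items: int, z: float = 2.58) -> float:
    """Standard-error criterion for a significant Q-sort factor loading
    (z = 2.58 corresponds to p < .01; cf. Brown, 1980)."""
    return z / sqrt(n_items)

def significant_loadings(loadings, n_items):
    """Keep only the loadings that meet the significance criterion."""
    cutoff = significance_threshold(n_items)
    return [x for x in loadings if abs(x) >= cutoff]

# With the 48-item Q sample used here, the cutoff is 2.58 / sqrt(48) ≈ .372,
# which rounds to the .37 criterion reported above.
print(round(significance_threshold(48), 2))       # → 0.37
print(significant_loadings([.50, .84, .30], 48))  # → [0.5, 0.84]
```

PQMethod (Schmolck, 2014), the software cited in the references, applies a comparable significance rule when flagging defining sorts for each factor.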

Factor A: The Course Designer
     Factor A is most distinguished by the view that CETI courses should result in students having the ability to design their own counseling courses, which differs from Factors B and C (Item 37; +4, 0, 0, respectively). This view is captured in the instructor’s semi-structured questionnaire response to Item 37:

I cannot imagine the purpose of having a course for teaching in counselor education without the purposeful outcome being to create a course. The ability to do course development, to me, is the skillset that doctoral graduates should have from a teaching course.

The student associated with this factor added, “I want this course to help me be successful, which means I have to practice . . . making a syllabus, working with students . . . the basis of the entire course is to learn to teach!” Learning how to design evaluations of the teaching and learning process (Item 48, +2) is also considered an essential CETI course component for Factor A, as are discussions about selecting textbooks (Item 14, +2) and opportunities to learn about classroom management (Item 18, +2). There was even stronger agreement that CETI courses need to include information about designing a syllabus (Item 39, +3) and constructing related course objectives (Item 33, +3), which would culminate in a plan for actual teaching experiences (Item 35, +3). Given the preference for technical and design elements in CETI courses, the authors have named Factor A The Course Designer.

Factor A placed less emphasis on the developmental level (Item 25, -3) and cultural differences (Item 38, -1) of students as essential components of a CETI course. But that does not suggest these elements are unimportant, as one participant illustrated: “All instructors need to be mindful of students’ cultural differences. Learning can only be effective in an environment conducive of understanding students’ differences.” Importantly, the Factor A view was not limited to just design and technical components. In fact, Factor A, like B and C, viewed having some type of teaching experience as an essential element of a CETI course (Item 46; +4, +4, +1, respectively).

Factor B: The Future Educator
     The Factor B viewpoint, which the authors named The Future Educator, placed importance on the use of interactive (Item 6, +4) and experiential (Item 45, +3) activities, more so than course design, as essential elements of a CETI course. In contrast to Factors A (-4) and C (-4), Factor B participants believed in the helpfulness of teaching to their peers (Item 44, +2). However, Factor B was most distinguished from Factors A (+1) and C (-1) in its belief that CETI courses should prepare students for future faculty roles (Item 43, +4). Collectively, individuals on this factor all agreed that the role of a CETI course was to help them be successful as future faculty members, and as one student stated, “Students need to be prepared for future faculty roles including teaching, so students need to be prepared to teach.”

     Factor B differed from Factors A and C on the importance of evaluation of students’ learning (Item 20, -1) and textbook selection (Item 14, -2), but agreed that videotaping students’ experiences is not an essential component of CETI courses (Item 11, -4). Regarding Item 11, participants noted, “Video recordings may not demonstrate the entire experience, including feelings and opinions of students and teachers.” Additionally, CEDS noted that being video-recorded could potentially “make students in the class act differently,” and, “if there is live evaluation” contained in a CETI course, “including guided reflection and time to process feedback, then video isn’t necessary.” This is an interesting finding given that many of the participants were trained in counseling programs that used video work samples as the basis for supervision feedback related to counseling skills development.

Factor C: The Empathic Instructor
     Factor C represented a preference for instructor qualities and intentional communication (i.e., delivery) more so than design issues (Factor A) or future faculty preparation (Factor B). For instance, Factor C participants believed that instructors of CETI courses should be passionate about teaching (Item 30, +4), compared to -1 and +2 for Factors A and B, respectively. As one student put it, “I feel as though passion fuels everything else in the course: effort, preparation, and availability of the instructor. Passion is everything.” According to Factor C, CETI instructors should be approachable (Item 32, +4), model and demonstrate how to provide feedback for future student encounters (Item 26, +3), and check in often with students to determine their level of understanding (Item 21, +3). When designing, delivering, and evaluating CETI courses, Factor C participants also highlighted the developmental level (Item 25, +2) and cultural differences (Item 38, +4) of students, placing higher importance on these items than Factors A and B did.

Factor C was also distinguished by what is not essential for a CETI course, such as planning for a teaching experience (Item 35, -1), processing fellow classmates’ teaching experiences (Item 29, -3), and being able to design evaluations of teaching and learning (Item 48, -4), which, as one participant stated, are “usually dictated by the institution where you are employed.” Factor C placed less emphasis on the specific (i.e., content-oriented) feedback instructors provide to students on their teaching (Item 42, -1) in favor of the instructor’s approachability. As one participant described, “There is not growth without feedback . . . if the instructor is approachable then the student will feel as if they can approach the instructor with any concerns, including any items on this Q sample.” Given the preference for instructor qualities and communication, the authors have named Factor C The Empathic Instructor.

Consensus
     Despite the distinguishing perspectives contained in each individual factor, significant areas of consensus existed among factors with respect to particular Q sample items. For example, Factors A, B, and C believed that designing a syllabus is an important aspect of a CETI course (Item 39; +3, +3, and +2, respectively). All three factors commonly acknowledged that CETI course instructors ought to consider the pedagogy used for course delivery (Item 10; 0, +1, and +1, respectively), and that CETI courses should prepare doctoral students for teaching internships (Item 22; 0, +1, and 0, respectively). CETI courses should address classroom management issues as well (Item 18; +2, +1, and 0, respectively). Finally, CETI courses should contain intentional student engagement efforts (Item 3; +2, +1, and +2, respectively) with regular and relevant discussions (Item 8; +1, +3, and +2, respectively).

Consensus among factors also existed around the non-essential elements of a CETI course. Specifically, all three factors expressed that midterm (Item 16; -3, -3, and -2, respectively) and final course exams (Item 19; -3, -4, and -3, respectively) were not essential components of a CETI course. One male participant summarized this point: “I think students’ progress can be evaluated by exploring what students think they learn, how much insight they gain, and how they plan to apply what they learn in the class, rather than using exams or pre/post-tests.” Similarly, a female participant stated, “Exams will not show progress in teaching skills. You need real life experiences and discussion.” Overall, participants across factors believed that exams promote memorization of content more so than the fair and commensurate evaluation of teaching knowledge and skills. In other words, they believed that CETI courses should be more experiential in nature.

Discussion

The purpose of this study was to gain insight into the essential design, delivery, and evaluation elements needed for a CETI course. The results produced three unique views on this topic. Although participants’ views varied, with Factor A emphasizing the technical components of creating a course, Factor B emphasizing experiential components and future faculty roles, and Factor C emphasizing the character and qualities of the instructor, there were several areas of consensus. Specifically, participants across all three factors agreed on the importance of CETI courses for (a) preparing CEDS for teaching internships (Hunt & Weber Gilmore, 2011; Orr et al., 2008; Waalkes et al., 2018); (b) using pedagogy to guide CETI course delivery (ACES, 2016; Waalkes et al., 2018); (c) designing syllabi (Hall & Hulse, 2010; Hunt & Weber Gilmore, 2011); and (d) developing teaching skills such as classroom management, engaging students, and facilitating class discussions (Hall & Hulse, 2010; Hunt & Weber Gilmore, 2011; Waalkes et al., 2018). These points of consensus align with previous counselor education literature, including participants’ desire for CETI courses to prepare them for teaching as counselor educators (Baltrinic et al., 2016).

An expected finding within Factor C is the influence of the instructor’s qualities (e.g., approachability and passion) and delivery (e.g., seminar format) on participants’ views of the CETI course (Moate, Cox, et al., 2017). The instructor delivered the course in a seminar format that emphasized student leadership for content sharing and de-emphasized lecture, which relates to consensus factor scores on Item 40, “In a teaching course, I should be evaluated on my ability to do a lecture.” However, it is unclear from the data how participants understood the purpose or role of lectures for engaging students in the classroom. Notably, participants delivered counseling content to master’s-level students as part of their teaching experiences for the course and would thus benefit from feedback on their performance.

Many have suggested that utilizing lecture as the principal mode of delivery fosters passive learning and does not necessarily support students’ engagement in course content or development of decision-making, problem-solving, or critical-thinking skills (e.g., Malott et al., 2014; Moate & Cox, 2015). Participants in Waalkes et al.’s (2018) study indicated that their training primarily equipped them to lecture, which they reported did not fully prepare them for their roles as educators. Although Moate and Cox (2015) do not recommend utilizing lecture as the only method for helping students engage with course content, both they and Brookfield (2015) emphasized the false dichotomy that exists between teacher-centered approaches, which are typically characterized by lecturing, and learner-centered approaches, which often rely on using discussions as a primary mode of teaching.

Rather than dismissing lectures entirely, instructors can utilize lectures to provide a broad overview of the course content, to explain difficult or complex concepts with frequent examples, to generate students’ engagement and interest in a topic, and/or to model the types of skills and dispositions instructors would like to foster in students (Brookfield, 2015; Malott et al., 2014; Moate & Cox, 2015). Thus, lectures can serve as a starting point to model and frame course content for further discussion and application using other teaching methods (Moate & Cox, 2015). Overall, we believe that it is important for students to possess a variety of teaching methods for engaging students with course content and understand when and how to apply various methods effectively, which requires CETI instructor feedback and support.

Surprising results included participants’ low rankings of Item 12 regarding the importance of role-playing, of Item 7 regarding the importance of peer feedback, and of Item 11 regarding the use of video recordings of teaching. This last finding contrasts with Hunt and Weber Gilmore’s (2011) study, in which participants found “sharing and critiquing a video of us teaching” to be an especially valuable component of their coursework (p. 147). Current counselor education research consistently affirms the importance of, and students’ reported desire for, formal coursework that incorporates practical teaching components related to the actual work of a counselor educator (Hall & Hulse, 2010; Hunt & Weber Gilmore, 2011). Instructors who employ learner-centered approaches often emphasize the role of peers and the use of peer feedback to enhance student learning (Moate & Cox, 2015). It could be that participants assumed that role-plays pertain to practicing counseling-related interventions. As such, it may prove helpful if counselor educators consider situational uses for role-plays, such as managing difficult situations in the classroom (e.g., classroom management) or addressing sensitive topics related to multicultural concerns, among others (Hunt & Weber Gilmore, 2011). Instructors can model how to facilitate these skills, which can be followed up with dyadic or triadic student role-plays.

Additionally, participants did not place importance on peer feedback over the instructor’s feedback or learning how to provide feedback to their future students in the instructor role. Instead, participants favored feedback from the instructor on their own teaching skills, the proposition here being that instructors can provide feedback from a position of experience, more so than peers who do not have teaching experience. It is plausible that CEDS attending CETI courses need feedback about how to provide feedback and perceive this as an important teaching skill (Hunt & Weber Gilmore, 2011). This is important because students in CETI courses are likely (a) learning the course-related content and (b) learning the pedagogy for delivering counseling-related content in their future classrooms (ACES, 2016).

Implications
     Findings support two important implications for counselor educators, the first of which is illustrated by the instructor from this study: “What students’ passions are and what students need to know are not always the same thing.” One can reasonably expect discrepancies between the perceptions of the instructor and those of students, as evidenced by some participants’ dissatisfaction with the content and delivery of their CETI courses (e.g., Hall & Hulse, 2010; Waalkes et al., 2018). However, we encourage counselor educators to consider students’ views (i.e., factors) as they teach, even if they feel their own views and curriculum support best practice. We also acknowledge that some instructors may have limited autonomy in the construction of CETI course syllabi and assignments because of accreditation requirements.

In thinking about the implications for counselor educators, tailoring a CETI course, to the extent possible, to the reported preferences and needs of the students seems essential for preparing them for future teaching (Waalkes et al., 2018) as well as for increasing student engagement (e.g., Moate & Cox, 2015). For example, counselor educators can incorporate technology, curriculum, and course design elements into CETI courses (Factor A). Counselor educators can link teaching experiences to future faculty roles by exploring them in the context of accreditation requirements, their impact on tenure and promotion practices (Davis et al., 2006), and managing teaching loads alongside other duties and institutional demands (Silverman, 2003; Factor B). Finally, counselor educators can incorporate Factor C views into their CETI courses by attending to instructor qualities: modeling passion, demonstrating approachability, and frequently checking in on students’ progress (Malott et al., 2014). Additionally, the authors suggest that counselor educators incorporate aspects of all three factors into their own teaching practice and link the CETI course to future supervised teaching experiences such as teaching practica or internships, as suggested by Waalkes et al. (2018).

Second, counselor educators should obtain and incorporate CEDS’ perspectives early when designing, delivering, and evaluating CETI courses. Doing so can help instructors investigate (formally or informally) the impact of their instructional strategies and curriculum on CEDS’ teaching skill development and is recommended as a best practice by Malott et al. (2014). It is common practice to collect student opinions of instruction at the end of the semester, and many instructors collect ongoing data on how students are progressing during the semester. Q methodology could be used in ways similar to this study to help instructors positively influence CEDS’ learning. Additionally, counselor educators could utilize Q methodology to identify factors and use those factors to improve their own performance, to design other teaching-related courses, and to affect CEDS’ classroom experiences and learning outcomes. Counselor educators could also compare their CETI courses with other instructors’ courses to identify trends, or use Q methodology to identify factors within or between CETI courses over time.

Limitations and Future Research
     Q methodology studies gather and rigorously analyze data to reveal common viewpoints among participants. Factors do not generalize in Q studies the same way as findings from traditional factor analysis (i.e., R methodology; Brown, 1980). Rather, factors are simply collections of opinion, the structure of which may or may not exist in other counselor education settings. However, CETI instructors can test this proposition by having students in other CETI courses complete Q-sorts with the current Q sample or by developing and testing relevant Q samples of their own design. In fact, because the Q sample was used in one class, researchers are encouraged to test propositions with larger samples across programs to see if the factors exist in multiple settings. Finally, because the participants in the current study were a convenience sample from a brick-and-mortar program composed mostly of White females within a single course, participant diversity was lacking. Future studies could examine the views of students of color and international students in larger samples across multiple courses and multiple formats (e.g., online and hybrid programs).

Additional conditions of instruction could be added to expand teaching instruction viewpoints using a single-case design approach (Baltrinic et al., 2018). Supporting Q findings with qualitative information from in-depth interviews with student and instructor factor exemplars would add more nuance to the existing factors as well. Finally, researchers could follow our approach by developing and administering their own teaching instruction Q-sorts before beginning a CETI course to tailor the development and delivery of the course to the needs of their students. This would allow CETI instructors to develop studies that may reveal idiosyncratic and shared experiences (Stephenson, 2014) related to their programs’ CETI course design, delivery, and evaluation.

Conclusion
     We proposed in this article that doctoral CETI courses offer a starting point for CEDS’ teaching preparation. We elaborated further that despite accreditation guidelines and the anecdotal experiences of counselor educators in various programs, little is known about what specifically to include in a CETI doctoral course. Counselor educators and CEDS alike can honor course variability, anecdotal experiences, and academic freedoms, while providing some structure to their CETI courses. This goal can be achieved by acknowledging that CETI course design, delivery, and evaluation include professional-level, student, and instructor perspectives. The Q factors in the current study revealed one way to include multiple perspectives and to identify preferred and recognizable CETI course components.

 

Conflict of Interest and Funding Disclosure
The authors reported no conflict of interest
or funding contributions for the development
of this manuscript.

 

References

Association for Counselor Education and Supervision. (2016). ACES Teaching Initiative Taskforce best practices in teaching in counselor education report 2016. https://acesonline.net/wp-content/uploads/2018/11/ACES-Teaching-Initiative-Taskforce-Final-Report-2016.pdf

Baltrinic, E. R., Jencius, M., & McGlothlin, J. (2016). Coteaching in counselor education: Preparing doctoral students for future teaching. Counselor Education and Supervision, 55(1), 31–45. https://doi.org/10.1002/ceas.12031

Baltrinic, E. R., Moate, R. M., Hinkle, M. G., Jencius, M., & Taylor, J. Z. (2018). Counselor educators’ teaching mentorship styles: A Q methodology study. The Professional Counselor, 8(1), 46–59.
https://doi.org/10.15241/erb.8.1.46

Barrio Minton, C. A., & Price, E. (2015, October). Teaching the teacher: An analysis of teaching preparation in counselor education doctoral programs. Session presented at the Association for Counselor Education and Supervision Biennial Conference, Philadelphia, PA.

Brookfield, S. D. (2015). The skillful teacher: On technique, trust, and responsiveness in the classroom (3rd ed.). Jossey-Bass.

Brown, S. R. (1970). On the use of variance designs in Q methodology. The Psychological Record, 20, 179–189. https://doi.org/10.1007/BF03393928

Brown, S. R. (1980). Political subjectivity: Applications of Q methodology in political science. Yale University Press.

Brown, S. R. (1993). A primer on Q methodology. Operant Subjectivity, 16(3/4), 91–138.
https://doi.org/10.15133/j.os.1993.002

Brown, S. R. (2019). Subjectivity in the human sciences. The Psychological Record, 69, 565–579.
https://doi.org/10.1007/s40732-019-00354-5

Council for Accreditation of Counseling and Related Educational Programs. (2015). 2016 CACREP standards. http://www.cacrep.org/wp-content/uploads/2017/08/2016-Standards-with-citations.pdf

Davis, T. E., Levitt, D. H., McGlothlin, J. M., & Hill, N. R. (2006). Perceived expectations related to promotion and tenure: A national survey of CACREP program liaisons. Counselor Education and Supervision, 46(2), 146–156. https://doi.org/10.1002/j.1556-6978.2006.tb00019.x

Good, J. M. M. (2003). William Stephenson, quantum theory, and Q methodology. Operant Subjectivity, 26(4), 142–156. https://doi.org/10.15133/j.os.2003.009

Hall, S. F., & Hulse, D. (2010). Perceptions of doctoral level teaching preparation in counselor education. The Journal of Counselor Preparation and Supervision, 1(2), 2–15. https://doi.org/10.7729/12.0108

Hunt, B., & Weber Gilmore, G. (2011). Learning to teach: Teaching internships in counselor education and supervision. The Professional Counselor, 1(2), 143–151. https://doi.org/10.15241/bhh.1.2.143

Lanning, W. (1990). An educator/practitioner model for counselor education doctoral programs. Counselor Education and Supervision, 30(2), 163–169. https://doi.org/10.1002/j.1556-6978.1990.tb01193.x

Malott, K. M., Hall, K. H., Sheely-Moore, A., Krell, M. M., & Cardaciotto, L. (2014). Evidence-based teaching in higher education: Application to counselor education. Counselor Education and Supervision, 53(4), 294–305. https://doi.org/10.1002/j.1556-6978.2014.00064.x

McAuliffe, G. J., & Erickson, K. (Eds.). (2011). Handbook of counselor preparation: Constructivist, developmental, and experiential approaches. SAGE.

McKeown, B., & Thomas, D. B. (2013). Q methodology (2nd ed.). SAGE.

Moate, R. M., & Cox, J. A. (2015). Learner-centered pedagogy: Considerations for application in a didactic course. The Professional Counselor, 5(3), 379–389. http://doi.org/10.15241/rmm.5.3.379

Moate, R. M., Cox, J. A., Brown, S. R., & West, E. M. (2017). Perceptions of helpfulness of teachers in didactic courses. Counselor Education and Supervision, 56(4), 242–258. https://doi.org/10.1002/ceas.12083

Moate, R. M., Holm, J. M., & West, E. M. (2017). Perceived helpfulness of teachers in clinical courses. The Professional Counselor, 7(2), 155–168. https://doi.org/10.15241/rmm.7.2.155

Olguin, D. L. C. (2004). Determinants of preparation through perceptions of counseling and teaching self-efficacy among prospective counselor educators (Order No. 3127784). Available from ProQuest Dissertations & Theses Global.

Orr, J. J., Hall, S. F., & Hulse-Killacky, D. (2008). A model for collaborative teaching teams in counselor education. Counselor Education and Supervision, 47(3), 146–163. https://doi.org/10.1002/j.1556-6978.2008.tb00046.x

Paige, J. B., & Morin, K. H. (2016). Q-sample construction: A critical step for a Q-methodological study. Western Journal of Nursing Research, 38(1), 96–110. https://doi.org/10.1177/0193945914545177

Patton, M. Q. (2015). Qualitative research and evaluation methods (4th ed.). SAGE.

Schmolck, P. (2014). PQMethod (Version 2.35) [Computer software]. http://schmolck.userweb.mwn.de/qmethod

Silverman, S. (2003). The role of teaching in the preparation of future faculty. Quest, 55(1), 72–81.
https://doi.org/10.1080/00336297.2003.10491790

Stake, R. E. (1995). The art of case study research. SAGE.

Stephenson, W. (2014). General theory of communication. Operant Subjectivity: The International Journal of Q Methodology, 37(3), 38–56. https://doi.org/10.15133/j.os.2014.011

Suddeath, E., Baltrinic, E., & Dugger, S. (2020). The impact of teaching preparation practices on self-efficacy toward teaching. Counselor Education and Supervision, 59(1), 59–73. https://doi.org/10.1002/ceas.12166

Tollerud, T. R. (1990). The perceived self-efficacy of teaching skills of advanced doctoral students and graduates from counselor education programs (Order No. 9112495). Available from ProQuest Dissertations & Theses Global.

Waalkes, P. L., Benshoff, J. M., Stickl, J., Swindle, P. J., & Umstead, L. K. (2018). Structure, impact, and deficiencies of beginning counselor educators’ doctoral teaching preparation. Counselor Education and Supervision, 57(1), 66–80. https://doi.org/10.1002/ceas.12094

Watts, S., & Stenner, P. (2012). Doing Q methodological research: Theory, method and interpretation. SAGE.

West, J. D., Bubenzer, D. L., Cox, J. A., & McGlothlin, J. M. (Eds.). (2013). Teaching in counselor education: Engaging students in learning. Association for Counselor Education and Supervision.

 

Appendix

College Teaching Q Sample Statements and Factor Array

# Q Sample Statement A B C
1 Peers should be able to review the courses I develop as part of a teacher training course. -1 -2 -2
2 Teacher training courses should have case examples. -2  0  1
3 Designing student engagement is important for a course on teaching.  2  1  2
4 Courses in teacher training should have relevant technology resources.  1 -2 -2
5 Learning how to assess students’ learning is important in a teaching course.  3  0  2
6 Courses in teacher training should have interactive activities.  0  4  1
7 I should have student feedback for the classes I teach while a student in a teacher training course. -2  0  2
8 Teacher training courses should have relevant discussion.  1  3  2
9 Teacher training courses should have student feedback mechanisms for the instructor.  0  0  0
10 A teaching course should consider the pedagogy used for course delivery.  0  1  1
11 I believe that my teaching should be videoed in my teacher training course. -1 -4 -1
12 Having role-plays on teaching is important for a teaching course. -4 -3  0
13 Teaching instruction courses should incorporate adult learning theories.  0 -1  0
14 Selecting a textbook is an important part of learning in a teaching course.  2 -2  1
15 Content in teacher training courses should be up to date. -1  1 -1
16 Teacher training courses should have midterm evaluations of my work in the course. -3 -3 -2
17 Teacher training courses should have breakout groups. -3 -3 -3
18 Teacher training courses should address classroom management.  2  1  0
19 Teacher training courses should have course exams. -3 -4 -3
20 A method to evaluate students’ learning is important to course design.  2 -1  1
21 Instructors of teacher training courses should check in often with students to determine their level of understanding. -1  0  3
22 Teaching instruction courses should prepare students for teaching internships.  0  1  0
23 Teacher training courses should have assigned readings on varied aspects of teaching and learning.  1 -2 -1
24 Considering students’ personal and cultural characteristics is important in designing a teaching course.  0  2  1
25 Considering students’ developmental level is important in designing a teaching course. -3 -1  2
26 Learning how to provide feedback to future students is important for a teaching course.  1  0  3
27 In a teacher training course, I should be expected to create a teaching philosophy.  4  1  3
28 Teacher training classes should have supplemental learning materials. -1 -2 -2
29 I should process fellow classmates’ teaching experiences as a part of a teacher training course.  1 -1 -3
30 The instructor in a teacher training course should be passionate about teaching. -1  2  4
31 In a teacher training course, I should be able to design a teaching instruction course. -4 -1 -4
32 Instructors of teacher training courses should be approachable.  0  2  4
33 Creating course objectives are important to a teaching course.  3  0  3
34 Teacher training courses should have pre/posttest of students’ learning. -2 -4 -3
35 Planning for a teaching experience is an important part of the course.  3  2 -1
36 Portions of teacher training courses should include lectures. -2 -1 -2
37 In a teacher training course, I should be able to design a counseling course.  4  0  0
38 Instructors of teacher training courses should anticipate students’ cultural differences. -1  2  4
39 Designing a syllabus is an important aspect of a teaching course.  3  3  2
40 In a teaching course I should be evaluated on my ability to do a lecture. -2  1  0
41 Decisions on how you will use media are important in designing a teacher training course.  0 -2 -2
42 Instructors of teacher training courses should provide appropriate feedback to students on teaching.  2  3 -1
43 Teaching instruction courses should prepare students for future faculty roles.  1  4 -1
44 In a teaching training course, I should have the opportunity to teach to my peers. -4  2 -4
45 Experiential activities are important in a teaching instruction course.  1  3  0
46 Having a teaching experience is important for a course on teaching.  4  4  1
47 In a teacher training course, I should be able to use technology to collect evaluation data. -2 -3 -2
48 In a teacher training course, I should be able to design evaluations of teaching and learning.  2 -1 -4

 

Eric R. Baltrinic, PhD, LPCC-S, is an assistant professor at the University of Alabama. Eric G. Suddeath, PhD, LPC, is an assistant professor at Mississippi State University – Meridian. Correspondence can be addressed to Eric Baltrinic, Graves Hall, Box 870231, Tuscaloosa, AL 35487, erbaltrinic@ua.edu.

Research Identity Development of Counselor Education Doctoral Students: A Grounded Theory

Dodie Limberg, Therese Newton, Kimberly Nelson, Casey A. Barrio Minton, John T. Super, Jonathan Ohrt

 

We present a grounded theory based on interviews with 11 counselor education doctoral students (CEDS) regarding their research identity development. Findings reflect the process-oriented nature of research identity development and the influence of program design, research content knowledge, experiential learning, and self-efficacy on this process. Based on our findings, we emphasize the importance of mentorship and of faculty conducting their own research as a way to model the research process. Additionally, our theory points to the need for increased funding for CEDS so that they can be immersed in experiential learning, and for research courses tailored to include topics specific to counselor education.

Keywords: grounded theory, research identity development, counselor education doctoral students, mentoring, experiential

 

     Counselor educators’ professional identity consists of five primary roles: counseling, teaching, supervision, research, and leadership and advocacy (Council for Accreditation of Counseling and Related Educational Programs [CACREP], 2015). Counselor education doctoral programs are tasked with fostering an understanding of these roles in future counselor educators (CACREP, 2015). Transitions into the counselor educator role have been described as life-altering and associated with increased levels of stress, self-doubt, and uncertainty (Carlson et al., 2006; Dollarhide et al., 2013; Hughes & Kleist, 2005; Protivnak & Foss, 2009); however, little is known about the specific processes and activities that help programs intentionally cultivate these transitions.

Although the distribution of faculty roles varies depending on the type of position and institution, most academic positions require some level of research or scholarly engagement. Yet only 20% of counselor educators produce the majority of publications in counseling journals, and 19% of counselor educators have not published in the last 6 years (Lambie et al., 2014). Borders and colleagues (2014) found that the majority of application-based research courses in counselor education doctoral programs (e.g., qualitative methodology, quantitative methodology, sampling procedures) were taught by non-counseling faculty members, whereas counseling faculty members were more likely to teach conceptual or theoretical research courses. Further, participants reported that non-counseling faculty led application-based courses because no counseling faculty members were well qualified to instruct them (Borders et al., 2014).

To assist counselor education doctoral students’ (CEDS) transition into the role of emerging scholar, Carlson et al. (2006) recommended that CEDS become active in scholarship as a supplement to required research coursework. Additionally, departmental culture, mentorship, and advisement have been shown to reduce rates of attrition and increase feelings of competency and confidence in CEDS (Carlson et al., 2006; Dollarhide et al., 2013; Protivnak & Foss, 2009). However, Borders et al. (2014) found that faculty from 38 different CACREP-accredited programs reported that just over half of the CEDS from these programs became engaged in research during their first year, with nearly 8% not becoming involved in research activity until their third year. Although these experiences help CEDS develop as doctoral students, it is unclear which activities are instrumental in cultivating a sound research identity (RI). Understanding how RI is cultivated throughout doctoral programs may both enhance research within the counseling profession and inform how counselor educators prepare CEDS for their professional roles.

Research Identity
     Research identity is an ambiguous term within the counseling literature, with definitions that broadly conceptualize the construct in terms of beliefs, attitudes, and efficacy related to scholarly research, along with a conceptualization of one’s own overall professional identity (Jorgensen & Duncan, 2015; Lamar & Helm, 2017; Ponterotto & Grieger, 1999; Reisetter et al., 2011). Ponterotto and Grieger (1999) described RI as how one views oneself as a scholar or researcher, noting that research worldview (i.e., the lens through which researchers view, approach, and manage the research process) impacts how individuals conceptualize and conduct research and interpret results. This perception of research as central to RI is critical to consider, as CEDS commonly enter doctoral studies with limited research experience. Additionally, many CEDS enter training with a strong clinical identity (Dollarhide et al., 2013); coupled with an absence of research experience or exposure, they may perceive research as disconnected and separate from counseling practice (Murray, 2009). Furthermore, universities vary in the support (e.g., graduate assistants, start-up funds, course releases, internal grants) they provide faculty to conduct research.

The process of cultivating a strong RI may be assisted by wedding science and practice (Gelso et al., 2013) and aligning research instruction with values and theories often used in counseling practice (Reisetter et al., 2011). More specifically, Reisetter and colleagues (2011) found that cultivation of a strong RI was aided when CEDS were able to use traditional counseling skills such as openness, reflexive thinking, and attention to cognitive and affective features while working alongside research “participants” rather than conducting studies on research “subjects.” Counseling research is sometimes considered a practice limited to doctoral training and faculty roles, perhaps perpetuating the perception that counseling research and practice are separate and distinct phenomena (Murray, 2009). Mobley and Wester (2007) found that only 30% of practicing clinicians reported reading and integrating research into their work; therefore, early introduction to research may also help diminish the research–practice gap within the counseling profession. The cultivation of a strong RI may begin with exposure to research and scholarly activity at the master’s level (Gibson et al., 2010). More recently, early introduction to research activity and counseling literature at the master’s level has been supported in the 2016 CACREP Standards (2015), specifically through the infusion of current literature into counseling courses (Standard 2.E.) and training in research and program evaluation (Standard 2.F.8.). These standards may narrow the research–practice gap in years to come.

Jorgensen and Duncan (2015) used grounded theory to better understand how RI develops within master’s-level counseling students (n = 12) and clinicians (n = 5). The manner in which participants viewed research, whether as separate from their counselor identity or as fluidly woven throughout, influenced the development of a strong RI. Further, participants’ views and beliefs about research were directly influenced by external factors such as training program expectations, messages received from faculty and supervisors, and academic course requirements. Beginning the process of RI development during master’s-level training may support more advanced RI development for those who pursue doctoral training.

Through photo elicitation and individual interviews, Lamar and Helm (2017) sought to gain a deeper understanding of CEDS’ RI experiences. Their findings highlighted several facets of the internal processes associated with RI development, including inconsistency in research self-efficacy, integration of RI into existing identities, and finding methods of contributing to the greater good through research. The role of external support during the doctoral program was also a contributing factor to RI development, with multiple participants noting the importance of family and friend support in addition to faculty support. Although this study highlighted many facets of RI development, much of the discussion focused on CEDS’ internal processes, rather than the role of specific experiences within their doctoral programs.

Research Training Environment
     Literature is emerging on the specific elements of counselor education doctoral programs that most effectively influence RI, yet there is limited research examining the individual characteristics of CEDS that may support the cultivation of a strong RI. One of the more extensively examined theories of RI cultivation holds that the research training environment, specifically the program faculty, exerts the most influence over the strength of a doctoral student’s RI (Gelso et al., 2013). Gelso et al. (2013) also hypothesized that the research training environment directly affects students’ research attitudes, self-efficacy, and eventual productivity. Additionally, Gelso et al. outlined factors in the research training environment that foster a strong RI, including (a) appropriate and positive faculty modeling of research behaviors and attitudes, (b) positive reinforcement of student scholarly activities, (c) emphasis on research as a social and interpersonal activity, and (d) emphasis on all studies as imperfect and flawed. Emphasis on research as a social and interpersonal activity received the most consistent support in cultivating RI; this element of the research training environment may speak to the positive influence of working on research teams or in mentoring and advising relationships (Gelso et al., 2013).

To date, few studies have addressed the specific doctoral program experiences and personal characteristics of CEDS that may lead to a strong and enduring RI. The purpose of this study was to (a) gain a better understanding of CEDS’ RI development process during their doctoral programs and (b) identify specific experiences that influenced CEDS’ development as researchers. The research questions guiding the investigation were: 1) How do CEDS understand RI? and 2) How do CEDS develop as researchers during their doctoral program?

Method

     We used grounded theory design for our study because of the limited empirical data about how CEDS develop an RI. Grounded theory provides researchers with a framework to generate a theory from the context of a phenomenon and offers a process to develop a model to be used as a theoretical foundation (Charmaz, 2014; Corbin & Strauss, 2008). Prior to starting our investigation, we received IRB approval for this study.

Research Team and Positionality
     The core research team consisted of one Black female in the second year of her doctoral program, one White female in the first year of her doctoral program, and one White female in her third year as an assistant professor. A White male in his sixth year as an assistant professor participated as the internal auditor, and a White male in his third year as a clinical assistant professor participated as the external auditor. Both doctoral students had completed two courses that covered qualitative research design, and all three faculty members had experience utilizing grounded theory. Prior to beginning our work together, we discussed our beliefs and experiences related to RI development. All members of the research team were in training to be or were counselor educators and researchers, and we acknowledged this as part of our positionality. We all agreed that we value research as part of our roles as counselor educators, and we discussed our beliefs that the primary purpose of pursuing a doctoral degree is to gain skills as a researcher rather than an advanced counselor. We acknowledged the strengths that our varying levels of professional experiences provided to our work on this project, and we also recognized the power differential within the research team; thus, we added auditors to help ensure trustworthiness. All members of the core research team addressed their biases and judgments regarding participants’ experiences through bracketing and memoing to ensure that participants’ voices were heard with as much objectivity as possible (Hays & Wood, 2011). We recorded our biases and expectations in a meeting prior to data collection. Furthermore, we continued to discuss assumptions and biases in order to maintain awareness of the influence we may have on data analysis (Charmaz, 2014). 
Our assumptions included (a) the influence of length of time in a program, (b) the impact of mentoring, (c) how participants’ research interests would mirror their mentors’, (d) that beginning students may not be able to articulate or identify the difference between professional identity and RI, (e) that CEDS who want to pursue academia may identify more as researchers than in other roles (i.e., teaching, supervision), and (f) that coursework and previous experience would influence RI. Each step of the data analysis process provided us the opportunity to revisit our biases.

Participants and Procedure
     Individuals who were currently enrolled in CACREP-accredited counselor education and supervision doctoral programs were eligible for participation in the study. We used purposive sampling (Glesne, 2011) to strategically contact eight doctoral program liaisons at CACREP-accredited doctoral programs via email to identify potential participants. The programs were selected to represent all regions and all levels of Carnegie classification. The liaisons all agreed to forward an email that included the purpose of the study and criteria for participation. A total of 11 CEDS responded to the email, met selection criteria, and participated in the study. We determined that 11 participants was an adequate sample size considering data saturation was reached during the data analysis process (Creswell, 2007). Participants represented eight different CACREP-accredited doctoral programs across six states. At the time of the interviews, three participants were in the first year of their program, five were in their second year, and three were in their third year. To prevent identification of participants, we report demographic data in aggregate form. The sample included eight women and three men who ranged in age from 26–36 years (M = 30.2). Six participants self-identified as White (non-Hispanic), three as multiracial, one as Latinx, and one as another identity not specified. All participants held a master’s degree in counseling; they entered their doctoral programs with 0–5 years of post-master’s clinical experience (M = 1.9). Eight participants indicated a desire to pursue a faculty position, two indicated a desire to pursue academia while also continuing clinical work, and one did not indicate a planned career path. Of those who indicated post-doctoral plans, seven participants expected to pursue a faculty role within a research-focused institution and three indicated a preference for a teaching-focused institution. 
All participants had attended and presented at a state or national conference within the past 3 years, with the number of presentations ranging from three to 44 (M = 11.7). Nine participants had submitted manuscripts to peer-reviewed journals and had at least one manuscript published or in press. Finally, four participants had received grant funding.

Data Collection
     We collected data through a demographic questionnaire and semi-structured individual interviews. The demographic questionnaire consisted of nine questions focused on general demographic characteristics (i.e., gender, age, race, and education). Additionally, we asked questions focused on participants’ experiences as researchers (i.e., professional organization affiliations, service, conference presentations, publications, and grant experience); these questions were used to triangulate the data. The semi-structured interviews consisted of eight open-ended questions, developed from the existing literature and asked in sequential order to promote consistency across participants (Heppner et al., 2016). Examples of questions included: 1) How would you describe your research identity? 2) Identify or talk about things that happened during your doctoral program that helped you think of yourself as a researcher. 3) Can you talk about any experiences that have created doubts about adopting the identity of a researcher? The two doctoral students on the research team conducted the interviews via phone. Interviews lasted approximately 45–60 minutes and were audio recorded. After all interviews were conducted, a member of the research team transcribed them.

Data Analysis and Trustworthiness
     We followed grounded theory data analysis procedures outlined by Corbin and Strauss (2008). Prior to data analysis, we recorded biases, read through all of the data, and discussed the coding process to ensure consistency. We followed three steps of coding: 1) open coding, 2) axial coding, and 3) selective coding. Our first step of data analysis was open coding. We read through the data several times and then started to create tentative labels for chunks of data that summarized what we were reading. We recorded examples of participants’ words and established properties of each code. We then coded line-by-line together using the first participant transcript in order to have opportunities to check in and share and compare our open codes. Then we individually coded the remainder of the participants and came back together as a group to discuss and memo. We developed a master list of 184 open codes.

Next, we moved from inductive to deductive analysis using axial coding to identify relationships among the open codes. We identified relationships among the open codes and grouped them into categories. Initially we created a list of 55 axial codes, but after examining the codes further, we made a team decision to collapse them to 19 axial codes that were represented as action-oriented tasks within our theory (see Table 1).

Last, we used selective coding to identify core variables that encompass all of the data. We found that two factors and four subfactors most accurately represented the data (see Figure 1). The auditor was involved in each step of coding and provided feedback throughout. To enhance trustworthiness and manage bias when collecting and analyzing the data, we applied several strategies: (a) we recorded memos about our ideas about the codes and their relationships (i.e., reflexivity; Morrow, 2005); (b) we used investigator triangulation (i.e., multiple investigators analyzing the data independently, then meeting to discuss; Archibald, 2015); (c) we included an internal and an external auditor to evaluate the data (Glesne, 2011; Hays & Wood, 2011); (d) we conducted member checking by sending participants their complete transcript and a summary of the findings, including the visual model (Creswell & Miller, 2000); and (e) we used multiple sources of data (i.e., survey questions on the demographic form; Creswell, 2007) to triangulate the data.

 

Table 1

List of Factors and Subfactors

Factor 1: Research Identity Formation as a Process

unable to articulate what research identity is

linking research identity to their research interests or connecting it to their professional experiences

associating research identity with various methodologies

identifying as a researcher

understanding what a research faculty member does

Factor 2: Value and Interest in Research

desiring to conduct research

aspiring to maintain a degree of research in their future role

making a connection between research and practice and contributing to the counseling field

Subfactor 1: Intentional Program Design

implementing an intentional curriculum

developing a research culture (present and limited)

active faculty mentoring and modeling of research

Subfactor 2: Research Content Knowledge

understanding research design

building awareness of the logistics of a research study

learning statistics

Subfactor 3: Research Experiential Learning

engaging in scholarly activities

conducting independent research

having a graduate research assistantship

Subfactor 4: Research Self-Efficacy

receiving external validation

receiving growth-oriented feedback (both negative and positive)

 

Figure 1

Model of CEDS’ Research Identity Development

 

Results

Data analysis resulted in a grounded theory composed of two main factors that support the overall process of RI development among CEDS: (a) RI formation as a process and (b) value and interest in research. The first factor is the foundation of our theory because it describes RI development as an ongoing, formative process. The second main factor, value and interest in research, provides an interpersonal approach to RI development in which CEDS begin to embrace “researcher” as a part of who they are.

Our theory of CEDS’ RI development is represented visually in Figure 1. Along each axis of the figure, the process of RI development is represented longitudinally, and value and interest in research increase during the process. The four subfactors (i.e., program design, content knowledge, experiential learning, and self-efficacy) contribute to each other but are also independent components that influence both the process and the value and interest. Each subfactor is represented as an upward arrow, reflecting our theory’s premise that each subfactor increases through the formation process. Each subfactor includes components that are specific action-oriented tasks (see Table 1). To make our findings clear, we have organized them by the two research questions that guided our study, describing the two major factors, four subfactors, and action-oriented tasks with direct quotes from the participants.

Research Question 1: How Do CEDS Understand RI?
     Two factors supported this research question: RI formation as a process and value and interest in research.

Factor 1: Research Identity Formation as a Process
     Within this factor we identified five action-oriented tasks: (a) being unable to articulate what research identity is, (b) linking research identity to their research interests or connecting it to their professional experiences, (c) associating research identity with various methodologies, (d) identifying as a researcher, and (e) understanding what a research faculty member does. Participants described RI as a formational process. Participant 10 explained, “I still see myself as a student. . . . I still feel like I have a lot to learn and I am in the process of learning, but I have a really good foundation from the practical experiences I have had [in my doctoral program].” When asked how they would describe RI, many were unable to articulate what RI is, asking for clarification or remarking on how they had not been asked to consider this before. Participants often linked RI to their research interests or professional experiences. For example, Participant 11 said, “in clinical practice, I centered around women and women issues. Feminism has come up as a product of other things being in my PhD program, so with my dissertation, my topic is focused on feminism.” Several participants associated RI with various methodologies, including Participant 7: “I would say you know in terms of research methodology and what not, I strongly align with quantitative research. I am a very quantitative-minded person.” Some described this formational process as the transition to identifying as a researcher:

I actually started a research program in my university, inviting or matching master’s students who were interested in certain research with different research projects that were available. So that was another way of me kind of taking on some of that mentorship role in terms of research. (Participant 9)

As their RI emerged, participants understood what research-oriented faculty members do:

Having faculty talk about their research and their process of research in my doc program has been extremely helpful. They talk about not only what they are working on but also the struggles of their process and so they don’t make it look glamorous all the time. (Participant 5)

Factor 2: Value and Interest in Research
     All participants talked about the value and increased interest in research as they went through their doctoral program. We identified three action-oriented tasks within this factor: (a) desiring to conduct research, (b) aspiring to maintain a degree of research in their future role, and (c) making a connection between research and practice and contributing to the counseling field. Participant 6 described, “Since I have been in the doctoral program, I have a bigger appreciation for the infinite nature of it (research).” Participants spoke about an increased desire to conduct research; for example, “research is one of the most exciting parts of being a doc student, being able to think of a new project and carrying out the steps and being able to almost discover new knowledge” (Participant 1). All participants aspired to maintain a degree of research in future professional roles after completion of their doctoral programs regardless of whether they obtained a faculty role at a teaching-focused or research-focused university. For example, Participant 4 stated: “Even if I go into a teaching university, I have intentions in continuing very strongly my research and keeping that up. I think it is very important and it is something that I like doing.” Additionally, participants started to make the connection between research and practice and contributing to the counseling profession:

I think research is extremely important because that is what clinicians refer to whenever they have questions about how to treat their clients, and so I definitely rely upon research to understand views in the field and I value it myself so that I am more well-rounded as an educator. (Participant 6)

Research Question 2: How Do CEDS Develop Their RI During Their Doctoral Program?
     The following four subfactors provided a description of how CEDS develop RI during their training: intentional program design, research content knowledge, research experiential learning, and research self-efficacy. Each subfactor contains action-oriented tasks.

Subfactor 1: Intentional Program Design
     Participants discussed the impact the design of their doctoral program had on their development as researchers. They talked about three action-oriented tasks: (a) implementing an intentional curriculum, (b) developing a research culture (present and limited), and (c) active faculty mentoring and modeling of research. Participants appreciated the intentional design of the curriculum. For example, Participant 5 described how research was highlighted across courses: “In everything that I have had to do in class, there is some form of needing to produce either a proposal or being a good consumer of research . . . it [the value of research] is very apparent in every course.” Additionally, participants talked about the presence or lack of a research culture. For example, Participant 2 described how “at any given time, I was working on two or three projects,” whereas Participant 7 noted that “gaining research experience is not equally or adequately provided to our doctoral students.” Some participants discussed being assigned a mentor, and others talked about cultivating an organic mentoring relationship through graduate assistantships or collaboration with faculty on topics of interest. However, all participants emphasized the importance of faculty mentoring:

I think definitely doing research with the faculty member has helped quite a bit, especially doing the analysis that I am doing right now with the chair of our program has really helped me see research in a new light, in a new way, and I have been grateful for that. (Participant 1)

The importance of modeling of research was described in terms of faculty actually conducting their own research. For example, Participant 11 described how her professor “was conducting a research study and I was helping her input data and write and analyze the data . . . that really helped me grapple with what research looks like and is it something that I can do.” Participant 10 noted how peers conducting research provided a model:

Having that peer experience (a cohort) of getting involved in research and knowing again that we don’t have to have all of the answers and we will figure it out and this is where we all are, that was also really helpful for me and developing more confidence in my ability to do this [research].

Subfactor 2: Research Content Knowledge
     All participants discussed the importance of building their research content knowledge. Research content knowledge consisted of three action-oriented tasks: (a) understanding research design, (b) building awareness of the logistics of a research study, and (c) learning statistics. Participant 1 described their experience of understanding research design: “I think one of the most important pieces of my research identity is to be well-rounded and [know] all of the techniques in research designs.” Participants also described developing an awareness of the logistics of research study, ranging from getting IRB approval to the challenges of data collection. For example, Participant 9 stated:

Seeing what goes into it and seeing the building blocks of the process and also really getting that chance to really think about the study beforehand and making sure you’re getting all of the stuff to protect your clients, to protecting confidentiality, those kind of things. So I think it is kind of understanding more about the research process and also again what goes into it and what makes the research better.

Participants also explained how learning statistics was important; however, a fear of statistics was a barrier to their learning and development. Participant 2 said, “I thought before I had to be a stats wiz to figure anything out, and I realize now that I just have to understand how to use my resources . . . I don’t have to be some stat wiz to actually do [quantitative research].”

Subfactor 3: Research Experiential Learning
     Research experiential learning describes actual hands-on experiences participants had related to research. Within our theory, three action-oriented tasks emerged from this subfactor: (a) engaging in scholarly activities, (b) conducting independent research, and (c) having a graduate research assistantship. Engaging in scholarly activities included conducting studies, writing for publication, presenting at conferences, and contributing to or writing a grant proposal. Participant 5 described the importance of being engaged in scholarly activities through their graduate assistantship:

I did have a research graduate assistantship where I worked under some faculty and that definitely exposed me to a higher level of research, and being exposed to that higher level of research allowed me to fine tune how I do research. So that was reassuring in some ways and educational.

Participants also described the importance of leading and conducting their own research via dissertation or other experiences during their doctoral program. For example, Participant 9 said:

Starting research projects that were not involving a faculty member I think has also impacted my work a lot, I learned a lot from that process, you know, having to submit [to] an IRB, having to structure the study and figure out what to do, and so again learning from mistakes, learning from experience, and building self-efficacy.

Subfactor 4: Research Self-Efficacy
     The subfactor of research self-efficacy related to the process of participants being confident in identifying themselves and their skills as researchers. We found two action-oriented tasks related to research self-efficacy: (a) receiving external validation and (b) receiving growth-oriented feedback (both negative and positive). Participant 3 described their experience of receiving external validation through sources outside of their doctoral program as helpful in building confidence as a researcher:

I have submitted and have been approved to present at conferences. That has boosted my confidence level to know that they know I am interested in something and I can talk about it . . . that has encouraged me to further pursue research.

Participant 8 explained how receiving growth-oriented feedback on their research supported their own RI development: “People stopped by [my conference presentation] and were interested in what research I was doing. It was cool to talk about it and get some feedback and hear what people think about the research I am doing.”

Discussion

Previous researchers have found RI within counselor education to be an unclear term (Jorgensen & Duncan, 2015; Lamar & Helm, 2017). Although our participants struggled to define RI, they described it as the process of identifying as a researcher, the experiences related to conducting research, and finding value and interest in research. Consistent with previous findings (e.g., Ponterotto & Grieger, 1999), we found that interest in and the value placed on research are important parts of RI. Therefore, our qualitative approach allowed us to operationally define CEDS’ RI as a formative process of identifying as a researcher that is influenced by program design, level of research content knowledge, experiential learning of research, and research self-efficacy.

Our findings emphasize the importance of counselor education and supervision doctoral program design. Similar to previous researchers (e.g., Borders et al., 2019; Carlson et al., 2006; Dollarhide et al., 2013; Protivnak & Foss, 2009), we found that developing a culture of research that includes mentoring and modeling of research is vital to CEDS’ RI development. Lamar and Helm (2017) also noted the valuable role that faculty mentorship and engagement in research activities, in addition to research content knowledge, play in CEDS’ RI development. Although Lamar and Helm noted that RI development may be enhanced through programmatic intentionality toward mentorship and curriculum design, they continually emphasized the importance of CEDS initiating mentoring relationships and taking accountability for their own RI development. We agree that individual initiative and accountability are valuable and important characteristics for CEDS to possess; however, we also acknowledge that student-driven initiation of such relationships may be challenging in program cultures that do not support RI or do not provide equitable access to mentoring and research opportunities.

Consistent with recommendations by Gelso et al. (2013) and Borders et al. (2014), building a strong foundation of research content knowledge (e.g., statistics, design) is an important component of CEDS’ RI development. Unlike Borders and colleagues, our participants did not indicate that who taught their statistics courses made a difference. Rather, participants discussed how experiential learning (i.e., participating on a research team) and conducting research on their own influenced how they built their content knowledge. This finding is similar to Carlson et al.’s (2006) and supports Borders et al.’s findings regarding the critical importance of early research involvement for CEDS.

Implications for Practice
     Our grounded theory provides a clear, action-oriented model that consists of multiple tasks that can be applied in counselor education doctoral programs. Given our findings regarding the importance of experiential learning, we acknowledge the importance of increased funding to ensure CEDS are able to focus on their studies and immerse themselves in research experiences. Additionally, the design of doctoral programs is crucial to how CEDS develop as researchers. Findings highlight the importance of faculty members at all levels being actively involved in their own scholarship and providing students with opportunities to be a part of it. In addition, we recommend intentional attention to mentorship as an explicit program strategy for promoting a culture of research. Findings also support the importance of coursework for providing students with relevant research content knowledge they can use in research and scholarly activities (e.g., study proposal, conceptual manuscript, conference presentation). Additionally, we recommend offering a core of research courses that build upon one another to increase research content knowledge and experiential application. More specifically, this may include a research design course taught by counselor education faculty at the beginning of the program to orient students to the importance of research for practice; such a foundation may help ensure students are primed to apply skills learned in more technical courses. Finally, we suggest that RI development is a process that is never complete; therefore, counselor educators are encouraged to continue to participate in research-focused professional development opportunities (e.g., AARC, ACES Inform, Evidence-Based School Counseling Conference, AERA). More importantly, it should be the charge of these organizations to continue to offer high-quality training on a variety of research designs and advanced statistics.

Implications for Future Research
     Replication or expansion of our study is warranted across settings and developmental levels. Specifically, it would be interesting to examine the RI development of pre-tenured and tenured faculty members to see whether our model holds or what variations exist between these populations. It may also be beneficial to assess variance in RI based on a student’s year in the program (e.g., first year vs. third year). Additionally, further quantitative examination of the relationships between the components of our theory would be valuable for understanding the constructs more thoroughly. Furthermore, pedagogical interventions, such as scholarship of teaching and learning focused on counselor education doctoral-level research courses, may be valuable and merit further evaluation.

Limitations
     Although we engaged in intentional practices to ensure trustworthiness throughout our study, there are limitations that should be considered. Specifically, all of the authors value research and find it to be an important aspect of counselor education, and participants self-selected into the study, which is common practice in most qualitative studies. However, self-selection may bias the findings because of participants’ levels of interest in the topic of research. Additionally, participant selection was based on those who responded to the email and met the criteria; therefore, selection bias introduced by the research team was limited. Furthermore, participants came from a variety of programs and varied in their year in the program (e.g., first year); all the intricacies within each program cannot be accounted for, and they may contribute to how participants view research. Finally, the perceived hierarchy (i.e., faculty and students) on the research team may have influenced the data analysis process if students adjusted their analysis based on faculty input.

Conclusion
     In summary, our study examined CEDS’ experiences that helped build RI during their doctoral program. We interviewed 11 CEDS who were from eight CACREP-accredited doctoral programs from six different states and varied in the year of their program. Our grounded theory reflects the process-oriented nature of RI development and the influence of program design, research content knowledge, experiential learning, and self-efficacy on this process. Based on our findings, we emphasize the importance of mentorship and faculty conducting their own research as ways to model the research process. Additionally, our theory points to the need for increased funding for CEDS in order for them to be immersed in the experiential learning process and research courses being tailored to include topics specific to counselor education and supervision.

 

Conflict of Interest and Funding Disclosure
The authors reported no conflict of interest
or funding contributions for the development
of this manuscript.

 

References

Archibald, M. (2015). Investigator triangulation: A collaborative strategy with potential for mixed methods research. Journal of Mixed Methods Research, 10(3), 228–250.

Association for Assessment and Research in Counseling. (2019). About us. http://aarc-counseling.org/about-us

Borders, L. D., Gonzalez, L. M., Umstead, L. K., & Wester, K. L. (2019). New counselor educators’ scholarly productivity: Supportive and discouraging environments. Counselor Education and Supervision, 58(4), 293–308. https://doi.org/10.1002/ceas.12158

Borders, L. D., Wester, K. L., Fickling, M. J., & Adamson, N. A. (2014). Research training in CACREP-accredited doctoral programs. Counselor Education and Supervision, 53, 145–160.
https://doi.org/10.1002/j.1556-6978.2014.00054.x

Carlson, L. A., Portman, T. A. A., & Bartlett, J. R. (2006). Self-management of career development: Intentionality for counselor educators in training. The Journal of Humanistic Counseling, Education and Development, 45(2), 126–137.
https://doi.org/10.1002/j.2161-1939.2006.tb00012.x

Charmaz, K. (2014). Constructing grounded theory (2nd ed.). SAGE.

Corbin, J., & Strauss, A. (2008). Basics of qualitative research: Techniques and procedures for developing grounded theory (3rd ed.). SAGE.

Council for Accreditation of Counseling and Related Educational Programs. (2015). 2016 CACREP standards. http://www.cacrep.org/wp-content/uploads/2017/08/2016-Standards-with-citations.pdf

Creswell, J. W. (2007). Qualitative inquiry & research design: Choosing among five approaches (2nd ed.). SAGE.

Creswell, J. W., & Miller, D. L. (2000). Determining validity in qualitative inquiry. Theory Into Practice, 39(3), 124–130. https://doi.org/10.1207/s15430421tip3903_2

Dollarhide, C. T., Gibson, D. M., & Moss, J. M. (2013). Professional identity development of counselor education doctoral students. Counselor Education and Supervision, 52(2), 137–150.
https://doi.org/10.1002/j.1556-6978.2013.00034.x

Gelso, C. J., Baumann, E. C., Chui, H. T., & Savela, A. E. (2013). The making of a scientist-psychotherapist: The research training environment and the psychotherapist. Psychotherapy, 50(2), 139–149. https://doi.org/10.1037/a0028257

Gibson, D. M., Dollarhide, C. T., & Moss, J. M. (2010). Professional identity development: A grounded theory of transformational tasks of new counselors. Counselor Education and Supervision, 50(1), 21–38. https://doi.org/10.1002/j.1556-6978.2010.tb00106.x

Glesne, C. (2011). Becoming qualitative researchers: An introduction (4th ed.). Pearson.

Hays, D. G., & Wood, C. (2011). Infusing qualitative traditions in counseling research designs. Journal of Counseling & Development, 89(3), 288–295. https://doi.org/10.1002/j.1556-6678.2011.tb00091.x

Heppner, P. P., Wampold, B. E., Owen, J., Thompson, M. N., & Wang, K. T. (2016). Research design in counseling (4th ed.). Cengage.

Hughes, F. R., & Kleist, D. M. (2005). First-semester experiences of counselor education doctoral students. Counselor Education and Supervision, 45(2), 97–108.
https://doi.org/10.1002/j.1556-6978.2005.tb00133.x

Jorgensen, M. F., & Duncan, K. (2015). A grounded theory of master’s-level counselor research identity. Counselor Education and Supervision, 54(1), 17–31.
https://doi.org/10.1002/j.1556-6978.2015.00067.x

Lamar, M. R., & Helm, H. M. (2017). Understanding the researcher identity development of counselor education and supervision doctoral students. Counselor Education and Supervision, 56(1), 2–18.
https://doi.org/10.1002/ceas.12056

Lambie, G. W., Ascher, D. L., Sivo, S. A., & Hayes, B. G. (2014). Counselor education doctoral program faculty members’ refereed article publications. Journal of Counseling & Development, 92(3), 338–346. https://doi.org/10.1002/j.1556-6676.2014.00161.x

Mobley, K., & Wester, K. L. (2007). Evidence-based practices: Building a bridge between researchers and practitioners. Association for Counselor Education and Supervision.

Morrow, S. L. (2005). Quality and trustworthiness in qualitative research in counseling psychology. Journal of Counseling Psychology, 52(2), 250–260. https://doi.org/10.1037/0022-0167.52.2.250

Murray, C. E. (2009). Diffusion of innovation theory: A bridge for the research-practice gap in
counseling. Journal of Counseling & Development, 87(1), 108–116.
https://doi.org/10.1002/j.1556-6678.2009.tb00556.x

Ponterotto, J. G., & Grieger, I. (1999). Merging qualitative and quantitative perspectives in a research identity. In M. Kopala & L. A. Suzuki (Eds.), Using qualitative methods in psychology (pp. 49–62). SAGE.

Protivnak, J. J., & Foss, L. L. (2009). An exploration of themes that influence the counselor education doctoral student experience. Counselor Education and Supervision, 48(4), 239–256.
https://doi.org/10.1002/j.1556-6978.2009.tb00078.x

Reisetter, M., Korcuska, J. S., Yexley, M., Bonds, D., Nikels, H., & McHenry, W. (2004). Counselor educators and qualitative research: Affirming a research identity. Counselor Education and Supervision, 44(1), 2–16. https://doi.org/10.1002/j.1556-6978.2004.tb01856.x

 

Dodie Limberg, PhD, is an associate professor at the University of South Carolina. Therese Newton, NCC, is an assistant professor at Augusta University. Kimberly Nelson is an assistant professor at Fort Valley State University. Casey A. Barrio Minton, NCC, is a professor at the University of Tennessee, Knoxville. John T. Super, NCC, LMFT, is a clinical assistant professor at the University of Central Florida. Jonathan Ohrt is an associate professor at the University of South Carolina. Correspondence may be addressed to Dodie Limberg, 265 Wardlaw College Main St., Columbia, SC 29201, dlimberg@sc.edu.

Preparing Counselor Education and Supervision Doctoral Students Through an HLT Lens: The Importance of Research and Scholarship

Cian L. Brown, Anthony J. Vajda, David D. Christian

 

We examined the publication trends of faculty in 396 CACREP-accredited counselor education and supervision (CES) programs based on Carnegie classification by exploring 5,250 publications over the last decade in 21 American Counseling Association and American Counseling Association division journals. Using Bayesian statistics, this study expanded upon existing literature by examining differences between institution classifications in total publications. The results of this study can be used to inform the training and preparation of doctoral students in CES programs through a Happenstance Learning Theory framework, specifically regarding their role as scholars and researchers. We present implications and argue for the importance of programs and faculty providing research experience for doctoral students in order to promote career success and satisfaction.

Keywords: doctoral counselor education and supervision, Carnegie classification, Happenstance Learning Theory, publication trends, Bayesian statistics

 

Pursuing a doctoral degree in counselor education and supervision (CES) can be a daunting task. Although there are some levels of certainty, there is also a great degree of uncertainty, especially with regard to recognizing the valuable experiences that will inevitably lead to career opportunities, satisfaction, and success (Baker & Moore, 2015; Del Rio & Mieling, 2012; Dollarhide et al., 2013; Dunn & Kniess, 2019; Hinkle et al., 2014; Zeligman et al., 2015). CES doctoral students enrolled in programs accredited by the Council for Accreditation of Counseling and Related Educational Programs (CACREP) can expect to develop core areas of practice such as counseling, supervision, teaching, leadership and advocacy, and research and scholarship. Happenstance Learning Theory (HLT) provides a framework through which those planned and unplanned experiences—and the degrees of certainty and uncertainty—of doctoral students can be understood. For example, mentorship and career development throughout the course of the doctoral program impact students’ experiences (Kuo et al., 2017; Perera-Diltz & Duba Sauerheber, 2017; Protivnak & Foss, 2009; Purgason et al., 2016; Sackett et al., 2015). Previous research indicates that research and scholarship are highly emphasized factors for impacting career opportunities and success for potential and current CES faculty (Barrio Minton et al., 2008; Newhart et al., 2020). However, the exact requirements for publications and scholarship in CES remain unclear and often vary by institution and program (Davis et al., 2006; Lambie et al., 2014; Ramsey et al., 2002; Shropshire et al., 2015; Wester et al., 2013). In order to better understand potential implications for faculty, programs, and doctoral students looking to enter academia, researchers must continue exploring CES publication and scholarship trends. 

Research and Scholarship in CES
     Research and scholarly activity are a responsibility and priority among faculty in higher education in order to further inform the profession and promote productivity. Thus, “developing doctoral counselor education students’ research and scholarship competencies needs to be supported and nurtured in preparation programs where the faculty and systemic climate may promote these professional skills, dispositions, and behaviors” (Lambie & Vaccaro, 2011, p. 254). Although additional research is warranted, researchers have conducted several studies to better understand the landscape of publication trends among counselor educators and CES programs. To date, prior studies have relied primarily on self-report surveys and have not examined longitudinal trends (Lambie et al., 2014; Newhart et al., 2020; Ramsey et al., 2002).

Ramsey et al. (2002) conducted survey research regarding the scholarly productivity of counselor educators at CACREP-accredited programs at the various levels of Carnegie classification from 1992 to 1995. Of the 104 programs they contacted, only 113 faculty at 47 institutions responded. According to their research, faculty at research and doctorate-granting institutions (the Carnegie classifications at the time) reported spending more time publishing journal articles than faculty at comprehensive institutions, while all CES faculty, regardless of their institution’s Carnegie classification, perceived journal articles as the most important form of scholarship for tenure/promotion decisions. Although Ramsey et al.’s research provides insight into the perceived role publications play for tenure/promotion, relying on self-reported publication patterns means it is impossible to know if their results are consistent with the actual publication trends for faculty of CES programs of various Carnegie classifications.

Lambie et al. (2014) accounted for this limitation by using online research platforms to identify publication trends of faculty at CACREP-accredited doctoral programs. Their research provided important information related to the publication process for counselor educators at doctoral-granting institutions but is limited in that their sample only consisted of 55 programs, whereas as of 2020, there were 85 CACREP-accredited doctoral programs. Lambie et al. (2014) emphasized the role of doctoral students and the necessity of mentorship in scholarly writing and publishing as outlined by CACREP standards. Through modeling and mentorship, counselor educators prepare doctoral students to transition into academic positions. The purpose of their study was to identify potential implications for supporting CES faculty and the career development of doctoral students (i.e., future counselor educators) by looking at the effects of faculty members’ academic rank, gender, Carnegie classification of current institution, and year doctoral degree was conferred on their rate of scholarly productivity over a 6-year time period. Between 2004 and 2009, counselor educators published a mean of 4.43 articles (Mdn = 3.0, SD = 4.77, range = 0–29 published articles) across 321 identified peer-reviewed journals. Lambie et al. (2014) further pointed out the variance in publication among CES faculty. Specifically, 20% of CES faculty published an average of 11.6 articles over the 6-year period, while 62% published an average of 3.02, and 16.1% did not publish any articles during this span of time. Their results also revealed a significant difference between the publication rates based on an institution’s Carnegie classification, where faculty at very high (R1) and high (R2) research activity institutions published significantly more than those at doctoral/professional universities. 
In addition, Lambie et al.’s (2014) finding that CES faculty who had more recently completed their doctoral degrees had the highest publication rates indicated that programs are better preparing doctoral students to produce scholarly work. Their findings also implied that doctoral preparation programs can promote career readiness by implementing research competencies, such as scholarly writing and research mentorship, early in doctoral programs.

Newhart et al. (2020) similarly assessed publication rates among 257 counselor educators using a self-report survey across CACREP-accredited programs at various Carnegie classifications and academic ranks. Their stated purpose was to expand the current literature on CES publication rates using self-reported data to include non-tenured faculty and master’s-level–only programs. Their survey yielded a 17% response rate after they randomly selected 1,500 faculty members to participate. Respondents reported an average of 14.24 articles published or in press at the time of the survey, with an average of 1.69 publications per year. Carnegie classification appeared to be a significant predictor of publication rates across institutions, with faculty at more research-focused institutions publishing more often than faculty with lower research expectations. Similar to previous studies, results related to Carnegie classification appeared to underscore the emphasis certain programs place on publication standards, which can inform doctoral students’ decisions regarding which environments might be more suitable and conducive to their aspirations upon entering academia. Although timely, Newhart et al.’s study has several limitations. There was no apparent time frame, leaving one to assume the reported information reflected participants’ total career publications, which could potentially skew the data. The 17% response rate was another potential limitation, as it yielded responses from only 257 counselor educators with varying levels of experience. And as they highlighted, the use of self-report data may introduce response bias and risk inflating reported results because of social desirability.

Although previous researchers have asserted that doctoral-granting institutions are more likely to emphasize publishing (Barrio Minton et al., 2008; Lambie et al., 2014; Ramsey et al., 2002), research has yet to establish this as fact by comparing actual publication trends across a variety of institution types. Barrio Minton et al. (2008) began to address the differences when they called for future research to “examine publication trends and histories of counselor educators who are employed in programs in universities that are likely to place a high emphasis on publication” (p. 135) but failed to define, with certainty, the types of universities that emphasize publications. Despite the call for a revised definition of scholarship 17 years ago (Ramsey et al., 2002), scholarship is still heavily defined by number of publications (Whitaker, 2018). These prior studies highlight the need for observational data gathered over a longitudinal period to verify self-reports and increase understanding of publication trends relevant to the career development and mentorship of CES doctoral students.

Preparing CES Doctoral Students
     Although the exact extent is unknown, research and scholarship are clearly important factors for employability as CES faculty as well as career satisfaction and success (Lambie et al., 2014; Sackett et al., 2015). Preparing CES doctoral students to be employable, happy, and successful in academia requires (a) understanding the extent to which research is required at various institutions and (b) ensuring they are exposed to the necessary curricula related to research (Lambie et al., 2008, 2014; Lambie & Vaccaro, 2011; Sackett et al., 2015). Although we aim to clarify research expectations, it is important to first establish a framework to guide CES programs and faculty. HLT is one such framework that emphasizes planned and unplanned experiences that influence career direction (Krumboltz, 2009). Using HLT, CES faculty and programs can provide better learning environments and mentorship experiences through leveraging planned and unplanned activities. From this lens, faculty encourage students to engage in planned experiences aligned with their career aspirations while also being open to potentially formative unplanned experiences, especially related to research and scholarship.

Happenstance Learning Theory (HLT)
     According to HLT, career development is the result of numerous planned and unplanned experiences over the course of life in which people develop skills, interests, knowledge, beliefs, preferences, sensitivities, emotions, and behaviors guiding them toward a career (Krumboltz, 2009). The process of career development from an HLT perspective involves individuals “engaging in a variety of interesting and beneficial activities, ascertaining their reactions, remaining alert to alternative opportunities, and learning skills for succeeding in each new activity” (Krumboltz, 2009, p. 135). From an HLT stance, individuals must take five specific actions toward career development (Krumboltz, 2009). Initially, they must acknowledge anxiety toward career choice as normal and understand the career development process as a long-term endeavor influenced by both planned and unplanned experiences. Next, it is important to allow identified concerns to be a starting point for further exploration. Third, they need to explore how past experiences with unplanned events have influenced current career interests and behaviors. Fourth, they should reframe unplanned experiences as opportunities for growth and learn to recognize these opportunities in their everyday lives. Finally, it is important that individuals remove or overcome any and all blocks to career-related action.

In an endeavor to explain career development and choice, HLT points to various planned and unplanned experiences throughout the life span (Krumboltz, 2009). Planned experiences include events individuals initiate such as pursuing a doctoral degree, choosing a particular CES program, identifying a focus of study, selecting courses as part of a program of study, and approaching specific faculty for advising and mentorship in an effort to achieve career aspirations. Unplanned experiences include events that individuals have no control over that often lead to revised career aspirations such as influential course instructors; type and quality of advising and mentoring; and various opportunities to teach, present, and publish with program faculty. Even though “the interaction of planned and unplanned actions in response to self-initiated and circumstantial situations is so complex that the consequences are virtually unpredictable and can best be labeled as happenstance” (Krumboltz, 2009, p. 136), unplanned experiences are particularly important to HLT. In fact, it is important that individuals take advantage of these unplanned experiences as opportunities to grow—something they are less likely to do if their predetermined career aspirations are too rigid (Gysbers et al., 2014).

For CES doctoral students, HLT is particularly pertinent in that although many enter programs with clear career aspirations, these career goals often remain fluid, changing and developing through planned and unplanned experiences throughout the training process. Although this drive to reach predetermined goals can serve as motivation, individuals who have made firm career decisions tend to focus on experiences that affirm their choices and overlook or fail to engage in unplanned experiences not related to their career goals (Gysbers et al., 2014). Thus, it is important that CES faculty not only encourage doctoral students to be open minded about potential career outcomes, but also provide opportunities for doctoral students to engage in formative unplanned experiences.

Although CACREP provides specific mandatory standards that must be accounted for, it allows programs to exercise flexibility and creativity in how they address them (CACREP, 2015; Goodrich et al., 2011). Students can expect a specific knowledge base but also have opportunities to pave their own career path because of the uniqueness of each CES program and other factors such as pre-enrollment career aspirations, unplanned life events, challenges or successes in courses, program emphasis, and mentorship. Both planned and unplanned experiences involve facing challenges, leading to developmental and transformational tasks that influence the integration of multiple identities, self-efficacy, and acceptance of responsibility as a leader in the counseling profession (Dollarhide et al., 2013). From an HLT framework, these transformational tasks are particularly significant, as they can be the catalyst for revised career aspirations or the reinforcement of previously determined career goals. This highlights the importance of advising and mentoring, and the need for ample opportunities for students to engage in diverse experiences so that these transformations can occur.

Planned Experiences
     Doctoral students in CACREP-accredited CES programs can expect planned experiences relating to coursework that integrates theories relevant to counseling, the skills and modalities of clinical supervision, pedagogy and teaching methods related to educating counselors, research designs and professional writing, and leadership skills. Although CES programs are designed to provide planned experiences related to all of the roles of a counselor educator (CACREP, 2015), the emphasis placed on each varies depending on the program and institution. CES faculty prepare doctoral students for a future in teaching, research, and service, often through experiences co-instructing counselors-in-training, scholarly work, and leadership roles advocating for the profession (Protivnak & Foss, 2009; Sears & Davis, 2003).

CACREP (2015) standards require that doctoral students learn research design, data analysis, program evaluation, and instrument design; however, there are not strict requirements or guidelines indicating what scholarly activities must be experienced before students graduate. Research experience is considered important because future CES faculty will likely be expected to engage in scholarship of some form, including writing journal articles, presenting at conferences, conducting program evaluations, and preparing other scholarly works such as grants and training manuals. However, after finding that less than a third of CES doctoral students had published a scholarly article, Lambie and Vaccaro (2011) concluded that CES programs must provide more planned experiences for student research engagement. Finally, because doctoral students inevitably learn valuable lessons in research and scholarship through the planned experience of completing a dissertation, CES programs must provide adequate training for students to successfully complete this milestone (Lambie et al., 2008).

Unplanned Experiences
     CES doctoral students also have various opportunities for unplanned learning experiences with research and scholarship through coursework and collaboration with peers and faculty. Unplanned experiences that appear to be particularly important for CES doctoral students often occur through mentoring (Kuo et al., 2017; Perera-Diltz & Duba Sauerheber, 2017; Protivnak & Foss, 2009; Purgason et al., 2016; Sackett et al., 2015). Mentorship experiences include relationships with advisors and dissertation chairs, work beyond the classroom setting with faculty mentors, and relationships with counselor educators from other universities or institutions. Kahn (2001) posited that research-specific mentoring and collaborative research projects can create an environment conducive for CES doctoral students to develop research skills by observing faculty.

Several studies have highlighted the importance of mentorship in the career development of CES students (Casto et al., 2005; Cusworth, 2001; Hoskins & Goldberg, 2005; Nelson et al., 2006; Protivnak & Foss, 2009). Protivnak and Foss (2009) interviewed 141 current CES doctoral students, who stressed the helpfulness of mentorship while navigating their doctoral programs but also discussed the consequences of a lack of mentorship and support. Participants who received mentorship stated that it helped with balance and guidance in the program, while participants without adequate mentorship shared feelings of frustration and being on their own. Further, Love et al. (2007) found that research mentoring was a predictor of whether CES doctoral students became involved in research projects.

CES Program Characteristics Influencing Engagement in Research Experiences
     All of these research experiences, both planned and unplanned, will vary across programs and depend on a multitude of factors, one of which might be the Carnegie classification of the institution where the program is housed. Carnegie classification divides colleges and universities that house CES programs into several categories, including the following: doctoral universities, master’s colleges and universities, baccalaureate colleges, and special focus institutions. Doctoral universities are further classified based on a measure of research activity into one of three levels: R1, for very high research activity; R2, for high research activity; and D/PU (doctoral/professional universities), for moderate research activity. If previous literature indicating that doctoral-granting institutions are more likely to emphasize publishing and produce more publications (Barrio Minton et al., 2008; Lambie et al., 2014; Ramsey et al., 2002) is accurate, then this might impact doctoral students’ career aspirations as well as exposure to and engagement in research-related experiences.

According to HLT, CES doctoral students’ career aspirations can influence how they engage in certain planned experiences and if they choose to engage in certain unplanned experiences (Krumboltz, 2009). For example, a student focused on a career at an institution with less emphasis on research (e.g., master’s university) may put forth minimal effort in research courses and opt out of any unplanned experiences related to scholarly activity, such as accepting an invitation to join a research team. Also, it is possible that CES doctoral students at R1, R2, and D/PU institutions might have varying exposure to opportunities to engage in unplanned experiences related to research and scholarship if faculty at those institutions are spending less time in the role of researcher. For instance, Goodrich et al. (2011) found that in a survey of 16 CACREP-accredited counseling programs, only six programs had established research teams and only four programs required students to submit scholarly work to a professional journal before they could graduate.

Purpose

This study was designed to explore the current trends in publication rates of faculty in CES programs over a 10-year time period. Using a Bayesian analysis, we examined the following questions:

  • Research Question 1: What are the differences among CES programs’ faculty publication rates based on all Carnegie classifications?

      o Research Question 1.a: Are there differences among master’s-level programs based on Carnegie classifications in terms of faculty publication rates?

  • Research Question 2: Does observable data support prior literature findings regarding publication trends among CES programs at institutions with different levels of Carnegie classification?

Bayesian analysis is appropriate when “one can incorporate (un)certainty about a parameter and update his knowledge through the prior distribution” of probabilities (Depaoli & van de Schoot, 2017, p. 4). The inferences made by Newhart et al. (2020) were used as prior information to inform the collected observational data for this study. Newhart et al. used self-reported survey data to run a Poisson regression with the same variables proposed for this study. However, their data focused primarily on the differences among research institutions and combined non–research-designated institutions (i.e., master’s universities) into a single category. Newhart et al.’s output helped inform the limitations of the observational data collection procedures, such as error in using database search engines. Additionally, this is the first known study to examine observational data of publication trends for CES programs, which might provide an under- or overestimation when compared to self-reported data. Alternatively, the use of self-reported data has often been stated as a limitation because of participant bias, which might inflate the outcomes. Therefore, it would be helpful to compare inferences from both sets of data. An initial comparison of parameter estimates between both studies will inform the trends of publications between Carnegie classifications.

For this study, and similar to Newhart et al. (2020), Carnegie classification operated as the predictor variable and number of publications as the outcome variable. The comparison and Bayesian hypothesis testing serve three purposes: verifying self-reported publication trends across Carnegie classifications using parameter estimates, providing further information on scholarly productivity over a 10-year period, and offering insight into publication trends among non–PhD-level institutions using the posterior distributions.

Method

In order to answer the research questions, a list of all CACREP-accredited counseling programs in the United States was compiled by the principal investigator using the CACREP website directory. Next, a list of peer-reviewed journals affiliated with the American Counseling Association (ACA) was created. All journals were included, regardless of whether they published during the entire 10-year time period. Database search engines (e.g., EBSCO Academic Search Complete) and publisher websites were used as the primary tools to locate all articles published in every identified journal during that time period. After articles were located, a database was created in which authors’ associated institutions at the time of publication were indexed. At least one author for each publication had to be associated with a CACREP-accredited counseling program, an inclusion criterion for this study. Thus, if an article was authored by two faculty at two different CACREP-accredited programs, both institutions received credit for that publication. Finally, each institution’s most recent Carnegie classification was identified. A total of 5,250 publications authored by faculty at 396 institutions with CACREP-accredited programs were included in the analysis. The total number of publications accounts for articles with multiple authors from different institutions, with potentially different Carnegie classifications, being counted more than once. For example, an article authored by two faculty, one from an R1 institution and one from an M1, was counted as two publications. The rationale was that each institution listed on any given article should receive credit for the publication. R1 programs accounted for 37.68% (M = 33.53) and R2 programs accounted for 31.37% (M = 25.34) of publications in ACA-affiliated journals (see Table 1 for a detailed breakdown of institutions and publications in ACA-affiliated journals).
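The crediting rule described above can be sketched as follows. The institution names and article records are hypothetical, for illustration only; each distinct institution on an article receives one credit, so multi-institution articles contribute more than one count to the total.

```python
from collections import Counter

# Hypothetical author-affiliation records for three articles;
# institution names are invented for illustration only.
articles = [
    ["Univ A (R1)", "Univ B (M1)"],  # two institutions -> two credits
    ["Univ A (R1)"],
    ["Univ B (M1)", "Univ B (M1)"],  # same institution twice -> one credit
]

credits = Counter()
for institutions in articles:
    # Each distinct institution on an article receives one credit.
    for inst in set(institutions):
        credits[inst] += 1

total_publications = sum(credits.values())
print(credits["Univ A (R1)"], credits["Univ B (M1)"], total_publications)  # 2 2 4
```

Under this rule, the total (analogous to the study's 5,250) is a count of institution-level credits, not of unique articles.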

 

Table 1

Breakdown of Observed Data of Total Publications by Carnegie Classification

Total Publications
Carnegie Classification # of Programs % # of Pubs % Mean SD Var
R1—Very High Research Activity       59 14.86 1,978  37.68 33.53 30.53 932.05
R2—High Research Activity       65 16.37 1,647  31.37 25.34 28.29 800.45
D/PU—Doctoral/Professional Universities       71 17.88 652  12.42 9.18 13.36 178.52
M1—Larger Master’s Program     116 29.47 802  15.28 6.91 9.71 93.90
M2—Medium Master’s Program       43 10.83 105    2.00 2.44 3.14 9.87
M3—Smaller Master’s Program       13   3.27 20    0.38 1.54 2.30 5.27
Bacc.—Baccalaureate Colleges         9   2.27 9    0.17 1.00 1.32 1.75
SF—Special Focus Institutions       20   5.04 37    0.70 1.85 2.56 6.56
Total     396 5,250 13.26 21.32

 Note. This table provides the descriptive statistics for programs and publications by Carnegie classification. Only CACREP-accredited programs were included.

Data Analysis
     Following the data collection, the observed data was entered into and analyzed with the SAS statistical software system, using the Markov chain Monte Carlo procedure with the Metropolis–Hastings algorithm to generate the estimated models. A Bayesian theoretical approach was taken, with prior information elicited from Newhart et al.’s (2020) publication, “Factors Influencing Publication Rates Among Counselor Educators.” A Poisson regression analysis was determined to be appropriate for examining the relationship between a predictor variable and an outcome variable expressed as a frequency count. One assumption of Poisson models is that the mean and the variance are equal (homogeneity of conditional means). If this assumption is violated, a negative binomial model can account for a large difference between the variance and mean by estimating a dispersion parameter (Agresti, 2007). The assumption of equal conditional mean and variance was violated, indicating overdispersion; that is, the data had greater variability than the Poisson model allows, with the variance exceeding the mean. The following negative binomial model was therefore used (where D is the dispersion parameter): E(Y) = μ, Var(Y) = μ + Dμ².
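As a minimal illustration of the overdispersion check described above, the following sketch compares a sample's mean and variance and derives a method-of-moments estimate of the dispersion parameter D from Var(Y) = μ + Dμ². The counts are invented, not the study's data.

```python
import statistics

# Illustrative publication counts for a hypothetical set of programs.
counts = [0, 1, 1, 2, 3, 5, 8, 13, 21, 34]

mu = statistics.mean(counts)       # sample mean
var = statistics.variance(counts)  # sample variance (n - 1 denominator)

# A Poisson model assumes Var(Y) = mu. When var greatly exceeds mu, the
# data are overdispersed, and a negative binomial model with
# Var(Y) = mu + D * mu**2 is more appropriate. A method-of-moments
# estimate of D follows by solving that identity:
overdispersed = var > mu
D = (var - mu) / mu ** 2

print(overdispersed)   # True
print(round(D, 3))     # 1.458
```

In the study itself, D was estimated within the Bayesian model (the Dispersion row of Table 3) rather than by this moment-based shortcut.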

     Next, the self-reported data from Newhart et al. (2020) was used to determine prior information distinguishing the differences between Carnegie classifications and publication rates, using R1 institutions as the baseline (see Table 2). The natural logarithm of each ratio was used as the prior mean of the corresponding distribution.
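This conversion can be reproduced directly from Table 2: taking the natural logarithm of each self-reported ratio yields the prior means that appear in Table 3, because the negative binomial model is linear on the log scale.

```python
import math

# Self-reported mean-publication ratios relative to R1 (Table 2).
ratios = {"R2": 0.766, "D/PU": 0.516, "Masters": 0.309}

# Prior mean for each classification's coefficient = log of its ratio.
prior_means = {k: round(math.log(v), 2) for k, v in ratios.items()}
print(prior_means)  # {'R2': -0.27, 'D/PU': -0.66, 'Masters': -1.17}
```

These values match the prior means N(-0.27, 10), N(-0.66, 10), and N(-1.17, 10) listed for R2, D/PU, and the master's-level rows in Table 3.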

 

Table 2

Newhart et al. (2020) Self-Reported Total Publications by Carnegie Classification

Total Publications
Carnegie Classification M SD Ratio
R1: Doctoral Universities – Very High 25.78 26.15
R2: Doctoral Universities – High 19.74 18.53 0.766
D/PU: Doctoral/Professional Universities – Moderate 13.31 14.78 0.516
Master’s Universities 7.98 7.46 0.309

Note. Ratio reflects multiplicative factor in relation to baseline R1.

  

Results

A negative binomial regression was used to (a) determine the posterior publishing rates of non-R1 programs relative to programs at R1 institutions and (b) examine whether Newhart et al.’s (2020) self-reported data was plausible given the observed data. An initial model used 10,000 burn-in iterations with a total of 100,000 iterations; however, this model was inefficient, as indicated by a low effective sample size and efficiency. Next, Gibbs sampling with the Jeffreys prior was used and produced similar posterior parameter estimates and increased efficiency, indicating robustness; however, the effective sample size did not increase. Therefore, a sum-to-zero constraint was used to re-parametrize the model by centering the parameters. This resulted in coefficients representing group deviations from the grand mean, whereas in the prior models the coefficients represented group deviations from the reference group. The following results are reported using the WAMBS checklist procedure for reporting Bayesian statistical results (Depaoli & van de Schoot, 2017). The WAMBS checklist consists of four stages and 10 points to appropriately understand, interpret, and provide results of Bayesian statistics. The following paragraph outlines these 10 steps as applied to the current data.

First, normally distributed, non-informative priors were used (see Table 3). Second, model convergence was assessed by visually inspecting trace plots and using Geweke’s statistic. A visual inspection of the posterior parameter trace plots provided evidence of chain convergence, with each chain centering around a value with few fluctuations, displaying a “fuzzy” pattern. Geweke’s statistic tests convergence by comparing the mean of the first 10% of a chain to the mean of the last 50%. According to the Geweke’s statistic results, all values were within the range of ±1.96 and retained the null hypothesis with p > .05 (nonsignificant), indicating convergence. Convergence remained after doubling the number of iterations. Third, the chains did not appear to shift and converge at another location after doubling the iterations, with parameters centering around the previous estimates. Fourth, each parameter histogram was reviewed and determined to have adequate representation of the posterior distribution. Fifth, after determining the model had converged, the chains were inspected for dependency as evidenced by the autocorrelations. The model appeared to have low autocorrelations, with each chain approaching and reaching zero between 10 and 20 lags, indicating low chain dependency. In addition to low autocorrelations, the effective sample size indicated the model was robust with information, as evidenced by effective sample sizes greater than 10,000 and positive efficiency. Prior to interpreting the output, we compared the model using the informative prior information, which slightly pulled the posterior mean estimates closer to the prior information; however, the results were effectively the same. Sixth, the posterior distribution appeared to make substantive sense, as evidenced by smooth posterior density plots with reasonable standard deviations within the scale of the original parameters.
Steps 7 through 9 were skipped in cases where only non-informative priors were used. Lastly, Step 10, the Bayesian way of interpreting and reporting results, was followed.
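A simplified version of the Geweke diagnostic described above can be sketched as follows. The chain is synthetic, and this stripped-down version ignores the autocorrelation correction that full implementations apply to the standard error via spectral density estimates.

```python
import math
import statistics

def geweke_z(chain, first=0.10, last=0.50):
    """Z-score comparing the mean of the first 10% of a chain
    to the mean of the last 50%; |z| < 1.96 is consistent with
    convergence at the .05 level."""
    n = len(chain)
    a = chain[: int(first * n)]
    b = chain[int((1 - last) * n):]
    # Standard error of the difference in segment means (ignoring
    # autocorrelation, which full implementations correct for).
    se = math.sqrt(statistics.variance(a) / len(a)
                   + statistics.variance(b) / len(b))
    return (statistics.mean(a) - statistics.mean(b)) / se

# A toy "chain" oscillating around 1.6 (roughly the study's intercept).
chain = [1.6 + 0.05 * (-1) ** i for i in range(1000)]
z = geweke_z(chain)
print(abs(z) < 1.96)  # True: consistent with convergence
```

A chain whose early segment drifted away from its late segment would instead produce |z| > 1.96, flagging non-convergence.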

To answer the first research question, the posterior summary means, which represent group deviations from the grand mean (intercept), were used to determine the differences of each Carnegie classification in comparison to R1, yielding the parameter estimates B (see Table 3).

 

Table 3

Posterior Summaries

Parameter Priors M HPD Interval B exp(B) 1/exp(B)
Intercept N(0, 10) 1.612 [1.4251, 1.8018] 3.521
R1 N(0, 10) 1.909 [1.6013, 2.2142]
R2 N(-0.27, 10) 1.628 [1.3315, 1.9199] -0.28 0.756 1.323
D/PU N(-0.66, 10) 0.612 [0.3142, 0.9038] -1.295 0.274 3.650
M1 N(-1.17, 10) 0.317 [0.0617, 0.5824] -1.587 0.206 4.854
M2 N(-1.17, 10) -0.712 [-1.0935, -0.3411] -2.619 0.073 13.699
M3 N(-1.17, 10) -1.164 [-1.8311, -0.4917] -3.082 0.046 21.740
Bacc N(2, 10) -1.609 [-2.4932, -0.7119] -3.512 0.029 34.483
SF N(2, 10) -0.982 [-1.5083, -0.4479] -2.897 0.055 18.182
Dispersion N(1, 1) 0.885 [0.7506, 1.0275] 1.118

Note. exp(B) reflects the publication rate relative to R1 (i.e., how many times fewer publications); 1/exp(B) reflects how many times more publications R1 produced relative to the given classification.

 

The results of the negative binomial regression indicated that faculty at R1 programs published at a rate of 1.32 times that of faculty at R2 programs. Faculty at R1 programs published 3.65 times more than faculty at D/PU programs and 4.85 times more than faculty at M1 programs. Figure 1 provides a visual density plot of the posterior summaries of the group deviations from the intercept. A visual analysis indicated that faculty publication rates fell into three groupings based on the observed data in the estimated model: R1 and R2 programs, D/PU and M1 programs, and the remainder of the program types.
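The rate ratios reported above follow directly from exponentiating the group-deviation coefficients B in Table 3; small discrepancies from the table's exp(B) and 1/exp(B) columns (e.g., for M1) reflect rounding of B.

```python
import math

# Group-deviation coefficients B relative to R1 (Table 3); negative
# values indicate lower publication rates than R1.
B = {"R2": -0.28, "D/PU": -1.295, "M1": -1.587}

for group, b in B.items():
    rate_ratio = math.exp(b)      # group's rate as a fraction of R1's
    r1_multiple = 1 / rate_ratio  # how many times more R1 publishes
    print(group, round(rate_ratio, 3), round(r1_multiple, 2))
```

For example, exp(-0.28) ≈ 0.756, so R1 faculty published at about 1/0.756 ≈ 1.32 times the R2 rate.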

  

Figure 1

Posterior Density Plot

Note. Posterior density plot for differences from the grand mean.

 

To answer the second research question, a series of Bayesian hypothesis tests was conducted. Hypothesis tests were conducted only for doctoral- and master’s-level programs because Newhart et al. (2020) provided information only for those program types. The observed data yielded higher means and standard deviations for each Carnegie classification compared to Newhart et al. Therefore, instead of comparing the differences among means, it appeared more appropriate to assess the differences between self-reported and observed data ratios regarding publication productivity by Carnegie classification. Newhart et al.’s self-reported data indicated that, relative to faculty at R1 programs, faculty at R2 programs published at 0.77 times the rate, faculty at D/PU programs at 0.52 times the rate, and faculty at master’s-level programs at 0.31 times the rate (see Table 2). These ratios were converted to logarithmic form for use as the prior means. After determining the differences in the observed data from the previous question, the prior means were used to compare the plausibility of Newhart et al.’s data with the observed data.

Surprisingly, the self-reported ratio between R1 and R2 programs was similar to the observed ratio; the hypothesis test yielded a 52.35% probability of Newhart et al.’s (2020) self-reported ratio between R2 and R1 programs falling below the posterior estimate and a 47.65% probability of it falling above (see Figure 2). However, the remainder of the self-reported data fell above the posterior estimates with 99%–100% probability. Therefore, the plausibility of Newhart et al.’s findings regarding the ratio between R1 and R2 programs was 100%; however, the plausibility for all other program comparisons was 0%–1%. It appears Newhart et al.’s self-reported data potentially underestimated the differences in publication ratios between R1 programs and programs beyond R2 when compared to the observed data.
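The plausibility probabilities described above can be illustrated with a normal approximation to a posterior. The posterior standard deviation below is an invented stand-in, not the study's value; only the posterior mean (-0.28 for R2) and the self-reported log ratio come from Tables 2 and 3.

```python
import math

def normal_cdf(x, mu, sigma):
    """P(X <= x) for X ~ Normal(mu, sigma)."""
    return 0.5 * (1 + math.erf((x - mu) / (sigma * math.sqrt(2))))

# Posterior for the R2 group deviation: mean from Table 3; the sd is
# an illustrative stand-in, not the study's exact posterior.
post_mean, post_sd = -0.28, 0.15
prior_value = math.log(0.766)  # Newhart et al.'s self-reported R2 ratio

# Probability that the self-reported value falls below the posterior
# estimate: values near 50% indicate the self-report is plausible;
# values near 0% or 100% place it in a posterior tail.
p_below = normal_cdf(prior_value, post_mean, post_sd)
print(round(p_below, 2))  # 0.54
```

A self-reported ratio sitting near the middle of the posterior (as here, roughly a 54/46 split under these illustrative values) mirrors the study's 52.35%/47.65% finding for R2; ratios with 99%–100% of the posterior mass on one side would be judged implausible.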

 

Figure 2

Plausibility of Newhart et al. (2020) Data

Note. R2, D/PU, M1, M2, and M3 program estimations are displayed in relation to R1 programs.

 

Discussion

In this study, we examined the actual publication trends of CES faculty by reviewing all articles published in ACA-affiliated, peer-reviewed journals from 2008 to 2018. The results of this study support the perceived relationship between higher Carnegie classification and increased scholarly productivity (Barrio Minton et al., 2008) and confirm previous self-reported research findings (Ramsey et al., 2002) that faculty at higher-ranked institutions spend more time publishing. A review of the results and previous literature indicates several unique findings relevant to faculty, programs, and doctoral students. The differences between Carnegie classifications show that although CES faculty at R1 universities publish at higher rates, as anticipated, CES faculty at R2 and R1 universities are publishing at similar rates in ACA journals. CES faculty in programs at R1 and R2 institutions produce the highest number of publications, accounting for 69.1% of publications from 2008 to 2018, suggesting these programs will have the highest demands for research activity. Interestingly, although they are publishing less frequently than R1 and R2 programs, publication rates appear to be similar for CES faculty in programs at D/PU and M1 institutions. Together they account for 27.7% of publications over the past decade, a considerable amount of research in the counseling profession. Counseling programs at M2, M3, Baccalaureate, and Special Focus institutions have the lowest publication outcomes, accounting for 3.3% of publications over the past decade, a finding consistent with previous literature (Barrio Minton et al., 2008; Ramsey et al., 2002) and the method by which Carnegie classifications are attained.

The fact that CES faculty at M1 institutions, which supposedly do not place a high emphasis on research, are publishing at a rate similar to faculty at D/PU institutions is interesting. It is possible that CES faculty at M1 institutions are spending more time engaged in scholarly activity because of the perceived importance of publishing for tenure and promotion (Barrio Minton et al., 2008; Ramsey et al., 2002; Ray et al., 2011; Whitaker, 2018). Applicants for tenure-track positions, as well as tenure-track CES faculty already at these programs, might expect to experience pressure to publish at a level similar to that of D/PUs for a variety of reasons. Faculty at M1 institutions might feel motivated to increase their publications as their institution attempts to change classification, which could bring increased external funding, interest from high-quality faculty, and greater recognition (Olson, 2018). Alternatively, CES faculty working at M1 or D/PU institutions who plan to apply to programs at institutions with high or very high research activity might feel pressure to publish more frequently in order to advance their careers as desired (Lambie et al., 2014). Salary may also influence CES faculty considering institutional moves, with annual salaries at R1 institutions averaging $17,000 more than at R2 institutions, and annual salaries at R2 institutions averaging $9,000 more than at D/PU and $7,500 more than at M1 institutions (Chronicle of Higher Education, 2018).

Another unique finding is that the observed differences between R1 and R2 CES faculty publication rates match Newhart et al.’s (2020) findings, providing further evidence that CES faculty at R1-classified institutions as a whole are publishing in ACA-affiliated journals at a rate 1.32 times higher than CES faculty at R2 institutions. It also appears that Newhart et al.’s findings underestimate differences among programs and do not account for the differences among master’s-level programs, as evidenced by the higher rate of publication by CES faculty at M1 programs.

The results of the current study highlight the importance of an emphasis on research and scholarship in CES doctoral programs in order to prepare future CES faculty to be successful in their roles. As doctoral students begin their job search, students seeking faculty positions face the uncertainty of not knowing where positions will be available and at what types of institutions. Although some doctoral students may have a clear idea of the type of institution where they wish to work, it is not guaranteed they will secure their desired position. In a profession that is growing quickly and becoming increasingly competitive, it is essential that CES programs support doctoral students in honing their research skills for career success and to promote job satisfaction. In programs where CES faculty are expected to publish at higher rates, doctoral students with inadequate preparation are at risk of becoming unsatisfied in their positions, which can result in decreased productivity and retention (Wong & Heng, 2009). Therefore, a focus on research and scholarship in CES programs not only helps in the career development of doctoral students but promotes retention of faculty in the long term (Sangganjanavanich & Balkin, 2013).

Limitations

Limitations of this study include issues regarding sample and journal selection. Regarding journal selection, because previous research indicates that counselor educators most often publish in counseling-related journals (Barrio Minton et al., 2008), we chose to limit our study to ACA division journals. However, many counselor educators publish in non-ACA journals, such as Professional School Counseling and the International Journal of Play Therapy. Our sample included only programs listed as CACREP accredited in August of 2018, which means it may include programs that were merging or losing accreditation and excludes programs that have since become accredited. Additionally, not all programs may have been accredited during the entire 10-year time frame, and an institution’s Carnegie classification could have changed during that span as well. Specifically, during the 10-year time frame used for this article, Carnegie classifications were reviewed every 5 years; currently, review and reclassification occur every 3 years. Future research could account for this by organizing publications in 3-year clusters and including reclassification as a variable for data analysis. Future research also might consider additional counseling journals not affiliated with ACA, the quality and type of manuscripts published (e.g., conceptual, qualitative, quantitative), and the presence of doctoral student authorship in published manuscripts. Further, exploring publications by specific years could reveal particular trends over the 10-year time period.

Implications

Viewing the results of this study through an HLT lens, planned experiences are structured by the program in order to ensure that CACREP standards are met and that students become competent and knowledgeable CES faculty. However, faculty members are positioned to provide opportunities for doctoral students to have unplanned experiences and to support doctoral students navigating unplanned experiences beyond their control. In terms of research, the authors of this article argue for the necessity of increased opportunities for CES doctoral students to engage in unplanned experiences such as formal research teams, supervised research projects, and research collaborations through conducting studies, writing journal articles, and presenting scholarly work. Research and scholarly activity are an integral part of being a CES faculty member (CACREP, 2015).

Balancing the expectations of various CES roles, such as teaching, student mentorship, research, and leadership, creates natural pressure for faculty members, contributing to challenges such as difficulties with time management and role confusion (Smith & Leppma, 2017). For faculty members expected to produce several articles per year, tenure and promotion requirements may increase this perceived pressure, as one’s job security often depends on one’s rate of publication. Tenure and promotion requirements promote the need for quality scholarship published in peer-reviewed journals; however, the expectations of CES roles are not consistent across universities and programs, resulting in differences in the impact on scholarly productivity and in the perceived pressure to publish (Ray et al., 2011). These expectations may also influence the level of engagement CES faculty have with students regarding their research projects and endeavors. According to Section C of the ACA Code of Ethics, counselors have an ethical obligation to “engage in counseling practices that are based on rigorous research methodologies” (ACA, 2014, p. 8), and an entire section (Section G) is dedicated to research and publication. The ACA code not only offers guidance for ensuring research is conducted ethically to protect participants’ rights, but it also calls for research to be used as a means for promoting a healthier and more just society. CES faculty are charged to produce research and to engage doctoral students in developing and participating in research publication (Lambie et al., 2014; Wester et al., 2013). Future research exploring annual publication expectations and the number of publications at important tenure and promotion milestones for CES faculty could provide clarity regarding program and university workloads.

The authors suggest that programs and faculty create ample opportunities for doctoral students to engage in research through research teams and by establishing expectations to publish during students’ doctoral tenure. Programs vary widely in their research training; although some programs provide clear and established research teams, a majority do not. Even fewer programs require students to submit a publication to a professional journal prior to candidacy (Goodrich et al., 2011). By providing doctoral students with research mentorship and opportunities to collaborate on scholarly work, faculty members increase the likelihood that doctoral students will engage in research activities. Doctoral students who not only engage in research-related activities but also publish during their doctoral program are more likely to have increased interest, engagement, and competence in research-related tasks (Lambie & Vaccaro, 2011). Doctoral program faculty should not only design courses that teach research methods but should also infuse research and scholarly writing into every course. Although it might seem more difficult to do this in certain types of courses, such as those with a clinical focus, CES faculty could use those opportunities to teach and practice action research (Whiston, 1996), qualitative research (Hays & Wood, 2011), or single-case research designs (Ray, 2015), giving students the tools necessary to efficiently produce quality research, especially if they obtain faculty positions in CES programs.

Additionally, students can approach faculty advisors for assistance identifying their interests and strengths and seek out mentorship opportunities that align with their career ambitions during the initial year of their doctoral program. Further, as mentors and advisors, faculty members can help doctoral students identify their interests and strengths, set career goals, and align those goals with appropriate types of institutions. For instance, it appears that programs at D/PU institutions with moderate emphasis on scholarship and research may want to develop or continue to develop research mentorship for doctoral students to improve their job placement opportunities. Further, although M1 institutions are not involved in the training of doctoral students, this group comprises a majority of programs, indicating that a good portion of doctoral students will be working at master’s-level institutions, and if placed at an M1, they may still have an intrinsic or extrinsic responsibility to conduct and publish research.

Conclusion

The authors sought to further understand the publication trends of faculty in 396 CACREP-accredited CES programs based on Carnegie classification by exploring 5,250 publications over the last decade in 21 ACA and ACA division journals and how these results can be used to inform CES training and preparation of doctoral students through an HLT framework. Although findings indicate that programs at R1 and R2 institutions account for nearly 70% of research, a majority of the remainder of CES literature (nearly 28%) is produced by D/PUs and larger master’s programs (M1s), indicating a greater emphasis on research than previously perceived at non-doctoral institutions. Programs and faculty can provide enriched experiences through advising and mentorship to better prepare future counselor educators in the areas of research and scholarship.

 

Conflict of Interest and Funding Disclosure
The authors reported no conflict of interest
or funding contributions for the development
of this manuscript.

 

References

Agresti, A. (2007). An introduction to categorical data analysis (2nd ed.). Wiley.

American Counseling Association. (2014). ACA code of ethics. https://www.counseling.org/Resources/aca-code-of-ethics.pdf

Baker, C. A., & Moore, J. L., III. (2015). Experiences of underrepresented doctoral students in counselor education. Journal for Multicultural Education, 9(2), 68–84. https://doi.org/10.1108/JME-11-2014-0036

Barrio Minton, C. A., Fernando, D. M., & Ray, D. C. (2008). Ten years of peer-reviewed articles in counselor education: Where, what, who? Counselor Education and Supervision, 48(2), 133–143. https://doi.org/10.1002/j.1556-6978.2008.tb00068.x

Casto, C., Caldwell, C., & Salazar, C. F. (2005). Creating mentoring relationships between female faculty and students in counselor education: Guidelines for potential mentees and mentors. Journal of Counseling & Development, 83(3), 331–336. https://doi.org/10.1002/j.1556-6678.2005.tb00351.x

Chronicle of Higher Education. (2018, August 19). Average salaries of full-time instructional staff members at 4-year public institutions, by gender, 2016–17. https://www.chronicle.com/article/average-salaries-of-full-time-instructional-staff-members-at-4-year-public-institutions-by-gender-2016-17/

Council for Accreditation of Counseling and Related Educational Programs. (2015). 2016 CACREP standards. http://www.cacrep.org/wp-content/uploads/2017/08/2016-Standards-with-citations.pdf

Cusworth, S. (2001, August 24–28). Orientation and retention of counseling PhD students: A qualitative study [Paper presentation]. Annual conference of the American Psychological Association, San Francisco, CA, United States.

Davis, T. E., Levitt, D. H., McGlothlin, J. M., & Hill, N. R. (2006). Perceived expectations related to promotion and tenure: A national survey of CACREP program liaisons. Counselor Education and Supervision, 46(2), 146–156. https://doi.org/10.1002/j.1556-6978.2006.tb00019.x

Del Rio, C. M., & Mieling, G. G. (2012). What you need to know: PhDs in counselor education and supervision. The Family Journal, 20(1), 18–28. https://doi.org/10.1177/1066480711429265

Depaoli, S., & van de Schoot, R. (2017). Improving transparency and replication in Bayesian statistics: The WAMBS-checklist. Psychological Methods, 22(2), 240–261. https://doi.org/10.1037/met0000065

Dollarhide, C. T., Gibson, D. M., & Moss, J. M. (2013). Professional identity development of counselor education doctoral students. Counselor Education and Supervision, 52(2), 137–150. https://doi.org/10.1002/j.1556-6978.2013.00034.x

Dunn, M., & Kniess, D. R. (2019). To pursue or not to pursue a terminal degree. New Directions for Student Services, 2019(166), 41–50. https://doi.org/10.1002/ss.20306

Goodrich, K. M., Shin, R. Q., & Smith, L. C. (2011). The doctorate in counselor education. International Journal for the Advancement of Counselling, 33(3), 184–195. https://doi.org/10.1007/s10447-011-9123-7

Gysbers, N. C., Heppner, M. J., & Johnston, J. A. (2014). Career counseling: Holism, diversity, and strengths (4th ed.). American Counseling Association.

Hays, D. G., & Wood, C. (2011). Infusing qualitative traditions in counseling research designs. Journal of Counseling & Development, 89(3), 288–295. https://doi.org/10.1002/j.1556-6678.2011.tb00091.x

Hinkle, M., Iarussi, M. M., Schermer, T. W., & Yensel, J. F. (2014). Motivations to pursue the doctoral degree in counselor education and supervision. The Journal of Counselor Preparation and Supervision, 6(1). https://doi.org/10.7729/51.1069

Hoskins, C. M., & Goldberg, A. D. (2005). Doctoral student persistence in counselor education programs: Student–program match. Counselor Education and Supervision, 44(3), 175–188. https://doi.org/10.1002/j.1556-6978.2005.tb01745.x

Kahn, J. H. (2001). Predicting the scholarly activity of counseling psychology students: A refinement and extension. Journal of Counseling Psychology, 48(3), 344–354. https://doi.org/10.1037/0022-0167.48.3.344

Krumboltz, J. D. (2009). The happenstance learning theory. Journal of Career Assessment, 17(2), 135–154. https://doi.org/10.1177/1069072708328861

Kuo, P. B., Woo, H., & Bang, N. M. (2017). Advisory relationship as a moderator between research self-efficacy, motivation, and productivity among counselor education doctoral students. Counselor Education and Supervision, 56(2), 130–144. https://doi.org/10.1002/ceas.12067

Lambie, G. W., Ascher, D. L., Sivo, S. A., & Hayes, B. G. (2014). Counselor education doctoral program faculty members’ refereed article publications. Journal of Counseling & Development, 92(3), 338–346. https://doi.org/10.1002/j.1556-6676.2014.00161.x

Lambie, G. W., Sias, S. M., Davis, K. M., Lawson, G., & Akos, P. (2008). A scholarly writing resource for counselor educators and their students. Journal of Counseling & Development, 86(1), 18–25. https://doi.org/10.1002/j.1556-6678.2008.tb00621.x

Lambie, G. W., & Vaccaro, N. (2011). Doctoral counselor education students’ levels of research self-efficacy, perceptions of the research training environment, and interest in research. Counselor Education and Supervision, 50(4), 243–258. https://doi.org/10.1002/j.1556-6978.2011.tb00122.x

Love, K. M., Bahner, A. D., Jones, L. N., & Nilsson, J. E. (2007). An investigation of early research experience and research self-efficacy. Professional Psychology: Research and Practice, 38(3), 314–320. https://doi.org/10.1037/0735-7028.38.3.314

Nelson, K. W., Oliver, M., & Capps, F. (2006). Becoming a supervisor: Doctoral student perceptions of the training experience. Counselor Education and Supervision, 46(1), 17–31. https://doi.org/10.1002/j.1556-6978.2006.tb00009.x

Newhart, S., Mullen, P. R., Blount, A. J., & Hagedorn, W. B. (2020). Factors influencing publication rates among counselor educators. Teaching and Supervision in Counseling, 2(1), 45–59. https://doi.org/10.7290/tsc020105

Olson, G. A. (2018, July 29). What institutions gain from higher Carnegie status. The Chronicle of Higher Education. https://www.chronicle.com/article/What-Institutions-Gain-From/244052

Perera-Diltz, D., & Duba Sauerheber, J. (2017). Mentoring and other valued components of counselor educator doctoral training: A Delphi study. International Journal of Mentoring and Coaching in Education, 6(2), 116–127. https://doi.org/10.1108/IJMCE-09-2016-0064

Protivnak, J. J., & Foss, L. L. (2009). An exploration of themes that influence the counselor education doctoral student experience. Counselor Education and Supervision, 48(4), 239–256. https://doi.org/10.1002/j.1556-6978.2009.tb00078.x

Purgason, L. L., Avent, J. R., Cashwell, C. S., Jordan, M. E., & Reese, R. F. (2016). Culturally relevant advising: Applying relational-cultural theory in counselor education. Journal of Counseling & Development, 94(4), 429–436. https://doi.org/10.1002/jcad.12101

Ramsey, M., Cavallaro, M., Kiselica, M., & Zila, L. (2002). Scholarly productivity redefined in counselor education. Counselor Education and Supervision, 42(1), 40–57. https://doi.org/10.1002/j.1556-6978.2002.tb01302.x

Ray, D. C. (2015). Single-case research design and analysis: Counseling applications. Journal of Counseling & Development, 93(4), 394–402. https://doi.org/10.1002/jcad.12037

Ray, D. C., Hull, D. M., Thacker, A. J., Pace, L. S., Swan, K. L., Carlson, S. E., & Sullivan, J. M. (2011). Research in counseling: A 10-year review to inform practice. Journal of Counseling & Development, 89(3), 349–359. https://doi.org/10.1002/j.1556-6678.2011.tb00099.x

Sackett, C. R., Hartig, N., Bodenhorn, N., Farmer, L. B., Ghoston, M. R., Graham, J., & Lile, J. (2015). Advising master’s students pursuing doctoral study: A survey of counselor educators and supervisors. The Professional Counselor, 5(4), 473–485. https://doi.org/10.15241/crs.5.4.473

Sangganjanavanich, V. F., & Balkin, R. S. (2013). Burnout and job satisfaction among counselor educators. The Journal of Humanistic Counseling, 52(1), 67–79. https://doi.org/10.1002/j.2161-1939.2013.00033.x

Sears, S. J., & Davis, T. E. (2003). The doctorate in counselor education: Implications for leadership. In J. D. West, C. J. Osborn, & D. L. Bubenzer (Eds.), Leaders and legacies: Contributions to the profession of counseling (pp. 95–108). Brunner-Routledge.

Shropshire, S., Semenza, J. L., & Kearns, K. (2015). Promotion and tenure: Carnegie reclassification triggers a revision. Library Management, 36(4/5), 340–350. https://doi.org/10.1108/LM-09-2014-0113

Smith, M. C., & Leppma, M. (2017). The manuscript completion workshop: Supporting professional development of tenure track faculty members. Journal of Faculty Development, 31(2), 43–48.

Wester, K. L., Borders, L. D., Boul, S., & Horton, E. (2013). Research quality: Critique of quantitative articles in the Journal of Counseling & Development. Journal of Counseling & Development, 91(3), 280–290. https://doi.org/10.1002/j.1556-6676.2013.00096.x

Whiston, S. C. (1996). Accountability through action research: Research methods for practitioners. Journal of Counseling & Development, 74(6), 616–623. https://doi.org/10.1002/j.1556-6676.1996.tb02301.x

Whitaker, M. (2018, November 21). How to be strategic on the tenure track. The Chronicle of Higher Education. https://www.chronicle.com/article/How-to-Be-Strategic-on-the/244863

Wong, E. S. K., & Heng, T. N. (2009). Case study of factors influencing job satisfaction in two Malaysian universities. International Business Research, 2(2), 86–98. https://doi.org/10.5539/ibr.v2n2p86

Zeligman, M., Prescod, D. J., & Greene, J. H. (2015). Journey toward becoming a counselor education doctoral student: Perspectives of women of color. The Journal of Negro Education, 84(1), 66–79. https://doi.org/10.7709/jnegroeducation.84.1.0066

 

Cian L. Brown, MS, NCC, LPC, BCN, is a doctoral candidate at the University of Arkansas. Anthony J. Vajda, PhD, NCC, is an assistant professor at the University of Arkansas. David D. Christian, PhD, LPC, is an assistant professor at the University of Arkansas. Correspondence may be addressed to Cian Brown, University of Arkansas, 751 W. Maple St., GRAD 117, Fayetteville, AR 72701, clb061@uark.edu.