Current Practices in Online Counselor Education

William H. Snow, Margaret R. Lamar, J. Scott Hinkle, Megan Speciale

 

The Council for Accreditation of Counseling & Related Educational Programs (CACREP) database of institutions revealed that, as of March 2018, 36 CACREP-accredited institutions offered 64 online degree programs. As the number of online programs with CACREP accreditation continues to grow, an expanding body of research on best practices in remote instruction refutes the lingering perception that online instruction is inherently inferior to residential programming. The purpose of this article is to explore the current literature, outline the features of existing online programs, and report survey results from 31 online counselor educators describing their distance education experience, including the challenges they face and the methods they use to ensure student success.


Keywords:
online, distance education, remote instruction, counselor education, CACREP

 

Counselor education programs are increasingly offered via distance education, commonly referred to as distance learning or online education. Growth in online counselor education has followed a trend similar to that in higher education generally (Allen & Seaman, 2016). Adult learners prefer varied methods of obtaining education, which is especially important in counselor education for students who work full-time, have families, and value the flexibility of distance learning (Renfro-Michel, O’Halloran, & Delaney, 2010). Students choose online counselor education programs for many reasons, including geographic isolation, limited mobility, time-intensive work commitments, childcare responsibilities, and physical limitations (The College Atlas, 2017). Others may choose online learning simply because it fits their learning style (Renfro-Michel et al., 2010). Additionally, underserved and marginalized populations may benefit from the flexibility and accessibility of online counselor education and training.

The Council for Accreditation of Counseling & Related Educational Programs (CACREP; 2015) accredits online programs and has determined that these programs meet the same standards as residential programs. Consequently, counselor education needs a greater awareness of how online programs deliver instruction and meet CACREP standards. Specifically, existing online programs can benefit from one another’s experience by learning how to exceed minimum accreditation expectations through the newest technologies and pedagogical approaches (Furlonger & Gencic, 2014). The current study provides information regarding the state of online counselor education in the United States by exploring faculty members’ descriptions of their online programs, including the technologies they use, their approaches to building student and program community, and the challenges they face.

 

Distance Education Defined

Despite its common usage throughout higher education, the U.S. Department of Education (DOE) does not use the terms distance learning, online learning, or online education; rather, it has adopted the term distance education (DOE, 2012). In practice, however, the terms distance education, distance learning, online learning, and online education are used interchangeably. The DOE has defined distance education as the use of one or more technologies that deliver instruction to students who are separated from the instructor and that support “regular and substantive interaction between the students and the instructor, either synchronously or asynchronously” (2012, p. 5). The DOE has specified that these technologies may include the internet, one-way and two-way transmissions through open broadcast and other communications devices, audioconferencing, videocassettes, DVDs, and CD-ROMs. Programs are considered distance education programs if 50% or more of their instruction is delivered via distance learning technologies; residential programs may contain distance education elements and still characterize themselves as residential if less than 50% of their instruction is delivered this way. Traditional on-ground universities are incorporating online components at increasing rates; in fact, 67% of students in public universities took at least one distance education course in 2014, further reflecting the growth in this teaching modality (Allen & Seaman, 2016).

Enrollment in online education continues to grow, with nearly 6 million students in the United States engaged in distance education courses and approximately 2.8 million taking online classes exclusively (Allen & Seaman, 2016). A March 2018 review of the CACREP database of accredited institutions identified 36 institutions offering 64 online degree programs. Although accurate numbers are not available from any official source, a conservative estimate is that over 12,000 students are enrolled in a CACREP-accredited online program. Compared to the latest published 2016 CACREP enrollment figure of 45,820 (CACREP, 2017), online students now constitute over 25% of the total. This figure does not include the many residential counselor education students in hybrid programs who may take one or more classes through distance learning.
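As a rough arithmetic check of that proportion, dividing the estimated online enrollment by the published total gives

\[
\frac{12{,}000}{45{,}820} \approx 0.262,
\]

or roughly 26%, consistent with the over-25% estimate.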

At the time of this writing, an additional three institutions were listed as under CACREP review, and their students will likely soon add to this growing online enrollment. As this trend continues, it is essential for counselor education programs to understand the issues, trends, and best practices in online education in order to make informed choices regarding counselor education and training, as well as to prepare graduates for employment. It also is important for hiring managers in mental health agencies to understand the nature and quality of the training that graduates of these programs have received.

One important factor contributing to the growth of online learning is the accessibility it can bring to diverse populations throughout the world (Sells, Tan, Brogan, Dahlen, & Stupart, 2012). For instance, populations without access to traditional residential, brick-and-mortar classroom experiences can benefit from the greater flexibility and ease of attendance that distance learning offers (Bennett-Levy, Hawkins, Perry, Cromarty, & Mills, 2012). Remote areas of the United States, including rural and frontier regions, often lack physical access to counselor education programs, which limits the number of service providers in remote and traditionally underserved parts of the country. Additionally, the online counselor education environment makes it possible for commuters to take some of their coursework remotely, especially in winter, when travel can become a safety issue, and in urban areas, where travel is lengthy and stressful because of traffic.

 

The Online Counselor Education Environment

The Association for Counselor Education and Supervision (ACES) Technology Interest Network (2017) recently published guidelines for distance education within counselor education that offer useful suggestions to online counselor education programs and to programs looking to establish online courses. Current research indicates that successful distance education programs include active and engaged faculty–student collaboration, frequent communication, sound pedagogical frameworks, and interactive, technically uncomplicated support and resources (Benshoff & Gibbons, 2011; Murdock & Williams, 2011). Physical distance and the associated lack of student–faculty connection has been a concern in the development of online counselor education programs. In its infancy, videoconferencing was unreliable, unaffordable, and often a technological distraction to the learning process. The newest wave of technology-enhanced distance education has improved interactions by using email, e-learning platforms, and threaded discussion boards to make asynchronous messaging virtually instantaneous (Hall, Nielsen, Nelson, & Buchholz, 2010). Today, with the availability of affordable and reliable products such as GoToMeeting, Zoom, and Adobe Connect, online counselor educators regularly hold live, synchronous meetings with students, including individual advising, group supervision, and entire class sessions.

It is important to convey that online interactions are different from face-to-face interactions, but they are not inferior to an in-person faculty–student learning relationship (Hickey, McAleer, & Khalili, 2015). Students and faculty often prefer one modality over the other, contingent upon their belief in the effectiveness of the modality overall and their sense of personal fit with this style of teaching and learning (Watson, 2012). In the actual practice of distance education, professors and students are an email, phone call, or videoconference away; thus, communication with peers and instructors is readily accessible (Murdock & Williams, 2011; Trepal, Haberstroh, Duffey, & Evans, 2007). When communicating online, students may feel more relaxed and less inhibited, which may facilitate more self-disclosure, reflexivity, and rapport via increased dialogue (Cummings, Foels, & Chaffin, 2013; Watson, 2012). Likewise, faculty who are well organized, technologically proficient, and responsive to students’ requests may prefer online teaching opportunities and find their online student connections more engaging and satisfying (Meyer, 2015). Upon Institutional Review Board approval, an exploratory survey of online counselor educators was conducted in 2016 and 2017 to better understand the current state of distance counselor education in the United States.

 

Method

Participants

Recruitment of participants was conducted via the ACES Listserv (CESNET). No financial incentive or other reward was offered for participation. The 31 participants constituted a convenience sample, a common first step in preliminary research efforts (Kerlinger & Lee, 1999). Participants categorized themselves as full-time faculty members (55.6%), part-time faculty members (11.1%), academic chairs and department heads (22.2%), academic administrators (3.7%), and other roles (7.4%).

 

Study Design and Procedure

The survey was written and administered using Qualtrics, a commercial web-based product. The survey contained questions aimed at exploring online counselor education programs, including current technologies utilized, approaches to reducing social distance, development of community among students, major challenges in conducting online counselor education, and current practices in meeting these challenges. The survey was composed of one demographic question, 15 multiple-response questions, and two open-ended survey questions. The demographic question asked about the respondent’s role in the university. The 15 multiple-response questions included items such as: (a) How does online counselor education fit into your department’s educational mission? (b) Do you provide a residential program in which to compare your students? (c) How successful are your online graduates in gaining postgraduate clinical placements and licensure? (d) What is the average size of an online class with one instructor? and (e) How do online students engage with faculty and staff at your university? Two open-ended questions were asked: “What are the top 3 to 5 best practices you believe are most important for the successful online education of counselors?” and “What are the top 3 to 5 lessons learned from your engagement in the online education of counselors?”

Additional questions focused on the type of department and its organization, graduates’ acceptance to doctoral programs, the amount of time required on the physical campus, e-learning platforms and technologies, online challenges, and best practices and lessons learned in online education. The 18 survey questions were designed for completion in no more than 20 minutes. The survey was active for 10 months, during which time three appeals for responses yielded 31 respondents.

 

Procedure

An initial recruiting email and three follow-ups were sent via CESNET. Potential participants were invited to visit a web page that first led to an introductory paragraph and informed consent page. An embedded skip logic system required consent before allowing access to the actual survey questions.

The results were exported from the Qualtrics web-based survey product, and the analysis of the 15 fixed-response questions produced descriptive statistics. Cross-tabulations and chi-square statistics further compared the perceptions of faculty with those of respondents identifying themselves as departmental chairs and administrators.
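As an illustration only (not the authors’ actual analysis script), the following Python sketch shows how fixed-response answers might be cross-tabulated by respondent role and tested with a chi-square test of independence using pandas and SciPy; the respondent data, role labels, and satisfaction categories are hypothetical.

# Minimal illustration of cross-tabulating survey responses by respondent
# role and running a chi-square test of independence on the resulting table.
import pandas as pd
from scipy.stats import chi2_contingency

# Hypothetical coded responses: respondent role and satisfaction rating.
responses = pd.DataFrame({
    "role": ["faculty", "faculty", "chair/admin", "faculty", "chair/admin", "faculty"],
    "satisfaction": ["satisfied", "neutral", "satisfied", "satisfied", "dissatisfied", "neutral"],
})

# Cross-tabulate roles against satisfaction categories.
table = pd.crosstab(responses["role"], responses["satisfaction"])

# Chi-square test of independence on the contingency table.
chi2, p, dof, expected = chi2_contingency(table)
print(table)
print(f"chi2({dof}, N = {len(responses)}) = {chi2:.3f}, p = {p:.3f}")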

The two open-ended questions (“What are the top 3 to 5 best practices you believe are most important for the successful online education of counselors?” and “What are the top 3 to 5 lessons learned from your engagement in the online education of counselors?”) yielded 78 statements about lessons learned and 80 statements about best practices, for a total of 158 statements. The analysis of these 158 narrative comments began with reviewing each response individually and extracting its common words and phrases. Many responses contained more than one suggestion or comment; some were a paragraph in length, so more than one key word or phrase could come from a single narrative response. This first step yielded a master list of 18 common words and phrases. The second step was to review each comment again, compare it to this master list, and place a check mark for each applicable category. The third step was to look for similarities among the 18 common words and phrases and group them into a smaller number of meaningful categories. These steps were checked among the researchers for fidelity of reporting and trustworthiness.
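To make the coding procedure concrete, the following Python sketch shows one way such keyword-based tagging and tallying could be implemented; the category names, key phrases, and sample comments are illustrative assumptions rather than the study’s actual master list or data.

# Sketch of the three-step coding process: tag each narrative comment with
# any matching key phrase, then tally how often each category is endorsed.
from collections import Counter

# Steps 1-2: an illustrative master list of categories and key phrases.
categories = {
    "student engagement": ["engage", "engagement", "interaction"],
    "community building": ["community", "dialogue", "cohort"],
    "clinical training": ["supervision", "practicum", "clinical"],
    "course organization": ["organized", "planned", "syllabus"],
}

comments = [
    "Keep students engaged and build a sense of community.",
    "Courses must be well organized with clear syllabi.",
]

tallies = Counter()
for comment in comments:
    text = comment.lower()
    for category, keywords in categories.items():
        # A single comment can receive a check mark in more than one category.
        if any(keyword in text for keyword in keywords):
            tallies[category] += 1

# Step 3: grouped counts, analogous to the reported category frequencies.
for category, count in tallies.most_common():
    print(f"{category}: n = {count}")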

 

Results

Thirty-one distance learning counselor education faculty, department chairs, and administrators responded to the survey. They reported maximum class sizes ranging from 10 to 40, with a mean of 20.6 (SD = 6.5); the average class size was 15.5 (SD = 3.7). When asked how online students are organized within their university, 26% reported that students choose classes on an individual basis, 38% said students are individually assigned classes using an organized schedule, and 32% indicated that students take assigned classes together as a cohort.

Additionally, respondents were asked how online students engage with faculty and staff at their university. Email was used by all respondents (100%), followed by phone calls (94%). Synchronous live group discussions using videoconferencing technologies were used by 87%, individual video calls by 77%, and asynchronous electronic discussion boards by 87% of the counselor education programs.

Ninety percent of respondents indicated that remote or distance counseling students were required to attend the residential campus at least once during their program: 13% required students to come to campus only once, 52% required students to attend twice, and 26% required students to come to a physical campus location four or more times.

All participants indicated using some form of online learning platform, with Blackboard (65%), Canvas (23%), Pearson eCollege (6%), and Moodle (3%) among those most often listed. Respondents rated their satisfaction with their current online learning platform as very dissatisfied (6.5%), dissatisfied (3.2%), somewhat dissatisfied (6.5%), neutral (9.7%), somewhat satisfied (16.1%), satisfied (41.9%), or very satisfied (9.7%). There was no significant relationship between the platform used and the level of satisfaction, χ2(18, N = 30) = 11.04, p > .05, suggesting that no platform clearly outperformed the others. Ninety-seven percent of respondents reported using videoconferencing for teaching and individual advising through programs such as Adobe Connect (45%), Zoom (26%), or GoToMeeting (11%), while 19% reported using an assortment of other related technologies.

Participants were asked about their university’s greatest challenges in providing quality online counselor education. They were given five pre-defined options and a sixth option of “other” with a text box for further elaboration, and were allowed to choose more than one category. Responses included making online students feel a sense of connection to the university (62%), changing faculty teaching styles from traditional classroom models to those better suited for online coursework (52%), providing experiential clinical training to online students (48%), supporting quality practicum and internship experiences for online students residing at a distance from the physical campus (38%), convincing faculty that quality outcomes are possible with online programs (31%), and other (10%).

Each participant was asked what their institution did to ensure students could succeed in online counselor education. They were given three pre-defined options and a fourth option of “other” with a text box for further elaboration, and were allowed to choose more than one option. The responses included specific screening through the admissions process (58%), technology and learning platform support for online students (48%), and assessment for online learning aptitude (26%). Twenty-three percent chose the category of other and mentioned small classes, individual meetings with students, providing student feedback, offering tutorials, and ensuring accessibility to faculty and institutional resources.

Two open-ended questions were asked and narrative comments were analyzed, sorted, and grouped into categories. The first open-ended question was: “What are the top 3 to 5 best practices that are the most important for the successful online education of counselors?” This yielded 78 narrative comments that fit into the categories of fostering student engagement (n = 19), building community and facilitating dialogue (n = 14), supporting clinical training and supervision (n = 11), ensuring courses are well planned and organized (n = 10), providing timely and robust feedback (n = 6), ensuring excellent student screening and advising (n = 6), investing in technology (n = 6), ensuring expectations are clear and set at a high standard (n = 5), investing in top-quality learning materials (n = 4), believing that online counselor education works (n = 3), and other miscellaneous comments (n = 4). Some narrative responses contained more than one suggestion or comment that fit multiple categories.

The second open-ended question—“What are the top 3 to 5 lessons learned from the online education of counselors?”—yielded 80 narrative comments that fit into the categories of fostering student engagement (n = 11), ensuring excellent student screening and advising (n = 11), recognizing that online learning has its own unique workload challenges for students and faculty (n = 11), providing timely and robust feedback (n = 8), building community and facilitating dialogue (n = 7), ensuring courses are well planned and organized (n = 7), investing in technology (n = 6), believing that online counselor education works (n = 6), ensuring expectations are clear and set at a high standard (n = 5), investing in top-quality learning materials (n = 3), supporting clinical training and supervision (n = 2), and other miscellaneous comments (n = 8).

Each participant was asked how online counselor education fit into their department’s educational mission and was given three categorical choices. Nineteen percent stated it was a minor focus, 48% a major focus, and 32% the primary focus of their department’s educational mission.

The 55% of participants indicating they had both residential and online programs were asked to respond to three follow-up multiple-choice questions gauging the success rates of their online graduates (versus residential graduates) in attaining: (1) postgraduate clinical placements, (2) postgraduate clinical licensure, and (3) acceptance into doctoral programs. Ninety-three percent stated that online graduates were as successful as residential students in gaining postgraduate clinical placements. Ninety-three percent stated online graduates were equally successful in obtaining state licensure. Eighty-five percent stated online graduates were equally successful in getting acceptance into doctoral programs.

Small differences in perception were further analyzed. Chi-square analyses revealed no statistically significant differences in positive perceptions of online graduates’ success in gaining postgraduate clinical placements, χ2(2, N = 13) = 0.709, p > .05; in gaining postgraduate clinical licensure, χ2(2, N = 13) = 0.701, p > .05; or in being accepted into doctoral programs, χ2(2, N = 12) = 1.33, p > .05.

 

Discussion

The respondents reported a mean class size of 15.5 for their distance learning courses. Students in these classes likely benefit from the small class sizes and the relatively low faculty–student ratio; these numbers are lower than those of many residential classes, which can average 25 students or more. It is not clear what the optimal online class size should be, but there is evidence that larger classes may introduce burdens difficult for some students to overcome (Chapman & Ludlow, 2010). Beattie and Thiele (2016) found that first-generation students in larger classes were less likely to talk to their professors or teaching assistants about class-related ideas, and that Black and Latinx students in larger classes were less likely to talk with their professors about their careers and futures.

Programs appeared to have no consistent approach to organizing students and scheduling courses. Some counselor education programs give students the utmost flexibility in selecting classes, others assign classes using a more controlled schedule, and still others assign students to all classes. These three dominant models strike different balances between flexibility and predictability, each with its own advantages and disadvantages.

The model for organizing students affects the social connections students make with one another. In concept, models that provide students with more opportunities for consistent, positive interaction produce students who are more comfortable working with one another and more willing to request and receive constructive feedback from their peers and instructors.

Cohort models, in which students take all courses together over the life of a degree program, are the least flexible but most predictable and have the greatest potential for fostering strong connections. When effectively implemented, cohort models can foster a supportive learning environment and greater student collaboration and cohesion with higher rates of student retention and ultimately higher graduation rates (Barnett & Muse, 1993; Maher, 2005). Advising loads can decrease as cohort students support one another as informal peer mentors. However, cohorts are not without their disadvantages and can develop problematic interpersonal dynamics, splinter into sub-groups, and lead to students assuming negative roles (Hubbell & Hubbell, 2010; Pemberton & Akkary, 2010). An alternative model in which students make their own schedules and choose their own classes provides greater flexibility but fewer opportunities to build social cohesion with others in their program. At the same time, these students may not demonstrate the negative dynamics regarding interpersonal engagement that can occur with close cohort groups.

 

Faculty–Student Engagement

Remote students want to stay in touch with their faculty advisors, course instructors, and fellow students. Numerous engagement opportunities exist through technological tools, including email, text messages, phone calls, and videoconference advising. These fast and efficient tools provide many of the benefits of in-person meetings without the lag time and commute. Faculty and staff need to make it a priority to use these tools and to respond to online students in a timely manner.

All of the technological tools referred to in the survey responses provide excellent connectivity and communication if used appropriately. Students want timely responses, but for a busy faculty or staff member it is easy to let emails and voicemails go unattended. Unanswered emails and voicemail messages can create anxiety for students whose only interaction is through electronic means, and they may reinforce a sense of isolation for students who are “hanging out there” on their own and having to be resourceful to get their needs met. The term timely should therefore be defined and communicated so that faculty and students understand response expectations. Whether responses are expected within 24, 48, or even 72 hours matters less than students knowing when to expect a response.

Survey responses indicated that remote counselor education students are dependent upon technology, including the internet and associated web-based e-learning platforms. When the internet is down, passwords do not work, or computers fail, the remote student’s learning stalls. Counselor education programs offering online programming must provide administrative services, technology, and learning support for online students in order to remediate technology issues quickly when they occur. Standard practice for institutions should include robust technology support to reduce downtime and ensure continuity of operations and connection for remote students.

 

Fostering Program and Institutional Connections

Faculty were asked how often online students were required to come to a physical campus location as part of their program. Programs often refer to short-term campus visits as limited residencies to clarify that students will need to come to campus. Limited residencies are standard, with 90% of respondents reporting that students were required to come to campus at least once. Short-term intensive residencies are excellent opportunities for online students to make connections with their faculty and fellow students (Kops, 2014). Residential intensives also provide opportunities for the university student life office, alumni department, business office, financial aid office, registrar, and other university personnel to connect with students and link a human face to an email address.

Distance learning students want to engage with their university, as well as with fellow students and faculty. They want to feel a sense of connection similar to that of residential students (Murdock & Williams, 2011). Institutions should think creatively about opportunities to include online learners in activities beyond the classroom. For example, one university promoted inclusiveness by moving its traditional weekday residential town halls to a Sunday evening webinar, which allowed greater access, boosted attendance, and helped online counselor education students feel part of the larger institution.

As brick-and-mortar institutions consider how to better engage distance learning students, they should note that a majority of students (53%) taking exclusively distance education courses reside in the same state as the university they are attending (Allen & Seaman, 2016). Given that many are within driving distance of the physical campus, these students may be more open to coming to campus for special events when they feel their presence is valued and know that they are not just part of an electronic platform (Murdock & Williams, 2011).

 

E-Learning Platforms as Critical Online Infrastructure

All participants (100%) reported using an online learning platform. E-learning platforms are the standard means of sharing syllabi, course organization, schedules, announcements, assignments, discussion boards, homework submissions, tests, and grades. They are foundational in supporting faculty instruction and student success, and numerous quality options are available. Overall, online faculty were pleased with their platforms, and no single platform emerged as best.

Online learning platforms are rich in technological features. For example, threaded discussions allow for thoughtful dialogue among students and faculty and are often valued by less verbally competitive students who are reluctant to speak up in class but willing to share their comments in writing. Course examinations and quizzes in a variety of formats can be produced and delivered online through e-learning platforms such as Blackboard, Canvas, and Moodle. Faculty have flexibility over when exams are offered and how much time students have to complete them, and when platforms are used in conjunction with proctoring services such as Respondus, ProctorU, and B-Virtual, the integrity of the examination process can be better protected. Once students complete an exam, the software can automatically score objective questions and provide immediate feedback.

 

Videoconferencing and Virtual Remote Classrooms

Videoconferencing for teaching and individual advising through Adobe Connect, Zoom, GoToMeeting, and related technologies is now standard practice and is changing the nature of remote learning. Distance learning can now employ virtual classroom models with synchronous audio and video communication that closely parallel what occurs in a residential classroom. Videoconferencing platforms provide tools to share PowerPoint slides, graphics, and videos just as in a residential class. Class participants can write on virtual whiteboards with color markers, annotating almost anything on their screen. Group and private chat can provide faculty with real-time feedback during a class session. Newer videoconferencing features allow faculty to break students into smaller, private discussion groups and move among the groups virtually, much as they would in a residential classroom. With preparation, faculty can also administer integrated polls during a video class session. Essentially, videoconferencing tools reduce the distance in distance education.

Videoconference platforms allow faculty to teach clinical skills in nearly the same manner as in residential programs. Counselor education faculty can model skills such as active listening in real time to their online class and then have students individually demonstrate those skills while being observed. Embedded features allow faculty to record the video and audio of any conversation for playback and analysis. Videoconference platforms now offer “breakout” rooms to place students in subgroups for skills practice and debriefing, similar to working in small groups in residential classrooms. Faculty members and teaching assistants can visit each breakout room to ensure students are on task and properly demonstrating counseling skills. Just as in a residential class, students can then reconvene and share the challenges and lessons learned from their small-group experience.

 

Challenges in Providing Remote Counselor Education

Participants were asked to select one or more of their top challenges in providing quality online counselor education. In order of frequency, they reported the greatest challenges as making online students feel a sense of connection to the university (62%), changing faculty teaching styles from brick-and-mortar classroom models to those better suited for online coursework (52%), providing experiential clinical training to online students (48%), supporting quality practicum and internship experiences for online students residing at a distance from the physical campus (38%), and convincing faculty members that quality outcomes are possible with online programs (31%).

Creating a sense of university connection. Counselor education faculty did not report having major concerns with faculty–student engagement. Faculty seemed confident with student learning outcomes using e-learning platforms and videoconferencing tools that serve to reduce social distance between faculty and students and facilitate quality learning experiences. This confidence could be the result of counselor educators’ focus on fostering relationships as a foundational counseling skill (Kaplan, Tarvydas, & Gladding, 2014).

However, faculty felt challenged to foster students’ sense of connection with the larger university. For example, remote students can feel left out when emails and announcements describe opportunities available only to residential students. They also might find it difficult to navigate the university student life office, business department, financial aid office, registration system, and other university systems initially designed for residential students. Highly dependent on their smartphones and computers, remote students can feel neglected as they anxiously wait for responses to email and voicemail inquiries (Milman, Posey, Pintz, Wright, & Zhou, 2015).

In the online environment, there are extracurricular options for participating in town halls, special webinars, and open discussion forums with departmental and university leaders. Ninety percent of the programs require students to come to their physical campus one or more times. These short-term residencies are opportunities for students to meet the faculty, departmental chairs, and university leaders face-to-face and further build a sense of connection.

A majority of online students (53%) reside in the same state as the university they are attending (Allen & Seaman, 2016), with many within commuting distance of their brick-and-mortar campus. These students will appreciate hearing about the same opportunities afforded to residential students, and under the right circumstances and scheduling, they will participate.

Changing faculty teaching styles. Not all residential teaching styles and methods, such as authority-based lecture formats, work well with all students (Donche, De Maeyer, Coertjens, Van Daal, & Van Petegem, 2013). Distance learning students present their own challenges and preferences. Successful distance education programs require active and engaged faculty who communicate frequently with their students, use sound pedagogical frameworks, and maintain a collaborative and interactive style (Benshoff & Gibbons, 2011; Murdock & Williams, 2011). Discovery orientation, discussion, debriefing, action research, and flipped classrooms, in which content is delivered outside the classroom and class time is used to discuss the material, are good examples of more collaborative approaches (Brewer & Movahedazarhouligh, 2018; Donche et al., 2013).

Organization is critical for all students, but more so for remote students who often are working adults with busy schedules. They want to integrate their coursework into other life commitments and want a clear, well-organized, and thoughtfully planned course with all the requirements published in advance, including specific assignment due dates. Distance counselor education faculty will find their syllabi growing longer with more detail as they work to integrate traditional assignments with the e-learning and videoconferencing tools in order to create engaging, predictable, and enjoyable interactive learning experiences.

Providing experiential clinical training. Counselor educators ideally provide multimodal learning opportunities for counseling students to understand, internalize, and demonstrate clinical skills with a diverse clientele. In residential classrooms, the knowledge component is usually imparted through textbooks, supplemental readings, course assignments, video demonstrations, and instructor-led lectures and discussions. Remote programs can provide similar opportunities and replicate residential teaching models through their use of asynchronous e-learning platforms and synchronous videoconferencing technologies.

Asynchronous methods are not well suited for modeling, teaching, and assessing interpersonal skills. However, synchronous videoconferencing technologies provide the same opportunity as residential settings to conduct “fishbowl” class exercises, break students into groups to practice clinical skills, conduct role plays, apply procedural learning, and give students immediate, meaningful feedback about their skills development.

The majority of surveyed programs required remote students to come to campus at least once to assess students for clinical potential, impart critical skills, and monitor student progress in achieving prerequisite clinical competencies required to start practicum. Courses that teach and assess clinical interviewing skills are well suited for these intensive experiences and provide an important gatekeeping function. Faculty not only have the opportunity to see and hear students engage in role plays, but also to see them interact with other students.

Supporting quality practicum and internship experiences. Remote counselor educators report that their programs are challenged in supporting quality practicum and internship experiences. Residential students benefit from the relationships universities develop over time with local public and nonprofit mental health agencies in which practicum and internship students may cluster at one or more sites. Although online students living close enough to the residential campus may benefit from the same opportunities, remote students living at a distance typically do not experience this benefit. They often have to seek out, interview, and compete for a clinical position at a site unfamiliar to their academic program’s field placement coordinator. Thus, online counselor education students will need field placement coordination that can help with unique practicum and internship requirements. The placement coordinator will need to know how to review and approve distance sites without a physical assessment. Relationships with placement sites will need to rely upon email, phone, and teleconference meetings. Furthermore, online students can live in a state other than where the university is located, requiring the field placement coordinator to be aware of various state laws and regulations.

Convincing faculty that quality outcomes are possible. Approximately one-third of the surveyed counselor education faculty reported the need to convince other faculty that quality outcomes are possible with remote counselor education. Changing the minds of skeptical colleagues is challenging, but it should naturally become easier over time as online learning increases, matures, and becomes integrated into the fabric of counselor education. In the interim, programs would be wise to invest in helping skeptical faculty understand that online counselor education can be managed effectively (Sibley & Whitaker, 2015). First, rather than simply telling faculty that online counselor education works, programs should demonstrate levels of interactivity comparable to face-to-face engagement by using state-of-the-art videoconferencing platforms. Second, it is worth sharing positive research outcomes related to remote education. Third, it is best to start small by encouraging residential faculty to first try a hybrid course, holding only one or two of their class sessions online. Fourth, it is important to provide robust support for reluctant but willing faculty who agree to integrate at least one or two online sessions into their residential coursework. Finally, institutions will find more willing faculty if they offer incentives to those who give online counselor education a chance.

 

Ensuring Online Student Success

Student success is defined by the DOE as related to student retention, graduation rates, time to completion, academic success, and gainful employment (Bailey et al., 2011). Counselor education programs would likely add clinical success in practicum and internship and post-master’s licensure to these critical success outcomes.

The survey respondents reported that student success begins with making sure that admitted students have the aptitude to learn via online distance education. Students may hold the unrealistic perception that distance education is somehow less academically rigorous, so programs need to ensure students are prepared for the unique aspects of online versus residential learning. Fifty-eight percent of the programs engaged in student screening beginning with the admissions process. A quarter of the respondents used a formal assessment tool to assess students for success factors such as motivation, learning style, study habits, access to technology, and technological skills. A commonly used instrument was the Online Readiness Assessment developed by Williams (2017).

 

Lessons Learned and Best Practices

The 158 statements regarding best practices and lessons learned were further refined to yield six imperatives for success in online counselor education: (1) fostering student–faculty–community engagement (57.4%); (2) providing high expectations, excellent screening, advising, and feedback (36%); (3) investing in quality instructional materials, course development, and technology support (30.5%); (4) providing excellent support for online clinical training and supervision (14.6%); (5) recognizing the workload requirements and time constraints of online students; and (6) working to instill in others the belief that quality outcomes are possible with online counselor education programs (10.1%). The remaining assorted responses (13.5%) did not fit these categories.

An indicator of success for many counselor education programs is the rate at which students graduate, obtain clinical placement, and become licensed. There is also an interest in how successful graduates are in becoming admitted into doctoral programs. For online programs, a further benchmark test is to compare online student graduation, licensure, and doctoral admissions rates to those in residential programs. Fifty-five percent of the respondents served in programs with residential as well as online students. These respondents were able to compare their online student outcomes to residential student outcomes. Their perception was that online graduates were as successful as residential students in gaining postgraduate clinical placements (93%), obtaining state licensure (93%), and acceptance into doctoral programs (85%). They generally believed online graduates were competitive with residential graduates.

 

Limitations, Recommendations, and Conclusion

Limitations of the Study

When this study began in 2016, there were 11 CACREP-accredited institutions offering online counselor education programs, and by March 2018, there were 36. This study represents a single snapshot of the online counselor education experience during a time of tremendous growth.

This study focused on the reported experience of faculty, departmental chairs, and administrators who have some commitment to and investment in online learning. Some would point out that those who advocate for remote counselor education may be biased when relaying their own experiences, anecdotal evidence, and personal comparisons of online and residential teaching.

This exploratory study was not comprehensive in its inclusion of all the factors associated with online counselor education. Specific program details were not emphasized; additional information could have been gathered about university and departmental resources for remote education, faculty training for online educational formats, and student evaluations of online courses. The numerous technologies in use were identified, but their differential effectiveness was not examined. Future studies should include these variables, as well as other factors, to provide further information about the successes and challenges of online counselor education.

This survey assessed the informed opinions of counselor education faculty and administrators, who responded that they were generally satisfied with the various aspects of their programs, including student outcomes. What was not assessed was the actual quality of the education itself. To change the minds of skeptics, more than opinions and testimonies will be needed. Future studies need to objectively compare learning outcomes, demonstrate quality, and delineate how remote counselor education programs are meeting the challenges of training counselors within distance learning modalities.

 

Recommendations

The dynamic nature of the field of online counselor education requires ongoing study. As more programs offer courses and full programs through distance learning modalities, they can contribute their own unique expertise and lessons learned to inform and enrich the broader field.

The challenge of faculty skepticism and possible mixed motives regarding online learning will continue to be problematic. There is a lingering perception among some faculty that online counselor education programs are not equivalent to residential training, reflecting an inherent bias in which residential means higher quality and online means lower quality. Some faculty may teach online courses only for additional compensation while privately holding reservations. Departmental chairs and academic administrators may want the same high levels of quality but find themselves driven more by the responsibility for meeting enrollment numbers and budgets; in times of scarcity, they may see online counselor education as the answer for new revenue sources (Jones, 2015). For others, online education may raise concerns even as it appeals because of its innovative qualities or its potential to advance social justice by increasing access to higher education for underserved populations. The best way to clarify these issues and better inform skeptics is to present them with objective data regarding the nature and positive contributions of remote counselor education learning outcomes.

Aside from the modality of their instructional platform, it is important to understand whether effective remote counselor educators differ from equally effective residential course instructors. Remote teaching effectiveness might be associated with some combination of attributes, interests, and motivations, and thus with self-selection to teach remote students. Further studies will need to tease out what works, what does not, and what types of faculty and faculty training make someone best suited for remote counselor education.

Technology is critical to the advances in remote counselor education. Email, smartphones, texting, and e-learning platforms have helped faculty create engaging courses with extensive faculty–student interaction. Videoconferencing in particular has reduced the social distance between faculty and remote students. As noted earlier, innovative programs are taking the distance out of distance counselor education, with the virtual remote classroom providing experiences similar to those of residential classes. The nature of these technologically facilitated relationships deserves further study to determine which technologies and related protocols enhance learning and which impede it.

A logical next step is to build on the work that has been accomplished and conduct more head-to-head comparisons of student outcomes between remote and residential programs. This is very feasible, as 34 of the 36 institutions currently offering online counselor education programs also have a residential program with which to make comparisons. These within-institution comparisons will be inherently quasi-experimental. Effective comparisons of delivery models will require systematically implemented, reliable, and valid measures of student learning outcomes at strategic points in the counselor training program. The Counseling Competencies Scale (Lambie, Mullen, Swank, & Blount, 2018) is a commonly used standardized assessment for graduate students engaged in clinical practicum and internship, and National Counselor Examination scores of current students and recent graduates can provide standardized measures to compare outcomes of graduates across programs.

Finally, although we can learn from institutional best practices and student success stories, we also could benefit from understanding why some programs, faculty, and students struggle. Challenges are certainly faced in remote counselor education and training, but it is likely that one or more programs have developed innovative concepts to surmount these obstacles. The 31 respondents were able to articulate many best practices to manage challenges and believed they were achieving the same learning objectives achieved by residential counseling students. Many faculty members, departmental chairs, and administrators believed that remote counselor education graduates are as successful as those attending residential programs, but this opinion is not universally shared. What is clear is that despite some reservations, a growing number of counselors are trained via a remote modality. It is time to embrace distance counselor education; learn from best practices, successes, and struggles; and continue to improve outcomes for the benefit of programs, the profession of counseling, and the consumers of the services our graduates provide.

 

Conflict of Interest and Funding Disclosure

The authors reported no conflict of interest or funding contributions for the development of this manuscript.

 

References

Allen, I. E., & Seaman, J. (2016). Online report card: Tracking online education in the United States. Babson Survey Research Group. Retrieved from https://onlinelearningsurvey.com/reports/onlinereportcard.pdf

Association for Counselor Education and Supervision Technology Interest Network. (2017). ACES guidelines for online learning in counselor education. Retrieved from https://www.acesonline.net/sites/default/files/Online%20Learning%20CES%20Guidelines%20May%202017%20(1).pdf

Bailey, M., Benitiz, M., Burton, W., Carey, K., Cunningham, A., Fraire, J., . . . Wheelan, B. (2011). Committee on measures of student success: A report to Secretary of Education Arne Duncan. U.S. Department of Education. Retrieved from https://www2.ed.gov/about/bdscomm/list/cmss-committee-report-final.pdf

Barnett, B. G., & Muse, I. D. (1993). Cohort groups in educational administration: Promises and challenges. Journal of School Leadership, 3, 400–415.

Beattie, I. R., & Thiele, M. (2016). Connecting in class? College class size and inequality in academic social capital. The Journal of Higher Education, 87, 332–362.

Bennett-Levy, J., Hawkins, R., Perry, H., Cromarty, P., & Mills, J. (2012). Online cognitive behavioural therapy training for therapists: Outcomes, acceptability, and impact of support: Online CBT training. Australian Psychologist, 47(3), 174–182. doi:10.1111/j.1742-9544.2012.00089.x

Benshoff, J. M., & Gibbons, M. M. (2011). Bringing life to e-learning: Incorporating a synchronous approach to online teaching in counselor education. The Professional Counselor, 1, 21–28. doi:10.15241/jmb.1.1.21

Brewer, R., & Movahedazarhouligh, S. (2018). Successful stories and conflicts: A literature review on the effectiveness of flipped learning in higher education. Journal of Computer Assisted Learning, 1–8. doi:10.1111/jcal.12250

Chapman, L., & Ludlow, L. (2010). Can downsizing college class sizes augment student outcomes? An investigation of the effects of class size on student learning. Journal of General Education, 59(2), 105–123. doi:10.5325/jgeneeduc.59.2.0105

The College Atlas. (2017). 41 facts about online students. Retrieved from https://www.collegeatlas.org/41-surprising-facts-about-online-students.html

Council for Accreditation of Counseling & Related Educational Programs. (2015). 2016 CACREP standards. Washington, DC: Author.

Council for Accreditation of Counseling & Related Educational Programs. (2017). Annual report 2016. Washington, DC: Author.

Cummings, S. M., Foels, L., & Chaffin, K. M. (2013). Comparative analysis of distance education and classroom-based formats for a clinical social work practice course. Social Work Education, 32, 68–80. doi:10.1080/02615479.2011.648179

Donche, V., De Maeyer, S., Coertjens, L., Van Daal, T., & Van Petegem, P. (2013). Differential use of learning strategies in first-year higher education: The impact of personality, academic motivation, and teaching strategies. British Journal of Educational Psychology, 83, 238–251. doi:10.1111/bjep.12016

Furlonger, B., & Gencic, E. (2014). Comparing satisfaction, life-stress, coping and academic performance of counselling students in on-campus and distance education learning environments. Australian Journal of Guidance and Counselling, 24, 76–89. doi:10.1017/jgc.2014.2

Hall, B. S., Nielsen, R. C., Nelson, J. R., & Buchholz, C. E. (2010). A humanistic framework for distance education. The Journal of Humanistic Counseling, 49, 45–57. doi:10.1002/j.2161-1939.2010.tb00086.x

Hickey, C., McAleer, S. J., & Khalili, D. (2015). E-learning and traditional approaches in psychotherapy education: Comparison. Archives of Psychiatry and Psychotherapy, 4, 48–52.

Hubbell, L., & Hubbell, K. (2010). When a college class becomes a mob: Coping with student cohorts. College Student Journal, 44, 340–353.

Jones, C. (2015). Openness, technologies, business models and austerity. Learning, Media and Technology, 40, 328–349. doi:10.1080/17439884.2015.1051307

Kaplan, D. M., Tarvydas, V. M., & Gladding, S. T. (2014). 20/20: A vision for the future of counseling: The new consensus definition of counseling. Journal of Counseling & Development, 92, 366–372. doi:10.1002/j.1556-6676.2014.00164.x

Kerlinger, F. N., & Lee, H. B. (1999). Foundations of behavioral research (4th ed.). Fort Worth, TX: Wadsworth.

Kops, W. J. (2014). Teaching compressed-format courses: Teacher-based best practices. Canadian Journal of University Continuing Education, 40, 1–18.

Lambie, G. W., Mullen, P. R., Swank, J. M., & Blount, A. (2018). The Counseling Competencies Scale: Validation and refinement. Measurement and Evaluation in Counseling and Development, 51, 1–15. doi:10.1080/07481756.2017.1358964

Maher, M. A. (2005). The evolving meaning and influence of cohort membership. Innovative Higher Education, 30(3), 195–211.

Meyer, J. M. (2015). Counseling self-efficacy: On-campus and distance education students. Rehabilitation Counseling Bulletin, 58(3), 165–172. doi:10.1177/0034355214537385

Milman, N. B., Posey, L., Pintz, C., Wright, K., & Zhou, P. (2015). Online master’s students’ perceptions of institutional supports and resources: Initial survey results. Online Learning, 19(4), 45–66.

Murdock, J. L., & Williams, A. M. (2011). Creating an online learning community: Is it possible? Innovative Higher Education, 36, 305–315. doi:10.1007/s10755-011-9188-6

Pemberton, C. L. A., & Akkary, R. K. (2010). A cohort, is a cohort, is a cohort . . . Or is it? Journal of Research on Leadership Education, 5(5), 179–208.

Renfro-Michel, E. L., O’Halloran, K. C., & Delaney, M. E. (2010). Using technology to enhance adult learning in the counselor education classroom. Adultspan Journal, 9, 14–25. doi:10.1002/j.2161-0029.2010.tb00068.x

Sells, J., Tan, A., Brogan, J., Dahlen, U., & Stupart, Y. (2012). Preparing international counselor educators through online distance learning. International Journal for the Advancement of Counselling, 34, 39–54. doi:10.1007/s10447-011-9126-4

Sibley, K., & Whitaker, R. (2015, March 16). Engaging faculty in online education. Educause Review. Retrieved from https://er.educause.edu/articles/2015/3/engaging-faculty-in-online-education

Trepal, H., Haberstroh, S., Duffey, T., & Evans, M. (2007). Considerations and strategies for teaching online counseling skills: Establishing relationships in cyberspace. Counselor Education and Supervision, 46(4), 266–279. doi:10.1002/j.1556-6978.2007.tb00031.x

U.S. Department of Education Office of Postsecondary Education Accreditation Division. (2012). Guidelines for preparing/reviewing petitions and compliance reports. Retrieved from https://www.asccc.org/sites/default/files/USDE%20_agency-guidelines.pdf

Watson, J. C. (2012). Online learning and the development of counseling self-efficacy beliefs. The Professional Counselor, 2, 143–151. doi:10.15241/jcw.2.2.143

Williams, V. (2017). Online readiness assessment. Penn State University. Retrieved from https://pennstate.qualtrics.com/jfe/form/SV_7QCNUPsyH9f012B

 

William H. Snow is an associate professor at Palo Alto University. Margaret R. Lamar is an assistant professor at Palo Alto University. J. Scott Hinkle, NCC, is Director of Professional Development at the National Board for Certified Counselors. Megan Speciale, NCC, is an assistant professor at Palo Alto University. Correspondence can be addressed to William Snow, 1791 Arastradero Road, Palo Alto, CA 94304, wsnow@paloaltou.edu.

Becoming a Gatekeeper: Recommendations for Preparing Doctoral Students in Counselor Education

Marisa C. Rapp, Steven J. Moody, Leslie A. Stewart

The Council for Accreditation of Counseling & Related Educational Programs (CACREP) standards call for doctoral preparation programs to graduate students who are competent in gatekeeping functions. Despite these standards, little is understood regarding the development and training of doctoral students in their roles as gatekeepers. We propose a call for further investigation into doctoral student gatekeeper development and training in gatekeeping practices. Additionally, we provide training and programmatic curriculum recommendations derived from current literature for counselor education programs. Finally, we discuss implications of gatekeeping training in counselor education along with future areas of research for the profession.

Keywords: gatekeeping, counselor education, doctoral students, programmatic curriculum, CACREP

 

Gatekeeping practices in counselor education are highly visible in current literature, as counselor impairment continues to be a significant concern for the mental health professions (Brown-Rice & Furr, 2015; Homrich, DeLorenzi, Bloom, & Godbee, 2014; Lumadue & Duffey, 1999; Rapisarda & Britton, 2007; Rust, Raskin, & Hill, 2013; Ziomek-Daigle & Christensen, 2010). V. A. Foster and McAdams (2009) found that counselor educators are frequently faced with counselors-in-training (CITs) whose professional performance fails to meet program standards. Although gatekeeping practices in counselor education have been cursorily examined over the past 40 years (Ziomek-Daigle & Christensen, 2010), more recent literature indicates a need to further address this topic (Brown-Rice & Furr, 2016; Burkholder, Hall, & Burkholder, 2014).

In the past two decades, researchers have examined the following aspects of gatekeeping: student selection; retention; remediation; policies and procedures; and experiences of faculty members, counseling students, and clinical supervisors (Brown-Rice & Furr, 2013, 2015, 2016; V. A. Foster & McAdams, 2009; Gaubatz & Vera, 2002; Homrich et al., 2014; Lumadue & Duffey, 1999; Parker et al., 2014; Rapisarda & Britton, 2007; Ziomek-Daigle & Christensen, 2010). Although the aforementioned areas of study are needed to address the complex facets of the gatekeeping process, there is a noticeable lack of research examining how counselor education programs are preparing and educating future faculty members to begin their role as gatekeepers.

Because doctoral degree programs in counselor education are intended to prepare graduates to work in a variety of roles (Council for Accreditation of Counseling & Related Educational Programs [CACREP], 2015), program faculty must train doctoral students in each of the roles and responsibilities expected of a future faculty member or supervisor. Authors of previous studies have examined constructs of identity, development, practice, and training in the various roles that doctoral students assume, including investigations into a doctoral student’s researcher identity (Lambie & Vaccaro, 2011), supervisor identity (Nelson, Oliver, & Capps, 2006), doctoral professional identity transition (Dollarhide, Gibson, & Moss, 2013), and co-teaching experiences (Baltrinic, Jencius, & McGlothlin, 2016). Studies investigating the various elements of these roles are both timely and necessary (Fernando, 2013; Lambie & Vaccaro, 2011; Nelson et al., 2006); yet, there is a dearth of research examining the complex development of emergent gatekeeper identity. In order to empower counseling programs in training the next generation of competent and ethical professional counselors, the development of doctoral students’ gatekeeping skills and identity must be more fully understood.

 

The Complexity of Gatekeeping in Counselor Education

Gatekeeping is defined as a process to determine suitability for entry into the counseling profession (Brown-Rice & Furr, 2015). When assessing this professional suitability, academic training programs and clinical supervisors actively evaluate CITs during their training as a means to safeguard the integrity of the profession and protect client welfare (Brear, Dorrian, & Luscri, 2008; Homrich et al., 2014). Evaluators who question a CIT’s clinical, academic, and dispositional fitness but fail to intervene with problematic behavior run the risk of endorsing a student who is not ready for the profession. This concept is referred to as gateslipping (Gaubatz & Vera, 2002). Brown-Rice and Furr (2014) found that consequences of gateslipping can impact client care, other CITs, and the entire counseling profession.

Gatekeeping for counselor educators and supervisors is understood as an especially demanding and complex responsibility (Brear & Dorrian, 2010). Potential complications include personal and professional confrontations (Kerl & Eichler, 2005), working through the emotional toll of dismissing a student (Gizara & Forrest, 2004), lack of preparation with facilitating difficult conversations (Jacobs et al., 2011), and fear of legal reprisal when assuming the role of gatekeeper (Homrich et al., 2014). Homrich (2009) found that although counselor educators feel comfortable in evaluating academic and clinical competencies, they often experience difficulty evaluating dispositional competencies that are nebulously and abstractly defined. To complicate the gatekeeping process further, counselor educators are often hesitant to engage in gatekeeping practices, as discerning developmentally appropriate CIT experiences from problematic behavior (Homrich et al., 2014) may be difficult at times. Thus, more clearly defined dispositional competencies and more thorough training in counselor development models may be necessary to support counselor educators’ self-efficacy in making gatekeeping decisions. The following section examines doctoral students in counselor education preparation programs and their involvement in gatekeeping responsibilities and practices.

 

Doctoral Students’ Role in Gatekeeping

Doctoral students pursuing counselor education and supervision degrees are frequently assigned the responsibilities of supervisor and co-instructor of master’s-level students. Consequently, doctoral students serve in an evaluative role (Dollarhide et al., 2013; Fernando, 2013) in which they often hold specific power and authority (Brown-Rice & Furr, 2015). The power and positional authority inherent in the roles of supervisor (Bernard & Goodyear, 2014) and instructor afford doctoral students ample opportunity to appraise CITs’ development and professional disposition during classroom and supervision interactions (Scarborough, Bernard, & Morse, 2006). Doctoral students frequently consult with faculty across the many tasks, roles, and responsibilities they are expected to carry out (Dollarhide et al., 2013). However, relying solely on consultation while carrying out gatekeeping responsibilities, rather than acquiring formal training, can present considerable risks and complications. The gatekeeping process is complex and leaves room for error in following appropriate protocol, understanding CIT behavior and development, and supporting CITs, and it creates the risk of endorsing CITs whose problematic behavior has been overlooked.

Despite the importance of doctoral student education in the counseling profession and a substantial body of research on gatekeeping over the past two decades (Brown-Rice & Furr, 2013, 2015, 2016; V. A. Foster & McAdams, 2009; Gaubatz & Vera, 2002; Lumadue & Duffey, 1999; Parker et al., 2014; Rapisarda & Britton, 2007; Ziomek-Daigle & Christensen, 2010), there is an absence of professional discourse examining the identity, development, practice, and training of doctoral students for their role as gatekeepers. No counseling literature to date has explored how counselor education programs are supporting doctoral students’ transition into the role of gatekeeper, despite the latest accreditation standards calling for doctoral preparation programs to graduate students who are competent in gatekeeping functions relevant to teaching and clinical supervision (CACREP, 2015, Standard 6.B). This lack of specific literature is particularly problematic, as the process of gatekeeping can be difficult even for faculty members. It is reasonable to assume that if faculty members struggle to navigate the responsibilities of a gatekeeper, then less experienced doctoral students would struggle in this role as well. Furthermore, most incoming doctoral students have not had an opportunity to formally engage in gatekeeping practices in academic settings as evaluators (DeDiego & Burgin, 2016).

Although doctoral students have been introduced to the concept of gatekeeping as master’s-level students (e.g., gatekeeping policies), many counselors do not retain or understand gatekeeping information (V. A. Foster & McAdams, 2009; Parker et al., 2014; Rust et al., 2013). These research findings were further examined through an exploratory study in August 2016, in which the first two authors of this article assessed beginning doctoral students’ gatekeeping knowledge and self-efficacy prior to doctoral training or formal curricula. Areas of knowledge assessed included general information on the function of gatekeeping, standard practices, and program-specific policies and procedures. Preliminary findings from six participants indicated that incoming doctoral students lacked understanding of their role in gatekeeping. This supports existing research (V. A. Foster & McAdams, 2009; Parker et al., 2014; Rust et al., 2013) and aligns with DeDiego and Burgin’s (2016) assertion that doctoral students are often unsure of what the role of gatekeeper “even means, let alone how to carry it out” (p. 182). Consequently, attention must be given to preparing doctoral students for their gatekeeping role to meet CACREP standards and, most importantly, to prepare them to gatekeep effectively in an effort to prevent gateslippage.

DeDiego and Burgin (2016) recommended that counselor education programs support doctoral students’ development through specific programmatic training. Despite the established importance of such training (Brear & Dorrian, 2010), no corresponding guidelines exist for its content. To address this gap, we provide recommendations for content areas that may help doctoral students become acquainted with the complex role of gatekeeper. We derived our recommendations from a thorough review of the professional literature; they reflect current trends related to gatekeeping within the counseling profession, findings from studies that identify the information deemed most important in the realm of gatekeeping, and the educational and professional standards that guide best practices for counselor educators.

 

Recommendations

The recommendations consist of general areas of knowledge that should accompany program-specific material when introducing the gatekeeping role. Providing doctoral students with program-specific policies and procedures related to gatekeeping practices, such as remediation and dismissal procedures, is of utmost importance. This information can be delivered through a variety of methods, such as orientation, gatekeeping-specific training, coursework, and advising. We view these content areas as foundational in acquainting doctoral students with the role of gatekeeper. We included four general content areas pertaining to gatekeeping practices and the role of gatekeeper: the current variation in language used by the counselor education community; ethics related to gatekeeping; cultural considerations; and legal and due process considerations. Each of these recommended content areas is briefly discussed below, with relevant literature supporting the importance of its inclusion.

 

Adopted Language

Current terminology in the field of counselor education describing CITs who struggle to meet professional standards and expectations is broad, and counselor educators have not adopted a universal language (Brown-Rice & Furr, 2015). Consequently, a plethora of terms and definitions exists in the literature describing CITs who are struggling to meet clinical, academic, and dispositional competencies. As described earlier, this lack of consensus regarding gatekeeping and remediation language may contribute to the lack of clarity that many counselor educators perceive as a gatekeeping challenge. Terms appearing in the gatekeeping literature to describe students of concern include deficient trainees (Gaubatz & Vera, 2002), problems of professional competence (Elman & Forrest, 2007; Rust et al., 2013), and impaired, unsuitable, unqualified, and incompetent (J. M. Foster, Leppma, & Hutchinson, 2014), with definitions of these terms varying. Duba, Paez, and Kindsvatter (2010) defined counselor impairment as any “emotional, physical, or educational condition that interferes with the quality of one’s professional performance” (p. 155) and defined its counterpart, counselor competency, as an individual demonstrating both clinical skills and psychological health. It is important to emphasize the potential complications and implications associated with the term impairment, which has a close association with disability services and thus carries a much different meaning for the student, supervisee, or colleague (McAdams & Foster, 2007).

Introducing these terms to doctoral students not only familiarizes them with the definitions, history, and relevance of terms used in the counseling community, but also provides a foundation from which to begin conceptualizing the difference between clinical “impairment” and emotional distress or developmentally appropriate academic struggle. In upholding gatekeeping responsibilities, one must understand what differentiates emotional distress from impairment in order to distinguish the two in professionals and students. In further support of this assertion, Rust et al. (2013) stated that counseling programs must be able to distinguish between problems of professional competence and problematic behavior related to normal CIT development. Including a review of the relevant terms in the counseling literature in a program’s training will allow doctoral students to begin to understand and contextualize the language relevant to their new role as gatekeepers.

Although it is essential to educate doctoral students on language common to the counseling community, familiarity with the language adopted by the department and institution within which they serve as gatekeepers is equally vital to training well-informed gatekeepers (Brear & Dorrian, 2010). A clear understanding of the terminology surrounding gatekeeping ensures that doctoral students and faculty can maintain an open and consistent dialogue when enforcing gatekeeping practices. Homrich (2009) described consistent implementation of gatekeeping protocol as a best practice for counseling programs and faculty; additional best practices include establishing expectations and communicating them clearly and widely. According to Homrich’s (2009) recommendations, a common language is needed within the department to implement these practices successfully and to improve and sustain gatekeeping procedures. After doctoral students are situated in the current climate of gatekeeping-related terms and language, an exploration of professional and educational ethics can ensue.

 

Ethics Related to Gatekeeping

Professional and ethical mandates should be identified and discussed to familiarize doctoral students with the ethical codes they are expected to uphold. Three sources that guide ethical behavior and educational standards for counselor educators, and that must be integrated into curricula and training, are the American Counseling Association Code of Ethics (2014), the 2016 CACREP Standards (2015), and the National Board for Certified Counselors Code of Ethics (2012). Doctoral preparation programs should draw specific attention to codes related to the function of gatekeeping. These ethical codes and professional standards can be introduced in an orientation and discussed in more depth during advising and formal courses.

Doctoral preparation programs have flexibility in when they introduce standards and ethical codes during doctoral students’ academic journey. We recommend that relevant standards and ethics be introduced early and revisited often during doctoral training, specifically in terms of gatekeeping. Doctoral students should have prior knowledge of the ethical codes before engaging in gatekeeping or remedial functions with CITs. Moreover, if doctoral students understand the educational standards required of them, they can strive to meet specific standards in a personalized, meaningful manner during their training. Referencing the CACREP standards addressed in a course syllabus is required for accreditation and helpful for students; yet, educational standards should also be incorporated into training to foster deeper meaning and applicability. As doctoral students are being trained to take leadership positions in the counselor education field, a more thorough understanding of educational principles and ethical codes is vital, particularly in the area of gatekeeping. Faculty members leading doctoral courses are encouraged to speak to standards related to gatekeeping throughout the duration of a course. Faculty who intentionally dialogue about how these standards are being met also give doctoral students opportunities to provide informal feedback on whether they believe they understand the multifaceted role of gatekeeper. During the review of codes and standards, focused attention should be given to “cultural and developmental sensitivity in interpreting and applying codes and standards” in gatekeeping-related situations (Letourneau, 2016, p. 207). One option for attending to such sensitivity is the introduction of a case study in which doctoral students participate in open dialogue facilitated by a trainer. The inclusion of a case study aims to engage doctoral students in critical thinking about the cultural and diversity implications of gatekeeping practices. The following section draws further attention to the importance of cultural awareness in gatekeeping practices and responsibilities.

 

Cultural Considerations

It is vital for doctoral students to have an understanding and awareness of the cultural sensitivity that is required of them in making sound gatekeeping-related decisions. Not only do ethical codes and educational mandates expect counselor educators to possess a level of multicultural competency (American Counseling Association, 2014; CACREP, 2015), but recent literature draws attention to cultural considerations in the gatekeeping process (Goodrich & Shin, 2013; Letourneau, 2016). These cultural considerations provide doctoral students with valuable information on conceptualizing and interacting with gatekeeping practices in a more culturally sensitive manner.

Letourneau (2016) described the critical nature of taking into account students’ cultural influences and differences when assessing their fitness for the profession, while Goodrich and Shin (2013) called attention to “how cultural values and norms may intersect” (p. 43) with the appraisal of CIT counseling competencies. For example, when assessing a CIT’s behavior or performance, evaluators may have difficulty determining whether the identified behavior is truly problematic or merely deviates from the cultural norm (Letourneau, 2016). This consideration is essential because culture, diversity, and differing values and beliefs can influence how perceived problematic behaviors emerge and, consequently, how observed deficiencies in performance are interpreted (Goodrich & Shin, 2013; Letourneau, 2016). Examining the cultural values of the counseling profession, counselor education programs, and the community in which a program is embedded can shed light on which behaviors, attitudes, and beliefs are valued and considered normative. This examination can prompt critical awareness of how CITs who differ from cultural norms may be assessed and evaluated differently, and even unfairly.

Jacobs et al. (2011) described insufficient support for evaluators in facilitating difficult discussions about gatekeeping-related issues, particularly when those issues involve attention to diversity. Doctoral students must be given ample opportunity to identify cultural facets of case examples and talk through their course of action as a means to raise awareness and practice looking through a multicultural lens in gatekeeping-related decisions and processes. Of equal importance is familiarity with legal and due process considerations, which are addressed in the section below.

 

Legal and Due Process Considerations

Three governing regulations are often discussed in the literature, although how faculty members actually come to understand them is rarely explained: the Family Educational Rights and Privacy Act (FERPA) of 1974, the Americans with Disabilities Act (ADA) of 1990, and a student’s rights and due process policy within an institution. Presenting these three concepts and their implications for the gatekeeping process is warranted, as doctoral students can be assumed to understand them only from a student perspective. Although FERPA, the ADA, and due process may be covered in new faculty orientation, how these regulations interface with gatekeeping and remediation is generally not reviewed during standard university orientations. It is recommended that training and curricula include both general knowledge and institution-specific information related to the regulations. Institution-specific material can include university notifications of rights; handbook material directly addressing student rights; remediation policies and procedures; and resources and the specific location of campus services such as the disability office. Inclusion of general and program-specific information will help future faculty members develop a well-grounded understanding of how legal considerations apply to students and inform their gatekeeping practices. Lastly, doctoral students should be informed that the regulations detailed below may limit their access to information because of master’s-level student privacy protections. To begin, doctoral students should intimately understand FERPA and its application to the CITs they often supervise, teach, and evaluate.

FERPA. General information may consist of the history and evolution of FERPA in higher education and its purpose in protecting students’ confidentiality in relation to educational records. Doctoral students must be introduced to the protocol for ensuring confidentiality in program files, which include communication about CIT performance and may be directly related to gatekeeping issues. Doctoral students must recognize that, as evaluators communicating assessments of CIT fitness, including dispositional competencies, they must abide by FERPA regulations, because these assessments are considered educational records.

Educational programs often utilize off-site practicum and internship programs that are independent from the respective university (Gilfoyle, 2008), and this is indeed the case with many CACREP-accredited counselor training programs. Doctoral students must have an understanding of the protocols in place to communicate with site supervisors who are unaffiliated with the university, such as student written-consent forms that are a routine part of paperwork for off-site training placement (Gilfoyle, 2008). Although doctoral students may not be directly corresponding with off-site evaluators, their training should consist of familiarizing them with FERPA regulations that address the disclosure of student records in order to prepare them in serving CITs in a faculty capacity. Understanding how to communicate with entities outside of the university is crucial in the event that they are acting as university supervisors and correspondence is necessary for gatekeeping-related concerns. An additional governmental regulation they are expected to be familiar and interact with is the ADA.

The ADA. Introducing doctoral students to the ADA serves multiple functions. First, similar to FERPA, it is helpful for doctoral students to be grounded in the history of how the ADA developed and its purpose in protecting students from discrimination. Second, general disability service information, such as the physical location of the office on their campus, contact information for disability representatives, and protocols for referring a student, gives doctoral students the knowledge they need should a CIT inquire about accommodations. If a CIT were to inquire about ADA services during a class a doctoral student co-teaches or during a supervision session, it would be appropriate for the doctoral student to provide that information rather than keep the CIT waiting until after consultation with a faculty member. Lacking general information relevant to student services may place the doctoral student in a vulnerable position in which the supervisory alliance is undermined, because the doctoral student serving in an evaluative role is not equipped to assist the CIT. Finally, presentation of the ADA and its implications for gatekeeping will inform students of the protocols that are necessary when evaluating a CIT who has a record of impairment. For example, if a CIT has registered a disability through the university’s ADA office, appropriate accommodations must be made and the disability must be considered during the gatekeeping process.

Due Process. The introduction of students’ fundamental right to basic fairness is essential, as many doctoral students may not understand this concept outside of a student perspective because they lack experience in instructor and supervisor positions. Basic fairness can be illustrated for doctoral students by highlighting components of a counselor training program that should be in place to honor students’ right to fair procedures and protect against arbitrary decision-making. These include, but are not limited to, access to program requirements, expectations, policies, and practices; the opportunity to respond and be heard at a meaningful time and in a meaningful way; decisions by faculty members, advisors, or programs that are supported by substantial evidence; the option to appeal a decision and to be notified of judicial proceedings; and realistic time to complete remediation (Gilfoyle, 2008; Homrich, 2009). McAdams and Foster (2007) developed a framework addressing CIT due process and fundamental fairness considerations in remediation procedures to help guide counselor educators’ implementation of remediation. It is recommended that these guidelines (McAdams & Foster, 2007) be introduced in doctoral student training to generate discussion and be included as a resource for future reference. In educating doctoral students about due process through a faculty lens, formal procedures to address student complaints, concerns, and appeals also should be included in training.

 

Implications for Counselor Education

Doctoral preparation programs are charged with graduating students who are prepared and competent for the various roles they will assume as counselor educators and clinical supervisors. The lack of professional literature exploring the development and training of gatekeepers indicates a clear call to the counseling profession to investigate how counselor educators emerge into their role as gatekeepers. This call is fueled by the need to understand how doctoral preparation programs can support students and ensure competency upon graduation. Generating dialogue related to doctoral student gatekeeper development may also continue the conversation about standardization of gatekeeping protocol and keep the need for more universal gatekeeping nomenclature at the forefront. Continued emphasis on a common gatekeeping language will strengthen gatekeeping protocols and practices and, in turn, provide an opportunity for training developments that have the potential to be standardized across programs.

The recommended content areas we have offered are intended to prepare doctoral students for their role of gatekeeper and aim to enhance the transition into faculty positions. These recommendations may be limited in their generalizability because gatekeeping practices vary across programs and department cultures, indicating that information and trainings will need to be tailored individually to fit the expectations of each counseling department. These differences hinder the ability to create a standardized training that could be utilized by all departments. As gatekeeping practices continue to receive research attention and the call for more universal language and standardization is answered, standardization of training can be revisited. Nonetheless, general recommendations in training content can serve as groundwork for programs to ensure that students are receiving a foundation of basic knowledge that will allow doctoral students to feel more confident in their role of gatekeeper. The recommended content areas also serve to help incoming doctoral students begin to conceptualize and see through an academic—rather than only a clinical—lens.

Implementation and delivery of recommended content areas may be applied in a flexible manner that meets doctoral preparation programs’ specific needs. The recommendations offered in this article can be applied to enhance existing curricula, infused throughout coursework, or disseminated in a gatekeeping training or general orientation. Faculty creating doctoral curricula should be cognizant of when doctoral students are receiving foundational gatekeeping information. If doctoral students are expected to have interaction with and evaluative power over master’s-level students, recommended gatekeeping content areas should be introduced prior to this interaction.

There are several avenues for future research, as the proposed content areas are rich in potential for scholarly pursuit. The first is a call to the profession for investigations examining training efforts and their effectiveness in preparing future faculty members for the multifaceted role of gatekeeper. The complexity and importance of gatekeeping responsibilities and identity development may be one reason for the lack of studies to date on this role. Nevertheless, both qualitative and quantitative inquiry could lend insight into gaps in training that lead to potential gateslippage. Quantitative research would be helpful in examining how many programs currently offer such trainings and what those trainings contain. Given the number of CACREP-accredited doctoral programs in the United States, a large sample is feasible, allowing researchers to explore trends and capture a fuller picture. Qualitative analysis would expand and deepen understanding of how faculty and doctoral students have been trained and of their processes and experiences in becoming gatekeepers.

In conclusion, doctoral preparation programs can be intentional about infusing the aforementioned content areas into doctoral curricula to meet CACREP standards and prepare doctoral students for the complex role of gatekeeper. The counselor education and supervision literature indicates that more focused attention on training could improve gatekeeping knowledge among doctoral students. Training recommendations derived from the existing literature can serve as guidelines to enhance program curricula and can be investigated in future research. Given the scarcity of empirical studies examining gatekeeping training and gatekeeper development, both quantitative and qualitative studies would be beneficial to better understand the role of gatekeeper and strengthen the overall professional identity of counselor educators and clinical supervisors.

 

Conflict of Interest and Funding Disclosure

The authors reported no conflict of interest or funding contributions for the development of this manuscript.

 

References

American Counseling Association. (2014). ACA 2014 Code of ethics. Alexandria, VA: Author.

Baltrinic, E. R., Jencius, M., & McGlothlin, J. (2016). Coteaching in counselor education: Preparing doctoral students for future teaching. Counselor Education and Supervision, 55, 31–45. doi:10.1002/ceas.12031

Bernard, J. M., & Goodyear, R. K. (2014). Fundamentals of clinical supervision (5th ed.). Boston, MA: Pearson.

Brear, P., & Dorrian, J. (2010). Gatekeeping or gate slippage? A national survey of counseling educators in Australian undergraduate and postgraduate academic training programs. Training and Education in Professional Psychology, 4, 264–273. doi:10.1037/a0020714

Brear, P., Dorrian, J., & Luscri, G. (2008). Preparing our future counseling professionals: Gatekeeping and the implications for research. Counselling and Psychotherapy Research, 8, 93–101. doi:10.1080/14733140802007855

Brown-Rice, K. A., & Furr, S. (2013). Preservice counselors’ knowledge of classmates’ problems of professional competency. Journal of Counseling & Development, 91, 224–233. doi:10.1002/j.1556-6676.2013.00089.x

Brown-Rice, K., & Furr, S. (2014). Lifting the empathy veil: Engaging in competent gatekeeping. In Ideas and research you can use: VISTAS 2012. Retrieved from https://www.counseling.org/docs/default-source/vistas/article_11.pdf?sfvrsn=12

Brown-Rice, K., & Furr, S. (2015). Gatekeeping ourselves: Counselor educators’ knowledge of colleagues’ problematic behaviors. Counselor Education and Supervision, 54, 176–188. doi:10.1002/ceas.12012

Brown-Rice, K., & Furr, S. (2016). Counselor educators and students with problems of professional competence: A survey and discussion. The Professional Counselor, 6, 134–146. doi:10.15241/kbr.6.2.134

Burkholder, D., Hall, S. F., & Burkholder, J. (2014). Ward v. Wilbanks: Counselor educators respond. Counselor Education and Supervision, 53, 267–283.

Council for Accreditation of Counseling & Related Educational Programs. (2015). 2016 CACREP Standards. Retrieved from https://www.cacrep.org/for-programs/2016-cacrep-standards/

DeDiego, A. C., & Burgin, E. C. (2016). The doctoral student as university supervisor: Challenges in fulfilling the gatekeeping role. Journal of Counselor Leadership and Advocacy, 3, 173–183.
doi:10.1080/2326716X.2016.1187096

Dollarhide, C. T., Gibson, D. M., & Moss, J. M. (2013). Professional identity development of counselor education doctoral students. Counselor Education and Supervision, 52, 137–150.
doi:10.1002/j.1556-6978.2013.00034.x

Duba, J. D., Paez, S. B., & Kindsvatter, A. (2010). Criteria of nonacademic characteristics used to evaluate and retain community counseling students. Journal of Counseling & Development, 88(2), 154–162. doi:10.1002/j.1556-6678.2010.tb00004.x

Elman, N. S., & Forrest, L. (2007). From trainee impairment to professional competence problems: Seeking new terminology that facilitates effective action. Professional Psychology: Research and Practice, 38, 501–509.

Fernando, D. M. (2013). Supervision by doctoral students: A study of supervisee satisfaction and self-efficacy, and comparison with faculty supervision outcomes. The Clinical Supervisor, 32, 1–14.
doi:10.1080/07325223.2013.778673

Foster, J. M., Leppma, M., & Hutchinson, T. S. (2014). Students’ perspectives on gatekeeping in counselor education: A case study. Counselor Education and Supervision, 53, 190–203.
doi:10.1002/j.1556-6978.2014.00057.x

Foster, V. A., & McAdams, C. R., III. (2009). A framework for creating a climate of transparency for professional performance assessment: Fostering student investment in gatekeeping. Counselor Education and Supervision, 48, 271–284. doi:10.1002/j.1556-6978.2009.tb00080.x

Gaubatz, M. D., & Vera, E. M. (2002). Do formalized gatekeeping procedures increase programs’ follow-up with deficient trainees? Counselor Education and Supervision, 41, 294–305.
doi:10.1002/j.1556-6978.2002.tb01292.x

Gilfoyle, N. (2008). The legal exosystem: Risk management in addressing student competence problems in professional psychology training. Training and Education in Professional Psychology, 2, 202–209. doi:10.1037/1931-3918.2.4.202

Gizara, S. S., & Forrest, L. (2004). Supervisors’ experiences of trainee impairment and incompetence at APA-accredited internship sites. Professional Psychology: Research and Practice, 35, 131–140.
doi:10.1037/0735-7028.35.2.131

Goodrich, K. M., & Shin, R. Q. (2013). A culturally responsive intervention for addressing problematic behaviors in counseling students. Counselor Education and Supervision, 52, 43–55.
doi:10.1002/j.1556-6978.2013.00027.x

Homrich, A. M. (2009). Gatekeeping for personal and professional competence in graduate counseling programs. Counseling and Human Development, 41(7), 1–23.

Homrich, A. M., DeLorenzi, L. D., Bloom, Z. D., & Godbee, B. (2014). Making the case for standards of conduct in clinical training. Counselor Education and Supervision, 53, 126–144. doi:10.1002/j.1556-6978.2014.00053.x

Jacobs, S. C., Huprich, S. K., Grus, C. L., Cage, E. A., Elman, N. S., Forrest, L., . . . Kaslow, N. J. (2011). Trainees with professional competency problems: Preparing trainers for difficult but necessary conversations. Training and Education in Professional Psychology, 5(3), 175–184. doi:10.1037/a0024656

Kerl, S., & Eichler, M. (2005). The loss of innocence: Emotional costs to serving as gatekeepers to the counseling profession. Journal of Creativity in Mental Health, 1(3–4), 71–88. doi:10.1300/J456v01n03_05

Lambie, G. W., & Vaccaro, N. (2011). Doctoral counselor education students’ levels of research self-efficacy, perceptions of the research training environment, and interest in research. Counselor Education and Supervision, 50, 243–258. doi:10.1002/j.1556-6978.2011.tb00122.x

Letourneau, J. L. H. (2016). A decision-making model for addressing problematic behaviors in counseling students. Counseling and Values, 61, 206–222. doi:10.1002/cvj.12038

Lumadue, C. A., & Duffey, T. H. (1999). The role of graduate programs as gatekeepers: A model of evaluating student counselor competence. Counselor Education and Supervision, 39, 101–109. doi:10.1002/j.1556-6978.1999.tb01221.x

McAdams, C. R., III, & Foster, V. A. (2007). A guide to just and fair remediation of counseling students with professional performance deficiencies. Counselor Education and Supervision, 47, 2–13. doi:10.1002/j.1556-6978.2007.tb00034.x

National Board for Certified Counselors. (2012). Code of ethics. Greensboro, NC: Author.

Nelson, K. W., Oliver, M., & Capps, F. (2006). Becoming a supervisor: Doctoral student perceptions of the training experience. Counselor Education and Supervision, 46, 17–31. doi:10.1002/j.1556-6978.2006.tb00009.x

Parker, L. K., Chang, C. Y., Corthell, K. K., Walsh, M. E., Brack, G., & Grubbs, N. K. (2014). A grounded theory of counseling students who report problematic peers. Counselor Education and Supervision, 53, 111–125. doi:10.1002/j.1556-6978.2014.00052.x

Rapisarda, C. A., & Britton, P. J. (2007). Sanctioned supervision: Voices from the experts. Journal of Mental Health Counseling, 29, 81–92. doi:10.17744/mehc.29.1.6tcdb7yga7becwmf

Rust, J. P., Raskin, J. D., & Hill, M. S. (2013). Problems of professional competence among counselor trainees: Programmatic issues and guidelines. Counselor Education and Supervision, 52, 30–42.
doi:10.1002/j.1556-6978.2013.00026.x

Scarborough, J. L., Bernard, J. M., & Morse, R. E. (2006). Boundary considerations between doctoral students and master’s students. Counseling and Values, 51, 53–65. doi:10.1002/j.2161-007X.2006.tb00065.x

Ziomek-Daigle, J., & Christensen, T. M. (2010). An emergent theory of gatekeeping practices in counselor education. Journal of Counseling & Development, 88, 407–415. doi:10.1002/j.1556-6678.2010.tb00040.x

 

Marisa C. Rapp, NCC, is an assistant professor at the University of Wisconsin–Parkside. Steven J. Moody, NCC, is an associate professor at Idaho State University. Leslie A. Stewart is an assistant professor at Idaho State University. Correspondence can be addressed to Marisa Rapp, 264 Molinaro Hall, 900 Wood Rd., Kenosha, WI 53144-2000, rapp@uwp.edu.

The Research Identity Scale: Psychometric Analyses and Scale Refinement

Maribeth F. Jorgensen, William E. Schweinle

The 68-item Research Identity Scale (RIS) was informed through qualitative exploration of research identity development in master’s-level counseling students and practitioners. Classical psychometric analyses revealed the items had strong validity and reliability and a single factor. A one-parameter Rasch analysis and item review was used to reduce the RIS to 21 items. The RIS offers counselor education programs the opportunity to promote and quantitatively assess research-related learning in counseling students.

Keywords: Research Identity Scale, research identity, research identity development, counselor education, counseling students

With increased accountability and training standards, professionals as well as professional training programs have to provide outcomes data (Gladding & Newsome, 2010). Traditionally, programs have assessed student learning through outcomes measures such as grade point averages, comprehensive exam scores, and state or national licensure exam scores. Because of the goals of various learning processes, it may be important to consider how to measure learning in different ways (e.g., change in behavior, attitude, identity) and specific to the various dimensions of professional counselor identity (e.g., researcher, advocate, supervisor, consultant). Previous research has focused on understanding how measures of research self-efficacy (Phillips & Russell, 1994) and research interest (Kahn & Scott, 1997) allow for an objective assessment of research-related learning in psychology and social work programs. The present research adds to previous literature by offering information about the development and applications of the Research Identity Scale (RIS), which may provide counseling programs with another approach to measure student learning.

Student Learning Outcomes

When deciding how to measure the outcomes of student learning, it is important that programs start with defining the student learning they want to take place (Warden & Benshoff, 2012). Student learning outcomes focus on intellectual and emotional growth in students as a result of what takes place during their training program (Hernon & Dugan, 2004). Student learning outcomes are often guided by the accreditation standards of a particular professional field. Within the field of counselor education, the Council for Accreditation of Counseling & Related Educational Programs (CACREP) is the accrediting agency. CACREP promotes quality training by defining learning standards and requiring programs to provide evidence of their effectiveness in meeting those standards. In relation to research, the 2016 CACREP standards require research to be a part of professional counselor identity development at both the entry level (e.g., master’s level) and doctoral level. The CACREP research standards emphasize the need for counselors-in-training to learn the following:

The importance of research in advancing the counseling profession, including how to critique research to inform counseling practice; identification of evidence-based counseling practices; needs assessments; development of outcome measures for counseling programs; evaluation of counseling interventions and programs; qualitative, quantitative, and mixed research methods; designs in research and program evaluation; statistical methods used in conducting research and program evaluation; analysis and use of data in counseling; ethically and culturally relevant strategies for conducting, interpreting, and reporting results of research and/or program evaluation. (CACREP, 2016, p. 14)

These CACREP standards not only suggest that counselor development needs to include curriculum that focuses on and integrates research, but also identify a possible need to have measurement tools that specifically assess research-related learning (growth).

Research Learning Outcomes Measures

The Self-Efficacy in Research Measure (SERM) was designed by Phillips and Russell (1994) to measure research self-efficacy, which is similar to the construct of research identity. The SERM is a 33-item scale with four subscales: practical research skills, quantitative and computer skills, research design skills, and writing skills. The scale is internally consistent (α = .96), and scores correlate strongly with related variables such as the research training environment and research productivity. The SERM has been adapted for use in psychology (Kahn & Scott, 1997) and social work programs (Holden, Barker, Meenaghan, & Rosenberg, 1999).

Similarly, the Research Self-Efficacy Scale (RSES) developed by Holden and colleagues (1999) draws on aspects of the SERM (Phillips & Russell, 1994) but includes only nine items to measure changes in research self-efficacy as an outcome of research curriculum in a social work program. The scale has excellent internal consistency (α = .94), and differences between pre- and post-tests were statistically significant. Investigators have recognized the value of this scale and have applied it to measure the effectiveness of research courses in social work training programs (Unrau & Beck, 2004; Unrau & Grinnell, 2005).

Unrau and Beck (2004) reported that social work students gained confidence in research when they received instruction in research methodology. Students gained the most from activities outside their research courses, such as participating in research with faculty members. Following up, Unrau and Grinnell (2005) administered the scale prior to the start of the semester and again at the end of the semester to measure change in social work students’ confidence in performing research tasks. Overall, social work students varied greatly in their confidence before taking research courses and made gains throughout the semester. Unrau and Grinnell stressed that their results demonstrate the need for pre- and post-tests to better gauge how curriculum shapes the way students experience research.

Previous literature supports the use of scales such as the SERM and RSES to measure the effectiveness of research-related curricula (Holden et al., 1999; Kahn & Scott, 1997; Unrau & Beck, 2004; Unrau & Grinnell, 2005). These findings also suggest the need to continue exploring the research dimension of professional identity. It seems particularly important to measure concepts such as research self-efficacy, research interest, and research productivity, all of which are a part of research identity (Jorgensen & Duncan, 2015a, 2015b).

Research Identity as a Learning Outcome

The concept of research identity (RI) has received minimal attention (Jorgensen & Duncan, 2015a, 2015b; Reisetter et al., 2004). Reisetter and colleagues (2004) described RI as a mental and emotional connection with research. Jorgensen and Duncan (2015a) described RI as the magnitude and quality of one’s relationship with research; the allocation of research within a broader professional identity; and a developmental process that occurs in stages. Scholars have focused on qualitatively exploring the construct of RI, which may offer guidance on how to facilitate and examine RI at the program level (Jorgensen & Duncan, 2015a, 2015b; Reisetter et al., 2004). Also, the 2016 CACREP standards include language (e.g., knowledge of evidence-based practices, analysis and use of data in counseling) that favors curriculum promoting RI. Although previous researchers have given the field foundational knowledge of RI (Jorgensen & Duncan, 2015a, 2015b; Reisetter et al., 2004), RI has not yet been explored quantitatively or examined as a possible measure of student learning. The first author developed the RIS with the aim of assessing RI through a quantitative lens and augmenting traditional learning outcomes measures such as grades, grade point averages, and standardized test scores. There were three purposes for the current study: (a) to develop the RIS; (b) to examine the psychometric properties of the RIS from a classical testing approach; and (c) to refine the items through analyses based on item response theory (Nunnally & Bernstein, 1994). Two research questions guided this study: (a) What are the psychometric properties of the RIS from a classical testing approach? and (b) What items remain after the application of an item response analysis?

Method

Participants

The participants consisted of a convenience sample of 170 undergraduate college students at a Pacific Northwest university. Sampling undergraduate students is a common practice when initially testing scale psychometric properties and employing item response analysis (Embretson & Reise, 2000; Heppner, Wampold, Owen, Thompson, & Wang, 2016). The mean age of the sample was 23.1 years (SD = 6.16) with 49 males (29%), 118 females (69%), and 3 (2%) who did not report gender. The racial identity composition of the participants was mostly homogeneous: 112 identified as White (not Hispanic); one identified as American Indian or Alaska Native; 10 identified as Asian; three identified as Black or African American; eight identified as multiracial; 21 identified as Hispanic; three identified as “other”; and seven preferred not to answer.

Instruments

There were three instruments used in this study: a demographic questionnaire, the RSES, and the RIS.

Demographics questionnaire. Participants were asked to complete a demographic sheet that included five questions about age, gender, major, race, and current level of education; these identifiers did not pose a risk to participants’ confidentiality. All information was stored in the Qualtrics database, which was password protected and accessible only to the primary investigator.

The RSES. The RSES was developed by Holden et al. (1999) to measure the effectiveness of research education in social work training programs. The RSES has nine items that assess respondents’ level of confidence with various research activities. The items are answered on a 0–100 scale, with 0 indicating cannot do at all, 50 indicating moderately certain I can do, and 100 indicating certainly can do. The internal consistency of the scale is .94 at both pre- and post-measures. Holden and colleagues reported using effect size estimates to assess construct validity but did not report the estimates themselves, so caution is warranted when assuming this form of validity.

RIS. The initial phase of this research involved the first author developing the 68 items on the RIS (contact first author for access) based on data from her qualitative work about research identity (Jorgensen & Duncan, 2015a). The themes from her qualitative research informed the development of items on the scale (Jorgensen & Duncan, 2015a). Rowan and Wulff (2007) have suggested that using qualitative methods to inform scale development is appropriate, sufficient, and promotes high quality instrument construction.

The first step in developing the RIS items involved the first author analyzing the themes that surfaced during interviews with participants in her qualitative work. This process helped inform the items that could be used to quantitatively measure RI. For example, one theme was Internal Facilitators. Jorgensen and Duncan (2015a) reported that, “participants explained the code of internal facilitators as self-motivation, time management, research self-efficacy, innate traits and thinking styles, interest, curiosity, enjoyment in the research process, willingness to take risks, being open-minded, and future goals” (p. 24). Examples of scale items operationalized from the theme Internal Facilitators include: 1) I am internally motivated to be involved with research on some level; 2) I am willing to take risks around research; 3) Research will help me meet future goals; and 4) I am a reflective thinker. The first author used that same process when operationalizing each of the qualitative themes into items on the RIS. There were eight themes of RI development (Jorgensen & Duncan, 2015a). Overall, the number of items per theme was proportionate to the strength of the theme, as determined by how often it was coded in the qualitative data. After the scale was developed, the second author reviewed the scale items and cross-checked items with the themes and subthemes from the qualitative studies to evaluate face validity (Nunnally & Bernstein, 1994).

The items on the RIS are short and use easily understandable terms in order to avoid misunderstanding and reduce the perceived cost of responding (Dillman, Smyth, & Christian, 2009). According to the Flesch Reading Ease calculator, the reading level of the scale is 7th grade (Readability Test Tool, n.d.). The format of answers to each item is forced choice; according to Dillman et al. (2009), a forced-choice format “lets the respondent focus memory and cognitive processing efforts on one option at a time” (p. 130). Individuals completing the scale are asked to read each question or phrase and respond either yes or no. To score the scale, a yes is scored as one and a no is scored as zero. Eighteen items are reverse-scored (item numbers 11, 23, 28, 32, 39, 41, 42, 43, 45, 48, 51, 53, 54, 58, 59, 60, 61, 62), meaning that for those 18 items an answer of no is scored as one and an answer of yes is scored as zero. Using a classical scoring method (Heppner et al., 2016), scores for the RIS are determined by summing the item scores (i.e., the number of responses in the keyed direction). Higher scores indicate a stronger overall RI.
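To make the scoring rule concrete, the following short Python sketch (illustrative only; it is not the authors’ scoring code, and the example responses are hypothetical) applies the reverse-scoring and summing described above:

# Illustrative sketch of the RIS scoring rule described above; not the authors' code.
# `responses` maps item number (1-68) to a "yes"/"no" answer; the example data are hypothetical.
REVERSE_SCORED = {11, 23, 28, 32, 39, 41, 42, 43, 45, 48, 51, 53, 54, 58, 59, 60, 61, 62}

def score_ris(responses):
    """Return the RIS total: one point per keyed response, with the 18 reverse-scored items flipped."""
    total = 0
    for item, answer in responses.items():
        endorsed = answer.strip().lower() == "yes"
        keyed_yes = item not in REVERSE_SCORED   # most items earn a point for "yes"
        total += int(endorsed == keyed_yes)      # reverse-scored items earn a point for "no"
    return total

# Hypothetical respondent who answers "yes" to every item:
print(score_ris({item: "yes" for item in range(1, 69)}))  # prints 50 (68 items minus 18 reverse-scored)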

Procedure

Upon Institutional Review Board approval, the study instruments were uploaded onto the primary investigator’s Qualtrics account. At that time, information about the study was uploaded onto the university psychology department’s human subject research system (SONA Systems). Once registered on the SONA system, participants were linked to the instruments used for this study through Qualtrics. All participants were asked to read an informational page that briefly described the nature and purpose of the study, and were told that by continuing they were agreeing to participate in the study and could discontinue at any time. Participants consented by selecting “continue” and completed the questionnaire and instruments. After completion, participants were directed to a post-study information page on which they were thanked and provided contact information about the study and the opportunity to schedule a meeting to discuss research findings at the conclusion of the study. No identifying information was gathered from participants. All information was stored on the Qualtrics database.

Results

All analyses were conducted in SAS 9.4 (SAS Institute, 2012). The researchers first used classical methods (e.g., KR20 and principal factor analysis) to examine the psychometric properties of the RIS. Based on the results of the factor analysis, the researchers used results from a one-parameter Rasch analysis to reduce the number of items on the RIS.

Classical Testing

Homogeneity was explored by computing Kuder-Richardson 20 (KR20) coefficients. Across all 68 items, the internal consistency was strong (KR20 = .92). Concurrent validity (as evidence of construct validity) was examined by looking at correlations between the RIS and the RSES. The overall correlation between the RIS and the RSES was .66 (p < .001).
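For readers who wish to compute these statistics themselves, the brief sketch below uses Python with NumPy rather than the SAS 9.4 used for the article’s analyses, and it runs on simulated data, so it will not reproduce the reported values; it simply implements the standard KR20 formula and the Pearson correlation between two total scores:

# Illustrative KR20 and correlation computation; simulated data, not the study data.
import numpy as np

rng = np.random.default_rng(0)
ris_items = rng.integers(0, 2, size=(170, 68))      # hypothetical 0/1 item scores: 170 people x 68 items
rses_items = rng.integers(0, 101, size=(170, 9))    # hypothetical RSES items (nine 0-100 ratings)

def kr20(items):
    """KR20 = k/(k-1) * (1 - sum(p*q) / variance of total scores), for dichotomous items."""
    k = items.shape[1]
    p = items.mean(axis=0)                           # proportion scored 1 on each item
    q = 1 - p
    total_variance = items.sum(axis=1).var(ddof=1)   # sample variance of the total scores
    return (k / (k - 1)) * (1 - (p * q).sum() / total_variance)

r = np.corrcoef(ris_items.sum(axis=1), rses_items.sum(axis=1))[0, 1]   # Pearson r between the two totals
print(round(kr20(ris_items), 2), round(r, 2))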

Item Response Analysis

Item response theory brought about a new perspective on scale development (Embretson & Reise, 2000) in that it promoted scale refinement even at the initial stages of testing. Item response theory allows for shorter tests that can actually be more reliable when items are well-composed (Embretson & Reise, 2000). The RIS initially included 68 items. Through Rasch analyses, the scale was reduced to 21 items (items numbered 3, 4, 9, 10, 12, 13, 16, 18, 19, 24, 26, 34, 39, 41, 42, 43, 44, 46, 47, 49, 61).

The final 21 items were selected for their dispersion across locations on theta in order to capture the construct broadly. The polychoric correlation matrix for the 21 items was then subjected to a principal components analysis, yielding an initial eigenvalue of 11.72. The next eigenvalue was 1.97, clearly marking the elbow of the scree plot. Further, Cronbach’s alpha for these 21 items was .90. Taken together, these results suggest that the 21-item RIS measures a single factor.
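The unidimensionality check described above can be approximated in a few lines of NumPy. The sketch below substitutes Pearson correlations for the polychoric matrix used in the study (estimating polychoric correlations requires a dedicated routine), so it illustrates the logic of the eigenvalue and alpha checks rather than reproducing the reported values; the function name is illustrative.

import numpy as np

def eigenvalues_and_alpha(item_scores):
    """Scree-style eigenvalues of the inter-item correlation matrix plus Cronbach's alpha."""
    X = np.asarray(item_scores, dtype=float)
    R = np.corrcoef(X, rowvar=False)                    # Pearson stand-in for polychoric
    eigenvalues = np.sort(np.linalg.eigvalsh(R))[::-1]  # largest first

    k = X.shape[1]
    alpha = (k / (k - 1)) * (1.0 - X.var(axis=0, ddof=1).sum() / X.sum(axis=1).var(ddof=1))
    return eigenvalues, alpha

# A dominant first eigenvalue, a sharp drop to the second (the "elbow"),
# and a high alpha are the pattern reported for the 21 retained items.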

This conclusion was further tested by fitting the items to a two-parameter model in which item slopes were constrained to be equal across items (common slope = 1.95; AIC = 3183.1); the item location estimates are presented in Table 1. Bayesian a posteriori person scores also were estimated and correlated strongly with classical scores (i.e., tallies of the number of positive responses; r = .95, p < .0001).
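Item locations of the kind reported in Table 1 can be thought of in terms of endorsement rates: items endorsed by most respondents sit low (negative) on theta, and rarely endorsed items sit high. The rough logit sketch below is offered only to make that relationship concrete; the tabled values come from the constrained-slope model estimated in SAS, and these hypothetical helpers are not part of the authors' procedure.

import numpy as np

def approximate_item_locations(item_scores):
    """Rough logit-based item locations from a respondents-by-items 0/1 matrix."""
    X = np.asarray(item_scores, dtype=float)
    p = X.mean(axis=0).clip(0.01, 0.99)   # endorsement rate per item, kept away from 0 and 1
    return np.log((1.0 - p) / p)          # widely endorsed items receive negative locations

def classical_scores(item_scores):
    """Tally of positive responses, the classical score the model-based scores were compared against."""
    return np.asarray(item_scores).sum(axis=1)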

Discussion

This scale represents a move from a subjective to a more objective assessment of RI. In the future, the scale may be used with other student and non-student populations to better establish its psychometric properties, support its generalizability, and guide its refinement. Although this study sampled undergraduate students, the scale may be well suited for use with counseling graduate students and practitioners because its items were developed based on a qualitative study with master’s-level counseling students and practicing counselors (Jorgensen & Duncan, 2015a).

Additionally, this scale offers another method for assessing student learning and changes that take place for both students and professionals. As indicated by Holden et al. (1999), it is important to assess learning in multiple ways. Traditional methods may have focused on measuring outcomes that reflect a performance-based, rather than a mastery-based, learning orientation. Performance-based learning has been defined as wanting to learn in order to receive external validation such as a grade (Bruning, Schraw, Norby, & Ronning, 2004). Mastery learning has been defined as wanting to learn for personal benefit and with the goal of applying information to reach a more developed personal and professional identity (Bruning et al., 2004).

Based on what is known about mastery learning (Bruning et al., 2004), students with this type of learning orientation experience identity changes that may be best captured through assessing changes in thoughts, attitudes, and beliefs. The RIS was designed to measure constructs that capture internal changes that may be reflective of a mastery learning orientation. A learner who is performance-oriented may earn an A in a research course but show a lower score on the RIS. The opposite also may be true in that a learner may earn a C in a research course but show higher scores on the RIS. Through the process of combining traditional assessment methods such as grades with the RIS, programs may get a more comprehensive understanding of the effectiveness and impact of their research-related curriculum.

 

Table 1.

Item location estimates.

RIS Item          Location Estimate
Item 3                 -2.41
Item 4                 -1.80
Item 9                 -1.10
Item 10                -3.16
Item 12                  .42
Item 13                 -.86
Item 16                 -.94
Item 18                -2.24
Item 19                -3.08
Item 24                -2.86
Item 26                -2.20
Item 34                -1.27
Item 39                  .20
Item 41                 -.76
Item 42                -1.28
Item 43                -1.47
Item 44                 -.76
Item 46                -2.03
Item 47                -2.84
Item 49                 1.22
Item 61                 -.44

 

Limitations and Areas for Future Research

The sample size and composition were sufficient for the purposes of initial development, classical testing, and item response analysis (Heppner et al., 2016); however, caution is still warranted when applying the results of this study to other populations. Participants’ endorsements may not reflect the answers of populations in other areas of the country or at different academic levels. Future research should sample other student and professional groups. Doing so will help to further establish the psychometric properties and item response analysis conclusions and make the RIS more appropriate for use in other fields. Additionally, future research may examine how scores on the RIS correlate with traditional measures of learning (e.g., grades in individual research courses, collapsed grades across all research courses, the research portion of counselor licensure exams).

Conclusion

As counselors-in-training and professional counselors are increasingly required to demonstrate that they are using evidence-based practices and measuring the effectiveness of their services, they may benefit from assessments of their RI (American Counseling Association, 2014; Gladding & Newsome, 2010). CACREP (2016) has responded to increased accountability by enhancing its research and evaluation standards for both master’s- and doctoral-level counseling students. The American Counseling Association is further supporting discussions about RI, recently publishing a blog post titled “Research Identity Crisis” (Hennigan Paone, 2017). In the post, Hennigan Paone described a hope for master’s-level clinicians to start acknowledging and appreciating that research helps them work with clients in ways that are informed by “science rather than intuition” (para. 5). As the call for counselors to become more connected to research grows stronger, it seems imperative that counseling programs assess their effectiveness in bridging the gap between research and practice. The RIS provides counseling programs an option to do exactly that by evaluating the ways students are learning and growing in relation to research. Further, the use of this type of outcome measure could provide good modeling at the program level, in that it may encourage counselors-in-training to develop both the curiosity and the motivation to infuse research practices (e.g., needs assessments, outcome measures, data analysis) into their clinical work.

 

Conflict of Interest and Funding Disclosure 

The authors reported no conflict of interest or funding contributions for the development of this manuscript.

 

References

American Counseling Association. (2014). 2014 ACA code of ethics. Alexandria, VA: Author.

Bruning, R. H., Schraw, G. J., Norby, M. M., & Ronning, R. R. (2004). Cognitive psychology and instruction (4th ed.). Upper Saddle River, NJ: Pearson Merrill/Prentice Hall.

Council for Accreditation of Counseling & Related Educational Programs. (2016). 2016 CACREP standards. Retrieved from http://www.cacrep.org/wp-content/uploads/2017/07/2016-Standards-with-Glossary-7.2017.pdf

Dillman, D. A., Smyth, J. D., & Christian, L. M. (2009). Internet, mail, and mixed-mode surveys: The tailored design method (3rd ed.). Hoboken, NJ: John Wiley & Sons, Inc.

Embretson, S. E., & Reise, S. P. (2000). Item response theory for psychologists. Mahwah, NJ: Lawrence Erlbaum.

Gladding, S. T., & Newsome, D. W. (2010). Clinical mental health counseling in community and agency settings (3rd ed.). Upper Saddle River, NJ: Prentice Hall.

Hennigan Paone, C. (2017, December 15). Research identity crisis? [Blog post]. Retrieved from https://www.counseling.org/news/aca-blogs/aca-member-blogs/aca-member-blogs/2017/12/15/research-identity-crisis

Heppner, P. P., Wampold, B. E., Owen, J., Thompson, M. N., & Wang, K. T. (2016). Research design in counseling (4th ed.). Boston, MA: Cengage Learning.

Hernon, P. & Dugan, R. E. (2004). Four perspectives on assessment and evaluation. In P. Hernon & R. E. Dugan (Eds.), Outcome assessment in higher education: Views and perspectives (pp. 219–233). Westport, CT: Libraries Unlimited.

Holden, G., Barker, K., Meenaghan, T., & Rosenberg, G. (1999). Research self-efficacy: A new possibility for educational outcomes assessment. Journal of Social Work Education, 35, 463–476.

Jorgensen, M. F., & Duncan, K. (2015a). A grounded theory of master’s-level counselor research identity. Counselor Education and Supervision, 54, 17–31. doi:10.1002/j.1556-6978.2015.00067

Jorgensen, M. F., & Duncan, K. (2015b). A phenomenological investigation of master’s-level counselor research identity development stages. The Professional Counselor, 5, 327–340. doi:10.15241/mfj.5.3.327

Kahn, J. H., & Scott, N. A. (1997). Predictors of research productivity and science-related career goals among counseling psychology doctoral students. The Counseling Psychologist, 25, 38–67. doi:10.1177/0011000097251005

Nunnally, J. C., & Bernstein, I. H. (1994). Psychometric theory (3rd ed.). New York, NY: McGraw-Hill.

Phillips, J. C., & Russell, R. K. (1994). Research self-efficacy, the research training environment, and research productivity among graduate students in counseling psychology. The Counseling Psychologist, 22, 628–641. doi:10.1177/0011000094224008

Readability Test Tool. (n.d.). Retrieved from https://www.webpagefx.com/tools/read-able/

Reisetter, M., Korcuska, J. S., Yexley, M., Bonds, D., Nikels, H., & McHenry, W. (2004). Counselor educators and qualitative research: Affirming a research identity. Counselor Education and Supervision, 44, 2–16. doi:10.1002/j.1556-6978.2004.tb01856.x

Rowan, N., & Wulff, D. (2007). Using qualitative methods to inform scale development. The Qualitative Report, 12, 450–466.

SAS Institute [Statistical software]. (2012). Retrieved from https://www.sas.com/en_us/home.html

Unrau, Y. A., & Beck, A. R. (2004). Increasing research self-efficacy among students in professional academic programs. Innovative Higher Education, 28(3), 187–204.

Unrau, Y. A., & Grinnell, R. M., Jr. (2005). The impact of social work research courses on research self-efficacy for social work students. Social Work Education, 24, 639–651. doi:10.1080/02615470500185069

Warden, S., & Benshoff, J. M. (2012). Testing the engagement theory of program quality in CACREP-accredited counselor education programs. Counselor Education & Supervision, 51, 127–140. doi:10.1002/j.1556-6978.2012.00009.x

 

Maribeth F. Jorgensen, NCC, is an assistant professor at the University of South Dakota. William E. Schweinle is an associate professor at the University of South Dakota. Correspondence can be addressed to Maribeth Jorgensen, 414 East Clark Street, Vermillion, SD 57069, maribeth.jorgensen@usd.edu.

Counselor Educators’ Teaching Mentorship Styles: A Q Methodology Study

Eric R. Baltrinic, Randall M. Moate, Michelle Gimenez Hinkle, Marty Jencius, Jessica Z. Taylor

Mentoring is an important practice to prepare doctoral students for future graduate teaching, yet little is known about the teaching mentorship styles used by counselor educators. This study identifies the teaching mentorship styles of counselor educators with at least one year of experience as teaching mentors (N = 25). Q methodology was used to obtain subjective understandings of how counselor educators mentor. Our results suggest three styles labeled as Supervisor, Facilitator, and Evaluator. Specifically, these styles reflect counselor educators’ distinct viewpoints on how to mentor doctoral students in teaching within counselor education doctoral programs. Implications and limitations for counselor educators seeking to transfer aspects of the identified mentorship styles to their own practice are presented, and suggestions for future research are discussed.

Keywords: teaching mentorship, counselor education, Q methodology, doctoral students, graduate teaching

Counselor educators mentor doctoral students in many aspects of the counseling profession, including preparation for future faculty roles (Borders et al., 2011; Briggs & Pehrsson, 2008; S. F. Hall & Hulse, 2010; Lazovsky & Shimoni, 2007; Protivnak & Foss, 2009). Counselor education doctoral students (CEDS) credit faculty mentor relationships in general, and teaching mentorships in particular, as strengthening their professional identities (Limberg et al., 2013). For example, co-teaching, a common form of teaching mentorship, includes relationships that allow CEDS to have instructive pedagogical conversations (Casto, Caldwell, & Salazar, 2005) and learn teaching skills (Baltrinic, Jencius, & McGlothlin, 2016).

Support for teaching mentorships is present in the higher education literature. Doctoral students across disciplines reported the helpfulness of regular mentoring (Austin, 2002) and careful guidance in teaching from faculty members (Jepsen, Varhegyi, & Edwards, 2012). Doctoral students identified mentoring in teaching as important for increasing their self-confidence and comfort with teaching as future faculty members (Utecht & Tullous, 2009). In counselor education, the specific benefits attributed to teaching mentorships included greater confidence in CEDS’ ability to find employment as faculty members (Warnke, Bethany, & Hedstrom, 1999) and greater confidence in CEDS’ teaching ability (S. F. Hall & Hulse, 2010). Doctoral students given teaching opportunities without mentoring risk developing poor attitudes and skill sets, instead of having critical experiences to help them become successful university teachers (Silverman, 2003). Overall, the benefits of teaching mentorships are important given that (a) teaching is a primary component of the faculty job (Davis, Levitt, McGlothlin, & Hill, 2006) and (b) new counselor educators need to sufficiently plan and implement quality teaching (Magnuson, Norem, & Lonneman-Doroff, 2009). Counselor education scholars agree on the importance of mentorship for socializing doctoral students for teaching roles (Baltrinic et al., 2016; Orr, Hall, & Hulse-Killacky, 2008), yet little research is available describing specific styles and approaches to teaching mentorship (S. F. Hall & Hulse, 2010). This gap in the literature is concerning given that new counselor educators reported mentoring and feedback on their teaching by senior faculty members was helpful in enhancing their pedagogical skills (Magnuson, Shaw, Tubin, & Norem, 2004).

Type and Style of Teaching Mentorship

In contrast to discrete faculty–student interactions or training episodes (Black, Suarez, & Medina, 2004), mentor relationships may occur over months and years. Kram (1985) has characterized these relationships as career (teaching skills) and psychosocial (mentor–mentee relationship) types. Career mentoring refers to the act of fostering skills development and sharing field-related content with mentees, and psychosocial mentoring pertains more to the interpersonal and relational aspects of entering a field (e.g., emotional support and working through self-doubt; Curtin, Malley, & Stewart, 2016). Both career and psychosocial mentoring types, or some combination, are used by academic faculty mentors (Curtin et al., 2016). But it is uncertain whether these, or any other specific mentoring types, are used for teaching mentorships in counselor education. Teaching mentorships of all types allow faculty members to be flexible, emphasize multiple aspects of being a teacher, and allow for the inclusion of multiple mentors (Borders et al., 2011).

Teaching mentorships transpire through a variety of formal (more structured and planned) and informal (less structured and spontaneous) mentorship styles (Borders et al., 2012). For example, a CEDS may experience teaching mentorship as part of a structured pedagogy course (formal), or may spontaneously discuss teaching experiences with a faculty advisor during an advising session (informal). Despite the complexity and importance of mentor relationships in counselor training, little is known about either formal or informal styles. Thus, it is hardly surprising that uncertainty exists regarding counselor educators’ preferred ways of mentoring in general (Borders et al., 2012) and mentoring in teaching in particular (S. F. Hall & Hulse, 2010).

We found no evidence in the counselor education literature describing common styles of teaching mentorship used by counselor educators. This is concerning given that faculty members tend to mentor in the manner that they were mentored (L. A. Hall & Burns, 2009), and that CEDS’ mentorship experiences are influential in shaping their careers as future counselor educators (Borders et al., 2011). Our purpose was to learn more about how counselor educators understand and use their own teaching mentorship styles, thus requiring that we measure aspects of sample members’ subjective understanding of this phenomenon. Therefore, we set out to answer the following research question: What are counselor educators’ preferred styles of engaging in teaching mentorships with CEDS?

Method

Because Q methodology objectively analyzes subjective phenomena, such as people’s preferences and opinions on a topic (Stephenson, 1935), it was selected for this study to reveal the structure of counselor educators’ perspectives (i.e., factors) on the teaching mentorship styles used for preparing CEDS to teach. Q methodology embodies the relative strengths of quantitative and qualitative methodologies by drawing on the depth and richness of qualitative data and the objective rigor of factor analysis to analyze data (Shemmings, 2006).

Participants

Participants (N = 25) were eligible for this study if they (a) were currently employed as full-time faculty members in a counselor education doctoral program and (b) had accrued at least one year of experience mentoring CEDS in graduate teaching as counselor educators. Twenty-five is a sufficient number given that Q methodology simply seeks to establish, understand, and compare individuals’ self-referent views expressed through the Q sort process (Brown, 1980). Participants were both conveniently sampled (n = 10) from counselor educators attending a workshop on Q methodology and purposefully sampled (n = 15) through recruitment emails sent to faculty members at several prominent counselor education doctoral programs in the Eastern (n = 7), Midwestern (n = 10), and Southern (n = 8) regions of the United States. Data were collected from participants by mailing packets that contained an informed consent form, a basic demographic questionnaire, the Q sort, a post–Q sort questionnaire, and a postage-prepaid return envelope (additional participant demographics are shown in Table 1). Note that we abstained from collecting certain demographic data (e.g., race, ethnicity, university type) from participants in response to their stated concerns about anonymity during data collection. Also, the participants in this study were those who completed Q sorts (N = 25), as distinct from the 54 counselor educators whose statements were used to generate the concourse described below.

Table 1

Demographics of Participants (N = 25)

Age                                                   n (%)                        Rank                                                        n (%)

25–30                                            1 (4%)                    Full Professor                                       5 (20%)

31–40                                            7 (28%)                  Associate Professor                              8 (32%)

41–50                                            5 (20%)                  Assistant Professor                              12 (48%)

51–60                                            9 (36%)

61–65+                                          3 (12%)
Gender                                              n (%)                        Tenure Status                                           n (%)

Female                                           13 (52%)                Tenured                                               13 (52%)

Male                                              12 (48%)                Untenured                                            12 (48%)

 

Years of Teaching

Mentorship Experience                  n (%)                                                                                                         

1–5                                                 9 (36%)

6–10                                               3 (12%)

11–15                                             6 (24%)

16–20                                             4 (16%)

 

Concourse Generation and Selecting Items for the Q Sample

Q methodology studies begin with creating a concourse, or a collection of thoughts or sentiments about a topic (Stephenson, 1978), which serves as the source material for selecting items for the Q sample. To generate the concourse for this study, 54 counselor educators, each with a minimum of one year of experience mentoring doctoral students in graduate teaching, were solicited on a counseling listserv (see Table 2). Counselor educators each provided 5–10 opinion statements on teacher mentorship approaches for working with CEDS in response to one open-ended question: What are your preferred approaches to mentoring CEDS in teaching? This process resulted in 432 opinion statements. However, this was too many statements for participants to rank order during the Q sort process. Accordingly, a 2 x 2 factorial design based on Kram’s (1985) career and psychosocial mentorship types and Borders et al.’s (2012) formal and informal mentoring styles was used as a theoretical guide to obtain a reduced yet representative subset (sample) of statements from the concourse (for additional information on Q sample construction, see Paige & Morin, 2016).


Table 2

Demographics of Counselor Educators Providing Opinion Statements for Concourse (N = 54)

Age                                                n (%)                         Racial Identity                                 n (%)

25–30                                           0 (0%)                      African American                            4 (7%)

31–35                                         8 (15%)                     Native American/Indigenous        1 (2%)

36–40                                        13 (24%)                     Caucasian                                       38 (70%)

41–45                                          7 (13%)                    Hispanic/Latino(a)/Chicano(a)       5 (9%)

46–50                                         4 (7%)                      Multiracial                                         3 (6%)

51–55                                          7 (13%)                   Biracial                                  3 (6%)

56–60                                          7 (13%)

61–65                                          4 (7%)

66–70                                          3 (6%)

71–75+                                         1 (2%)

 

Gender                                          n   (%)                       Primary Professional Identity          n  (%)

Female                                         33 (61%)                  Counselor Educator                            51 (94%)

Male                                            19 (35%)                  School Counselor Educator                  3 (6%)

Transgender                                  1 (2%)

Gender Fluid                                  1 (2%)

 

Sexual Identity                             n   (%)                       Academic Rank                                  n  (%)

Lesbian                              3 (6%)                    Professor                                             9 (17%)

Gay                                                4 (7%)                    Associate Professor                             18 (33%)

Bisexual                                         4 (7%)                    Assistant Professor                              27 (50%)

Heterosexual                                43 (80%)

 

First, the lead author organized the 432 statements into two broad categories: informal and formal mentoring styles (Borders et al., 2012). Duplicate, fragmented, and unclear statements were identified and eliminated in this step. Then, the remaining 96 statements (i.e., 48 statements in the informal and formal categories, respectively) were each cross-referenced with two mentoring types (i.e., psychosocial and career; Kram, 1985). Similar to the first step, the lead author reviewed the content of each statement and eliminated any statements containing duplicate, fragmented, or unclear language, resulting in 52 statements across four domains: 13 statements representing informal and career, 13 statements representing informal and psychosocial, 13 statements representing formal and career, and 13 statements representing formal and psychosocial. Finally, the first author eliminated four and reworded two of the 52 statements after they were reviewed by the second, third, and fourth authors, resulting in a final sample of 48 statements (12 statements per domain). This final group of statements is called the Q sample, which in this case is a collection of statements that represent counselor educators’ perspectives on how to mentor CEDS in teaching. The 48-item Q sample constructed by the first author was reviewed by the second, third, and fourth authors to ensure that each item was unique and did not overlap with other statements, and was applicable to the study. The final Q sample was given to participants for rank ordering during the Q sort process.

Q Sort Process

After Institutional Review Board approval was obtained, 25 participants completed the Q sort process. During the Q sort process, participants were prompted to reflect on their personal experiences of mentoring CEDS in teaching and then asked to rank order the 48 items in the Q sample on a forced-choice frequency distribution, shown in Table 3. Participants placed a prescribed number of items at each point along the distribution, from items with which they most agreed (+4) to items with which they least agreed (-4). Items placed in the middle of the rank order indicated statements about which participants were neutral or ambivalent. After finishing the rank ordering of items, participants were asked to provide brief post–Q sort written responses for the two or three statements with which they most and least agreed; these responses were incorporated into the factor interpretations found in the Results section below.

 

Table 3

Q Sort Forced-Choice Frequency Distribution

Ranking Value         -4      -3      -2      -1       0      +1      +2      +3      +4

Number of Items        3       4       6       7       8       7       6       4       3
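For readers unfamiliar with forced-distribution sorting, the brief check below (a hypothetical Python helper, not part of the study materials) verifies that a completed sort places exactly the Table 3 number of items at each ranking value.

from collections import Counter

# Items required at each ranking value, per Table 3 (totals 48).
FORCED_DISTRIBUTION = {-4: 3, -3: 4, -2: 6, -1: 7, 0: 8, 1: 7, 2: 6, 3: 4, 4: 3}

def is_valid_q_sort(sort):
    """`sort` maps each item number (1-48) to its ranking value (-4 through +4)."""
    if len(sort) != sum(FORCED_DISTRIBUTION.values()):
        return False
    counts = Counter(sort.values())
    return all(counts.get(value, 0) == required
               for value, required in FORCED_DISTRIBUTION.items())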

 

Data Analysis

Twenty-five completed Q sorts were entered into the PQMethod software program V. 2.35 (Schmolck & Atkinson, 2012). The PQMethod software creates a by-person correlation matrix (i.e., the “intercorrelation of each Q sort with every other Q sort”) used to facilitate factor analysis and subsequent factor rotation (Watts & Stenner, 2012, p. 97). The purpose of factor analysis in Q methodology is to group small numbers of participants with similar views into factors in the form of Q sorts (Brown, 1980). Factor analysis helps researchers rigorously reveal subjective patterns that could be overlooked via qualitative analysis. A 3-factor solution was selected to provide the highest number of significant factor loadings associated with each factor (Watts & Stenner, 2012). Factors were then rotated using varimax criteria with hand rotation adjustments in order to best reveal groupings of individuals with similar Q sorts. The factor rotations increased the total number of significant factor loadings from 17 to 20 of 25 participants, shown in Table 4.
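The core computations described here (correlating every Q sort with every other, extracting factors, and rotating them) can be sketched in NumPy. The fragment below uses principal components extraction and a standard automated varimax routine; it is a simplified illustration with invented function names, and it omits PQMethod's centroid option and the hand-rotation adjustments described above.

import numpy as np

def by_person_correlations(sorts):
    """Correlation of each participant's Q sort with every other (sorts: items x persons)."""
    return np.corrcoef(np.asarray(sorts, dtype=float), rowvar=False)

def unrotated_loadings(R, n_factors=3):
    """Principal components loadings (persons x factors) from the by-person correlation matrix."""
    values, vectors = np.linalg.eigh(R)
    keep = np.argsort(values)[::-1][:n_factors]
    return vectors[:, keep] * np.sqrt(values[keep])

def varimax(loadings, max_iter=50, tol=1e-6):
    """Standard varimax rotation of a loading matrix."""
    L = np.asarray(loadings, dtype=float)
    p, k = L.shape
    R, d = np.eye(k), 0.0
    for _ in range(max_iter):
        rotated = L @ R
        u, s, vt = np.linalg.svd(
            L.T @ (rotated**3 - (1.0 / p) * rotated @ np.diag(np.diag(rotated.T @ rotated)))
        )
        R, d_old, d = u @ vt, d, s.sum()
        if d_old != 0 and d / d_old < 1 + tol:   # stop once the criterion no longer improves
            break
    return L @ R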

We analyzed and interpreted each factor in the context of all other factors to provide a holistic factor interpretation, rather than favoring specific items (i.e., factor scores of +4 or -4) over others within a particular factor (Watts & Stenner, 2012). To do so, we first created a worksheet from the factor array (see Table 5) for each individual factor containing the highest and lowest ranked items within that factor and the items ranked lower within that factor than in the other factors. Second, items in the worksheets were compared to the demographic and qualitative responses of participants associated with that factor in order to add depth and detail before the final step. Finally, the finished worksheets were used to construct the factor interpretation narratives, which are written as a story conveying the viewpoint of the factor as a whole.

 

Table 4

Rotated Factor Loadings for Supervisor (1), Facilitator (2), and Evaluator (3)

Q Sort      Factor 1 (Supervisor)      Factor 2 (Facilitator)      Factor 3 (Evaluator)

1             .05                 .74                 .07

2             .47                 .46                 .30

3             .13                 .60                 .24

4             .02                 -.13                .76

5             .51                 .26                 -.23

6             .60                 .25                 -.16

7             .18                 .48                 .03

8             .55                 .37                 .24

9             .54                 .17                 .13

10           .70                 .16                 .14

11           .53                 .17                 .34

12           .54               -.11                  .25

13           .22                 .48                 .16

14           .52                 .40                 -.04

15           .34                 .15                 .53

16           .41                 .13                 .19

17           .10                 .39                 .33

18           .19                 .32                 .47

19           .26                 .73                 .05

20           .27                 .04                 .12

21           .36                 .26                 .11

22           .13                 .40                 .54

23           .10                 .55                 .03

24           .20                 .39                 .50

25           .32                 .46                 .08

Note. Significant loadings (> .43) are in boldface.

 

Results

The data analysis revealed the existence of three different viewpoints (i.e., factors 1, 2, 3) on mentoring CEDS in graduate teaching. We named the factors Supervisor (F1), Facilitator (F2), and Evaluator (F3), respectively, and included those names in the factor interpretations below to best represent the distinguishing teaching mentorship characteristics of the groups of individuals associated with each factor. The resulting three factors accounted for 37% of the total variance in the correlation matrix. Note that sole reliance on statistical criteria, such as the proportion of variance, is discouraged in Q methodology. This is because a factor may hold theoretical interest and have contextual relevance that may be overlooked if only a statistical basis for interpreting subjective factors is used (Brown, 1980). Twenty of the 25 participants loaded significantly on one of the three factors. Factor loadings > .43 were significant at the p < .01 level. Factor 1 had eight participants with significant loadings, accounting for 14% of the variance. Factor 2 had seven participants with significant loadings, accounting for 15% of the variance, whereas Factor 3 had five participants with significant loadings, accounting for 9% of the variance. Five of the 25 Q sorts did not define a single factor: four were non-significant (all loadings < .43) and one was confounded, meaning that participant’s loadings were significant on more than one factor.

Table 5

48-Item Q Sample Factor Array With Factor Scores

Item   Statement                                                                                              Factor 1   Factor 2   Factor 3

1 Viewing doctoral students’ life experiences as complementary to those of the faculty teaching mentor. -3 0 -1
2 Exposing doctoral students to progressively more challenging teaching roles with faculty  supervision. 0 0 3
3 Guiding doctoral students to complete a teaching practicum and/or internship as part of their doctoral training. 2 1 1
4 Sharing teaching resources with doctoral students (e.g., group activities, discussion prompts, assignments, etc.). -1 1 0
5 Maintaining a reputation among doctoral students as a quality teacher by modeling and  demonstrating quality teaching. 0 2 -1
6 Giving doctoral students examples from your own teaching on how to overcome teaching challenges. 4 -3 -2
7 Having doctoral students rehearse teaching strategies (e.g., lectures, activities) prior to implementing them in the classroom. -2 -3 -3
8 Defining for doctoral students their teaching roles in and out of the classroom. -1 -2 0
9 Modeling best practices in teaching to facilitate the development of doctoral students’ teaching styles. -1 1 -2
10 Having doctoral students facilitate portions of a course under supervision as part of co-teaching, a course assignment, and so forth. 3 3 1
11 Having doctoral students develop and discuss a teaching philosophy. 0 -2 2
12 Teaching doctoral students to develop rubrics and grade student assignments. -2 -1 0
13 Providing doctoral students with a safe space to acknowledge their teaching mistakes. 4 4 1
14 Assisting doctoral students with incorporating technology and course management systems (e.g., Blackboard) into the teaching process. -2 -2 -4
15 Holding doctoral students to high level of accountability regarding their teaching and learning practices. 0 0 4
16 Having doctoral students teach a portion of a class under faculty supervision. 2 3 1
17 Immersing doctoral students in teaching environments in a sink-or-swim manner with no advice, preparation, or supervision. -4 -4 -1
18 Having doctoral students co-teach an entire course with faculty members and/or experienced peers. 4 0 2
19 Providing strengths-based feedback and support regarding teaching. 0 4 0
20 Interacting with doctoral students as colleagues or equals. -3 3 -4
21 Teaching doctoral students to evaluate their teaching effectiveness and student learning. 1 1 4
22 Providing doctoral students with specific examples of how to address student issues. 3 -1 0
23 Acting as a “sounding board” when doctoral students need to discuss their feelings about teaching. 0 3 -3
24 Promoting the creation of critical learning environments where doctoral students are asked to apply higher order cognitive skills (e.g., Bloom’s Taxonomy). -3 -2 4
25 Assisting doctoral students with identifying challenging student behaviors. 1 1 2
26 Encouraging doctoral students with teaching experience to engage in mentoring of their peers’ teaching. -4 -1 -3
27 Assisting doctoral students with preparing lectures, activities, and discussion topics. -2 -1 -2
28 Focusing on a broad range of learning and instructional theories when grounding one’s teaching approach. -2 -3 2
29 Having doctoral students participate in a formal course on pedagogy. -1 -4 2
30 Encouraging doctoral students to implement refined teaching approaches after receiving feedback from teaching mentors. 3 -1 1
31 Disclosing to doctoral students the ways that faculty members developed their teaching practice, including successes and mistakes. 2 1 -2
32 Supporting doctoral students’ solo teaching opportunities (e.g., to lead a class). 1 2 0
33 Providing both candid and immediate feedback to doctoral students about their teaching performance. 2 0 0
34 Having doctoral students identify the verbal and nonverbal behaviors that contribute to building teacher–student rapport. -1 -1 -1
35 Nurturing professionalism in teaching during faculty–doctoral student interactions. -3 4 3
36 Talking to doctoral students about how their life experiences influence their approach to teaching. -4 0 -1
37 Providing doctoral students with readings on pedagogy. 1 -4 2
38 Having doctoral students participate in designing a course. 2 0 -2
39 Having doctoral students observe faculty and experienced peers’ teaching. -1 -2 -1
40 Inviting doctoral students to discuss their clinical/school counseling experiences while in a teaching role in the classroom. 1 2 -3
41 Assisting doctoral students with developing a syllabus. 2 -1 -4
42 Planning before class with doctoral students before they engage in teaching activities. 1 -3 -2
43 Discussing boundaries and other ethical concerns regarding teaching. 0 0 3
44 Facilitating opportunities to improve doctoral students’ confidence and comfort about teaching. -1 2 -1
45 Helping doctoral students with understanding the variables and actions linked to an improved learning environment. -2 0 1
46 Assisting doctoral students with linking specific learning theories to course content/topic areas. 0 -3 1
47 Teaching doctoral students to remain empathic to students’ worldviews by using worldview-affirming language. 3 2 3
48 Discussing with doctoral students why instructional decisions were made in the classroom. 1 2 0

For each factor, the Q sorts that exemplify it were merged to form a single ideal Q sort, called a factor array (Watts & Stenner, 2012). The factor array, which contains the 48 Q sample items and the associated factor scores for Factors 1 through 3, is found in Table 5. The factor scores in the array are calculated as weighted averages in which higher-loading Q sorts are given more weight because they better exemplify the factor. It is the factor scores contained in the factor array, rather than participants’ factor loadings, that are used for factor interpretation. Note that parenthetical references to Q sample items and their factor scores (e.g., item 24, +4) provide contextual reference for each of the factor interpretations below.
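The weighting described above follows the usual Q-methodology convention (Brown, 1980), in which a defining sort's weight grows with its loading. The sketch below illustrates one way to compute a factor array from the defining sorts and then map the weighted averages back onto the 48-slot forced distribution; the function and its details are illustrative, not PQMethod's internal code.

import numpy as np

def factor_array(rankings, loadings, defining):
    """Ideal-sort values for one factor (assumes the 48-item forced distribution).

    rankings: items x persons matrix of Q-sort values (-4 to +4).
    loadings: each person's loading on this factor.
    defining: boolean mask of the sorts that define (load significantly on) the factor.
    """
    Z = np.asarray(rankings, dtype=float)[:, defining]
    f = np.asarray(loadings, dtype=float)[defining]
    w = f / (1.0 - f**2)                      # higher loadings earn larger weights
    weighted = Z @ w / w.sum()                # weighted average ranking per item
    z = (weighted - weighted.mean()) / weighted.std()

    # Map z scores back onto the forced distribution (3, 4, 6, 7, 8, 7, 6, 4, 3 items).
    slots = np.repeat(np.arange(-4, 5), [3, 4, 6, 7, 8, 7, 6, 4, 3])
    array_values = np.empty(len(z), dtype=int)
    array_values[np.argsort(z)] = slots       # lowest z -> -4, highest z -> +4
    return array_values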

Factor 1: Supervisor

Eight (32%) of the 25 participants were associated with factor 1. Factor 1 mentors (i.e., Supervisors) view mentoring in teaching as a process that begins with CEDS co-teaching an entire course under the supervision of a faculty member or experienced peer (item 18, +4). Providing CEDS with real-world teaching examples from faculty members’ teaching experiences (item 6, +4) and a safe space to acknowledge teaching mistakes (item 13, +4) are defined as key mentoring processes for Factor 1. In so doing, Supervisors provide candid and immediate feedback about CEDS’ teaching performance (item 33, +2) and incorporate examples from their mentors’ own teaching successes and mistakes as part of the feedback (item 31, +2). These points are illustrated by one participant in her post–Q sort responses: “As a doctoral student, I appreciated receiving honest real-talk feedback (about teaching), which rarely happened. Now, when I mentor students, I tell folks what I really think in a kind but frank manner.” Supervisors encourage CEDS to implement refined teaching approaches after receiving candid feedback about their teaching. Additionally, Supervisors regularly plan before class with CEDS before they engage in teaching activities (item 42, +1). CEDS engage in syllabus development (item 41, +2) and course design (item 38, +2), versus sharing teaching resources (item 4, -1) and linking teaching variables to improved learning environments (item 45, -2), both of which are, as one participant remarked, “assumed to be part of the mentoring process.” Supervisors prefer that CEDS complete formal practica or internships as part of their doctoral training (item 3, +2).

Supervisors employ both formal (e.g., co-teaching, practica and internships, and regular pre-class planning) and informal (e.g., real-world examples, candid feedback, and appropriate professional disclosure about teaching) mentoring practices intended for students’ incremental professional development as teachers (Baltrinic et al., 2016). Supervisors’ teaching mentorship style is guided by the belief that experienced faculty members versus less-experienced peers are critical for influencing the development of doctoral students’ teaching skills (item 26, -4), more so than Factors 2 and 3. And, although Supervisors agree that no doctoral student should learn to teach in a sink-or-swim manner (item 17, -4), the Supervisor takes a less nurturing, or life experience–based approach to mentoring (items 1, -3; 35, -3; and 36, -4 respectively) than Factors 2 and 3. A less nurturing approach may be difficult to understand given the nature of mentoring itself. Keep in mind that what is central to Supervisors’ views on mentoring is the instructive and real-world supervision of students’ structured teaching activities over time, which does not preclude faculty members valuing students’ life experience or nurturing their development; rather, these are not central drivers for preferred mentoring interactions between faculty members and students.

Factor 2: Facilitator

Seven (28%) of the 25 participants agreed with Factor 2, which we have titled Facilitator. Facilitators are distinguished as mentors who nurture professionalism during faculty–student interactions (item 35, +4) and provide feedback and support using a strengths-based approach regarding CEDS’ teaching (item 19, +4). Similar to Supervisors (Factor 1), Facilitators provide CEDS with a safe space in the mentoring relationship to acknowledge teaching mistakes (item 13, +4). However, Facilitators favor providing supportive versus corrective or formal feedback (item 30, -1) as central to the mentoring relationship—described aptly by one participant as “I am not big on structured pedagogical teaching. In other words, modeling and supportive discussion can serve the mentor well.” It stands to reason that Facilitators prefer to maintain a reputation as a quality teacher by modeling and demonstrating best practices in teaching (item 5, +2), and thereby extend this practice to facilitate the development of CEDS’ teaching styles (item 9, +1). Accordingly, Facilitators do not approach mentoring in teaching by providing CEDS with formal readings on pedagogy, or have them participate in a formal course on pedagogy (items 29, -4 and 37, -4 respectively). Instead, Facilitators prefer to discuss with CEDS why they made teaching decisions in the classroom without being prescriptive (item 48, +2).

Facilitators approach mentoring by treating CEDS as colleagues or equals during the teaching experience (item 20, +3) and by creating opportunities for them to improve their comfort and confidence when teaching (item 44, +2). When providing feedback, Facilitators act as sounding boards for CEDS to express their feelings about teaching (item 23, +3). For example, noted in one participant’s post–Q sort response, “We learn the most through our own discomfort, so a mentor serving as a sounding board is very important.” Facilitators are more interested than Supervisors or Evaluators (Factor 3) in how CEDS’ life experiences influence their approach to teaching (item 36, 0). In the classroom, Facilitators invite CEDS to discuss their clinical or school counseling experiences when teaching (item 40, +2). In contrast with the Supervisor and the Evaluator, the Facilitator will share examples of their own teaching resources with CEDS (item 4, +1). In general, Facilitators prefer to have CEDS formally teach a portion of a class under their supervision (item 16, +3), versus having them co-teach an entire class or be thrown into teaching in a sink-or-swim manner (item 17, -4).

Facilitators avoid helping CEDS overcome teaching challenges through examples from their own teaching (item 6, -3) or by providing specific examples to address issues. Overall, Facilitators prefer not to define teaching roles for CEDS (item 8, -2), pre-plan specific activities before class (item 42, -3), provide particular learning theories to address specific course content (item 46, -3), or impose on the learning environment (item 28, -3). Finally, Facilitators do not prefer to provide CEDS with feedback that they should use to refine and subsequently implement during future teaching endeavors (item 30, -1), which is not surprising given the relational and discovery-oriented focus of this factor’s approach to mentoring in teaching.

Factor 3: Evaluator

Factor 3, the Evaluator, included five (20%) of the 25 participants. Evaluators create a critical learning environment for CEDS to use higher order cognitive skills (item 24, +4) while helping them to evaluate their teaching effectiveness and student learning (item 21, +4). Additionally, Evaluators create a safe space for CEDS to acknowledge their mistakes (item 13, +1) and offer corrective feedback as a way for them to refine their teaching (item 30, +1). Unlike Facilitators in Factor 2, Evaluators do not interact with CEDS as colleagues or equals (item 20, -4), initiate conversations about students’ feelings (item 23, -3), or promote students’ confidence and comfort (item 44, -1) about teaching as a central part of mentorship. Instead, Evaluators come from a directive teaching perspective and place an emphasis on content-driven mentorship. Fittingly, Evaluators have high expectations of CEDS to learn and study critical components of teaching and guide students accordingly. Evaluators provide CEDS with readings on pedagogy (item 37, +2) and expose students to a range of learning and instructional theories (item 28, +2). Evaluators also place high value on CEDS taking a formal class on pedagogy (item 29, +2), distinguishing themselves from Supervisors and Facilitators, who rated teaching-related course work as less important.

Although Evaluators make students aware of boundaries and ethical concerns regarding teaching (item 43, +3) and help them understand the variables and actions linked to an improved learning environment (item 45, +1), other pragmatic aspects of teaching are given less attention. For example, Evaluators place minimal importance on rubric development and grading practices (item 12, 0) and course design (item 38, -2), and even less importance on developing a syllabus (item 41, -4) and incorporating technology or course management systems into the teaching process (item 14, -4). This is a stark difference from Supervisors in Factor 1, who placed higher value on some of these responsibilities. Supervisors emphasize skill development, whereas Evaluators stress creating a strong theoretical foundation to guide CEDS’ teaching tasks.

Classroom experiences, though secondary to learning theory and techniques, also are important aspects to mentorship for participants grouped in Factor 3. Evaluators supervise CEDS as they teach portions of courses (item 10, +1) or take on solo teaching opportunities (item 32, 0). In these circumstances, Evaluators hold CEDS to high levels of accountability in terms of their teaching and learning practices (item 15, +4), as opposed to their counterparts in Factors 1 and 2, who rate the importance of accountability more neutrally. One participant illustrates the importance of accountability: “I want doctoral students to know the how, what, and why of where they are going in the classroom, otherwise their students may end up somewhere else. Educators need to be responsible for accounting for students’ outcomes.” Offering feedback to improve teaching is a key aspect of the mentoring process for Evaluators as mentors and students evaluate these hands-on teaching experiences (item 30, +1). These experiences may be critical for Evaluators to assess CEDS’ learning and abilities, gradually exposing them to more challenging teaching roles (item 2, +3).

Throughout the mentorship process, Evaluators place CEDS’ learning and teaching practice at the center of interactions. Whereas Supervisors and Facilitators share their teaching experiences with CEDS, Evaluators avoid conversation about successes or mistakes in their own teacher development (item 31, -2). Furthermore, Evaluators do not believe that either their reputations as quality teachers (item 5, -1) or their modeling of best practices in teaching (item 9, -2) is relevant to CEDS’ development of teaching styles. Instead, Evaluators keep themselves in a distant position during the course of mentorship. Key teaching mentorship interactions are characterized as student-centered and include discussion of their unique teaching philosophies (item 11, +2), exploration of the intentionality behind the instructional decisions they make in classrooms (item 48, 0), and evaluation of their teaching effectiveness (item 21, +4). Consequently, the mentorship style of Evaluators is directive but student-focused, with emphasis on mentees learning and reflecting upon various pedagogical theories and practices as they develop into teachers.

Discussion

Three different perspectives (i.e., Supervisor, Facilitator, and Evaluator) exist among counselor educators regarding preferred ways to mentor CEDS in teaching. These three perspectives could be conceptualized as different styles of mentorship used by counselor educators. Although each perspective is unique, we noticed areas of agreement among counselor educators on using certain formal (e.g., co-teaching), informal (e.g., affirming worldviews), and combined mentoring approaches (Borders et al., 2011). These areas of agreement are similar to mentorship experiences in research with CEDS (Borders et al., 2012). The findings of this study also reinforce that mentoring is a complex process in which mentors fill a variety of roles and initiate multiple activities (Casto et al., 2005). Overall, the results lend support for practices of teaching mentorship that also are supported by the literature; for example, Silverman (2003) suggested that learning about pedagogy, having teaching experiences, and working closely with an experienced mentor who facilitates pedagogical conversations are helpful for preparing future faculty members. Though the pairing procedures between participants and students were unknown (e.g., intentionally paired, general guidance; Borders et al., 2011), each factor in this study contained some combination of formal (e.g., planned readings or activities) and informal (e.g., in-the-moment conversations, minimal planning) approaches to mentoring, which is consistent with other findings on preparing CEDS to teach (Baltrinic et al., 2016).

Both career and psychosocial mentoring types are embodied within the three factors reported in the current study, the findings of which support and extend the work of Kram (1985) by providing examples specific to teaching mentorship styles. The Evaluator and the Supervisor perspectives contain career components, as they are knowledge and skill driven, respectively. The Facilitator perspective is reflective of Kram’s psychosocial type, as it is the most relational, exploratory, and insight-oriented perspective of the three. Though career and psychosocial properties overlap between factors (e.g., skill building, feedback, support), each mentoring perspective has one type as its central characteristic.

The combination of career and psychosocial (Kram, 1985) mentoring types evident in the results also is highlighted in other counselor education mentorship guidelines. Similar to the Association for Counselor Education and Supervision research mentorship model (Borders et al., 2012), participants noted the importance of mentors demonstrating and transferring teaching-oriented knowledge and skills to mentees, as well as providing constructive feedback. Other mentor characteristics and tactics, such as facilitating student self-assessment and accountability, modeling, and creating a supportive and open relationship (Black et al., 2004; Briggs & Pehrsson, 2008), are reflected in the current findings on teacher mentoring approaches. For some participants, maintaining a nurturing and supportive environment was of utmost importance, which also has been noted as essential for mentoring CEDS (Casto et al., 2005).

Borders et al. (2011) specifically noted the importance of mentoring graduate students who aspire to be faculty and, though minimally, addressed pedagogy support by offering teaching opportunities to students and engaging them in conversation about their experiences. The current research findings expand on Borders and colleagues’ position by providing ideas on what these conversations might entail. All three factors identified teacher-related topics of conversation and relevant activities, including teaching philosophies, skills, and tasks; pedagogical and learning theories; monitoring student interactions; classroom ethics and boundaries; and self-efficacy associated with teacher development. This offers some unique ideas on topics of interest that may be incorporated into conversations when mentoring students in teaching.

A practical component to teaching mentorships is represented within the factors. Rather than culminating in a product, such as co-written publications developed in research mentoring (Briggs & Pehrsson, 2008), each of the three teaching mentorship factors guide CEDS through applied teaching experiences. These hands-on teaching opportunities provided experiences for CEDS to work through and reflect upon, and offered material for mentors to provide feedback. The extent of student involvement in teaching varied, as did the direction of conversations (e.g., corrective, exploratory); nevertheless, some mentoring tasks were built from observable and enacted teaching moments.

Implications for Counselor Education Programs and Counselor Educators

We believe that it may be helpful for faculty members in positions of leadership (i.e., department chairs, doctoral program coordinators) in counselor education doctoral programs to infuse awareness of teaching mentorship practices among other faculty members. Senior counselor education faculty members responsible for coordinating doctoral programs may be able to create more impactful mentorship experiences for CEDS by encouraging other faculty members to become more aware of their mentorship practices. Several researchers have suggested that quality mentorship is associated with counselor education faculty members who demonstrate intentionality in their mentorship practices (Black et al., 2004; Casto et al., 2005). Findings from this study can generate discussion and self-assessment among faculty members, leading to a clearer understanding of different mentoring styles that exist within a department or program. As different mentoring styles are identified among faculty members, it may help to consider ways to match CEDS with faculty members who will be a good fit for their preferred learning style.

Similarly, we believe that counselor educators mentoring CEDS in teaching can benefit from being reflective about their own style of mentorship. It may be helpful to consider one’s personal style of mentorship in relation to the styles of teaching mentorship (i.e., Supervisor, Facilitator, and Evaluator) highlighted in this study. Counselor educators who identify with a particular teaching mentorship style may discuss this with CEDS early in the mentorship process to facilitate a goodness of fit. In situations in which CEDS do not have the opportunity to select a mentor of their choosing, it may be particularly important for counselor educators to consider how their style of mentorship will fit with their mentee. It may help counselor educators identifying with a singular style of mentorship to integrate strengths from other styles of mentorship into their practice. For example, a counselor educator who closely identifies with the Supervisor style may benefit from increasing the amount of strengths-based feedback they provide mentees (i.e., associated with the Facilitator) or from being more methodical about gradually increasing their mentees’ exposure to challenging teaching experiences (i.e., associated with the Evaluator).

Limitations and Recommendations for Future Research

Q studies are not generalizable in the same way as other quantitative studies. The data in this study represent subjective perspectives; thus, the results are viewed similarly to those of qualitative studies (Watts & Stenner, 2012). However, Q results offer additional rigor derived from the factor analysis of the participants’ respective Q sorts. Results from this study pertain to mentoring CEDS in aspects of pedagogy and not clinical teaching or clinical experiences. Future Q methodology studies can use purposeful samples of diverse participants with a range of pedagogy and clinical teaching experiences, and can draw participants from a wider range of regions within the United States. Examining students’ and faculty members’ critical incidents during teaching mentorships may increase understanding of the respective mentor and mentee perspectives. Future studies distinguishing teaching mentorship from research mentorship would be useful. Finally, investigating the specific practices of the three factor types through single-case studies could provide in-depth perspectives on faculty members’ teaching mentorship styles.

 

Conflict of Interest and Funding Disclosure

The authors reported no conflict of interest or funding contributions for the development of this manuscript.

References

Austin, A. E. (2002). Preparing the next generation of faculty: Graduate school as socialization to the academic career. The Journal of Higher Education, 73, 94–122.

Baltrinic, E. R., Jencius, M., & McGlothlin, J. (2016). Co-teaching in counselor education: Preparing doctoral students for future teaching. Counselor Education and Supervision, 55, 31–45.

Black, L. L., Suarez, E. C., & Medina, S. (2004). Helping students help themselves: Strategies for successful mentoring relationships. Counselor Education and Supervision, 44, 44–55. doi:10.1002/j.1556-6978.2004.tb01859.x

Borders, D. L., Wester, K. L., Haag Granello, D., Chang, C. Y., Hays, D. G., Pepperell, J., & Spurgeon, S. L. (2012). Association for Counselor Education and Supervision guidelines for research mentorship: Development and implementation. Counselor Education and Supervision, 51, 162–175. doi:10.1002/j.1556-6978.2012.00012.x

Borders, D. L., Young, J. S., Wester, K. L., Murray, C. E., Villalba, J. A., Lewis, T. F., & Mobley, A. K. (2011). Mentoring promotion/tenure-seeking faculty: Principles of good practice within a counselor education program. Counselor Education and Supervision, 50, 171–188. doi:10.1002/j.1556-6978.2011.tb00118.x

Briggs, C. A., & Pehrsson, D.-E. (2008). Research mentorship in counselor education. Counselor Education and Supervision, 48, 101–113. doi:10.1002/j.1556-6978.2008.tb00066.x

Brown, S. R. (1980). Political subjectivity: Applications of Q methodology in political science. New Haven, CT: Yale University Press.

Casto, C., Caldwell, C., & Salazar, C. F. (2005). Creating mentoring relationships between female faculty and students in counselor education: Guidelines for potential mentees and mentors. Journal of Counseling & Development, 83, 331–336. doi:10.1002/j.1556-6678.2005.tb00351.x

Curtin, N., Malley, J., & Stewart, A. J. (2016). Mentoring the next generation of faculty: Supporting academic career aspirations among doctoral students. Research in Higher Education, 57, 714–738. doi:10.1007/s11162-015-9403-x

Davis, T. E., Levitt, D. H., McGlothlin, J. M., & Hill, N. R. (2006). Perceived expectations related to promotion and tenure: A national survey of CACREP program liaisons. Counselor Education and Supervision, 46, 146–156. doi:10.1002/j.1556-6978.2006.tb00019.x

Hall, L. A., & Burns, L. D. (2009). Identity development and mentoring in doctoral education. Harvard Educational Review, 79, 49–70. doi:10.17763/haer.79.1.wr25486891279345

Hall, S. F., & Hulse, D. (2010). Perceptions of doctoral level teaching preparation in counselor education. The Journal for Counselor Preparation and Supervision, 1, 2–15. doi:10.7729/12.0108

Jepsen, D. M., Varhegyi, M. M., & Edwards, D. (2012). Academics’ attitudes towards PhD students’ teaching: Preparing research higher degree students for an academic career. Journal of Higher Education Policy and Management, 34, 629–645.

Kram, K. E. (1985). Mentoring at work: Developmental relationships in organizational life. Glenview, IL: Foresman.

Lazovsky, R., & Shimoni, A. (2007). The on-site mentor of counseling interns: Perceptions of ideal role and actual role performance. Journal of Counseling & Development, 85, 303–316. doi:10.1002/j.1556-6678.2007.tb00479.x

Limberg, D., Bell, H., Super, J. T., Jacobson, L., Fox, J., DePue, K., . . . Lambie, G. W. (2013). Professional identity development of counselor education doctoral students: A qualitative investigation. The Professional Counselor, 3, 40–53. doi:10.15241/dll.3.1.40

Magnuson, S., Norem, K., & Lonneman-Doroff, T. (2009). The 2000 cohort of new assistant professors of counselor education: Reflecting at the culmination of six years. Counselor Education and Supervision, 49, 54–71. doi:10.1002/j.1556-6978.2009.tb00086.x

Magnuson, S., Shaw, H., Tubin, B., & Norem, K. (2004). Assistant professors of counselor education: First and second year experiences. Journal of Professional Counseling: Practice, Theory, & Research, 32, 3–18.

Orr, J. J., Hall, S. F., & Hulse-Killacky, D. (2008). A model for collaborative teaching teams in counselor education. Counselor Education and Supervision, 47, 146–163. doi:10.1002/j.1556-6978.2008.tb00046.x

Paige, J. B., & Morin, K. H. (2016). Q-sample construction: A critical step for Q-methodological study. Western Journal of Nursing Research, 38, 96–110. doi:10.1177/0193945914545177

Protivnak, J. J., & Foss, L. L. (2009). An exploration of themes that influence the counselor education doctoral student experience. Counselor Education and Supervision, 48, 239–256. doi:10.1002/j.1556-6978.2009.tb00078.x

Schmolck, P., & Atkinson, J. (2012). PQMethod (2.35). Retrieved from http://schmolck.userweb.mwn.de/qmethod/#PQMethod

Shemmings, D. (2006). ‘Quantifying’ qualitative data: An illustrative example of the use of Q methodology in psychosocial research. Qualitative Research in Psychology, 3(2), 147–165. doi:10.1191/1478088706qp060oa

Silverman, S. (2003). The role of teaching in the preparation of future faculty. Quest, 55, 72–81. doi:10.1080/00336297.2003.10491790

Stephenson, W. (1935). Technique of factor analysis. Nature, 136, 297. doi:10.1038/136297b0

Stephenson, W. (1978). Concourse theory of communication. Communication, 3, 21–40.

Utecht, R. L., & Tullous, R. (2009). Are we preparing doctoral students in the art of teaching? Research in Higher Education Journal, 4, 1–12.

Warnke, M. A., Bethany, R. L., & Hedstrom, S. M. (1999). Advising doctoral students seeking counselor education faculty positions. Counselor Education and Supervision, 38, 177–190. doi:10.1002/j.1556-6978.1999.tb00569.x

Watts, S., & Stenner, P. (2012). Doing Q methodological research: Theory, method and interpretation. London, England: Sage.

 

Eric R. Baltrinic is an assistant professor at Winona State University. Randall M. Moate is an assistant professor at the University of Texas at Tyler. Michelle Gimenez Hinkle is an assistant professor at William Paterson University. Marty Jencius is an associate professor at Kent State University. Jessica Z. Taylor is an assistant professor at Central Methodist University. Correspondence can be addressed to Eric Baltrinic, Gildemeister 116A, P.O. Box 5838, 175 West Mark Street, Winona, MN 55987-5838, ebaltrinic@gmail.com.

The Common Factors Discrimination Model: An Integrated Approach to Counselor Supervision

A. Elizabeth Crunk, Sejal M. Barden

Numerous models of clinical supervision have been developed; however, there is little empirical support indicating that any one model is superior. Therefore, common factors approaches to supervision integrate essential components that are shared among counseling and supervision models. The purpose of this paper is to present an innovative model of clinical supervision, the Common Factors Discrimination Model (CFDM), which integrates the common factors of counseling and supervision approaches with the specific factors of Bernard’s discrimination model for a structured approach to common factors supervision. Strategies and recommendations for implementing the CFDM in clinical supervision are discussed.

Keywords: supervision, common factors, specific factors, discrimination model, counselor education

Clinical supervision is a cornerstone of counselor training (Barnett, Erickson Cornish, Goodyear, & Lichtenberg, 2007) and serves the cardinal functions of providing support and instruction to supervisees while ensuring the welfare of clients and the counseling profession (Bernard & Goodyear, 2014). Numerous models of clinical supervision have been developed, varying in emphasis from models based on theories of psychotherapy, to those that focus on the developmental needs of the supervisee, to models that emphasize the process of supervision and the various roles of the supervisor (Bernard & Goodyear, 2014). However, despite the abundance of available supervision models, there is little evidence to support that any one approach is superior to another (Morgan & Sprenkle, 2007; Storm, Todd, Sprenkle, & Morgan, 2001). Thus, a growing body of clinical supervision literature underscores a need for strategies that integrate the most effective elements of supervision models into a parsimonious approach rather than emphasizing differences between models (Lampropoulos, 2002; Milne, Aylott, Fitzpatrick, & Ellis, 2008; Morgan & Sprenkle, 2007; Watkins, Budge, & Callahan, 2015). Common factors models of supervision bridge the various approaches to supervision by identifying the essential components that are shared across models, such as the supervisory relationship, the provision of feedback, and supervisee acquisition of new knowledge and skills (Milne et al., 2008; Morgan & Sprenkle, 2007). Other common factors approaches to supervision draw on psychotherapy outcome research, aiming to extrapolate common factors of counseling and psychotherapy—such as the therapeutic relationship and the instillation of hope—to clinical supervision approaches (Lampropoulos, 2002; Watkins et al., 2015).

Although reviews of the supervision literature allude to commonalities among supervision approaches (Bernard & Goodyear, 2014), there is a dearth of published literature offering practical strategies for bridging common factors of counseling and supervision. Perhaps even more limited is literature that addresses the necessary convergence of both common and specific factors, or the integration of common factors of supervision with particular interventions that are applied in various supervision approaches (e.g., role-playing or Socratic questioning; Watkins et al., 2015). In a recent article, Watkins and colleagues (2015) proposed a supervision model that extrapolates Wampold and Budge’s (2012) psychotherapy relationship model to specific factors of supervision, encouraging supervisors to apply such relationship common factors to some form of supervision. However, there remains a need for a structured approach to supervision that integrates the common factors of counseling and supervision with the specific factors of commonly used, empirically supported models of clinical supervision.

Because the common factors are, by definition, elements that are shared among theories of counseling and supervision, it can be argued that common factors approaches can be applied to almost any supervision model. However, we argue for the integration of common factors with the discrimination model for several reasons. First, the relationship has been found to be the essential common factor shared among counseling (Lambert & Barley, 2001; Norcross & Lambert, 2014) and supervision approaches, and is often cited as the most critical element of effective supervision and other change-inducing relationships, such as counseling, teaching and coaching (Lampropoulos, 2002; Ramos-Sánchez et al., 2002). The supervisory roles of teacher, counselor and consultant are built into the discrimination model, providing supervisors with natural avenues for fostering a strong supervisory relationship. However, the proposed Common Factors Discrimination Model (CFDM) expands on the discrimination model by providing specific recommendations for how supervisors might use such roles as opportunities for developing and maintaining the supervisory relationship. Second, we consider Bernard’s (1979, 1997) discrimination model to lend itself well to common factors approaches to supervision, as both are concerned with process aspects of supervision, such as tailoring supervision interventions to the needs of the supervisee. Finally, because the discrimination model is widely used by practicing supervisors (Timm, 2015), common factors approaches are likely to fit naturally with customary supervision practices of more experienced supervisors who espouse the discrimination model, yet the CFDM is concise enough for novice supervisors to grasp and apply. Thus, the purpose of this manuscript is to build on Watkins and colleagues’ (2015) model by presenting the CFDM, an innovative approach to supervision that converges common factors identified in both counseling and supervision and integrates them with the specific factors of Bernard’s (1979, 1997) discrimination model. Specifically, we will (a) review the relevant literature on common factors approaches to counseling and supervision and the discrimination model; (b) provide a rationale for a model of supervision that integrates the specific factors of the discrimination model with a common factors approach; and (c) offer strategies and recommendations for applying the CFDM in clinical supervision.

The Common Factors Approach

The notion of therapeutic common factors resulted from psychotherapy outcome research suggesting that psychotherapies yield equivalent outcomes when compared against each other and, thus, what makes psychotherapy effective is not the differences between therapies, but rather the commonalities among them (Lambert, 1986). Wampold’s (2001) landmark research revealed that the theoretical approach utilized by the therapist (e.g., psychodynamic therapy) explained less than 1% of therapy outcome. In light of these findings, researchers and clinicians have been urged to minimize the importance placed on specific clinical techniques and interventions; instead, an emphasis on the commonalities among therapies that are associated with positive outcomes (Norcross & Lambert, 2011), such as the therapeutic alliance, empathy, positive regard, and collaboration within the therapeutic relationship (Norcross & Lambert, 2014; Norcross & Wampold, 2011), is more useful for describing therapeutic changes.

Among the most influential common factors approaches is Lambert’s model of therapeutic factors (see Lambert & Barley, 2001, for a review). Although lacking in stringent meta-analytic or statistical methods, Lambert and Barley (2001) presented four primary factors that are shared among therapeutic approaches (with the percentage that each factor contributes to therapy outcome indicated): (a) extratherapeutic factors (i.e., factors associated with the client, as well as his or her environment; 40%); (b) common factors (i.e., relationship factors such as empathy, warmth, positive regard, supporting the client in taking risks; 30%); (c) placebo, hope, and expectancy factors (i.e., the client’s hope and expectancy for improvement, as well as trust in the treatment; 15%); and (d) skills/techniques factors (i.e., components specific to various therapies, such as empty chair or relaxation techniques; 15%). Although a variety of common factors have been identified in the psychotherapy outcome research, numerous meta-analyses have identified the therapeutic relationship as the sine qua non (Norcross & Lambert, 2011, p. 12) of common factors that account for positive outcomes irrespective of the specific treatment utilized (Norcross & Wampold, 2011). They stated: “although we deplore the mindless dichotomy between relationship and method in psychotherapy, we also need to publicly proclaim what decades of research have discovered and what tens of thousands of relational therapists have witnessed: The relationship can heal” (Norcross & Lambert, 2014, p. 400).

Although the common factors are necessary for producing positive counseling outcomes, this does not mean that specific factors are irrelevant (Norcross & Lambert, 2011). On the contrary, prior research indicates that engaging in specific treatment interventions is associated with the working alliance and with positive counseling outcomes (Tryon & Winograd, 2011; Wampold & Budge, 2012). Watkins and colleagues (2015) noted that treatment interventions are necessary in maintaining client hope and expectations for positive counseling outcomes, stating, “The specific ingredients create benefits through the common factor of expectations, and respecting that interdependent common/specific factor dynamic is vital to treatment outcome” (p. 221).

Common Factors Approaches to Supervision

Although the concept of common factors in counseling and psychotherapy is not a new one and has been the focus of considerable empirical research (Frank, 1982; Lambert & Barley, 2001; Lambert & Ogles, 2004; Rosenzweig, 1936), applying the common factors approach to clinical supervision is relatively novel (Morgan & Sprenkle, 2007). Counseling and clinical supervision are distinct interventions; however, Milne (2006) makes a case for extrapolating findings from psychotherapy research to supervision, as both share common structures and properties of education, skill development, problem-solving and the working alliance. Furthermore, Bernard and Goodyear (2014) noted, “because therapy and supervision are so closely linked, developments in psychotherapy theory inevitably will affect supervision models” (p. 59).

Despite frequent reference to the similarities among supervision models, literature that specifically addresses common factors of supervision approaches is scarce (Bernard & Goodyear, 2014). In our review of the supervision literature, we identified five articles that endorsed common factors approaches to supervision and counselor training (Castonguay, 2000; Lampropoulos, 2002; Milne et al., 2008; Morgan & Sprenkle, 2007; Watkins et al., 2015). Following Castonguay’s (2000) seminal work on training in psychotherapy integration, Lampropoulos (2002) was among the first to address the parallels that exist between common factors of both counseling and supervision, advocating for a theoretically eclectic approach to supervision and for the prescriptive matching of common factors to supervisee needs. For example, Lampropoulos (2002) suggested that supervisors might integrate psychodynamic theory as a means of increasing supervisees’ awareness of countertransference and attachment patterns, or cognitive theory in order to restructure supervisees’ unhelpful thoughts about counseling and supervision.

In contrast to Lampropoulos’s (2002) model, which extrapolates common factors of counseling to supervision, Morgan and Sprenkle (2007) and Milne and colleagues (2008) endorsed approaches that bridge similarities between supervision models. Morgan and Sprenkle (2007) identified a number of common factors among models of supervision, grouping these factors into the following three dimensions falling on their respective continua: (a) emphasis, ranging from specific clinical competence to general professional competence; (b) specificity, ranging from the idiosyncratic needs of supervisees and clients to the general needs of the profession as a whole; and (c) supervisory relationship, ranging from collaborative to directive. The authors (Morgan & Sprenkle, 2007) then proposed a model of supervision that applies these three dimensions of supervision to the supervisor roles of coach, teacher, mentor and administrator. In contrast, Milne and colleagues (2008) conducted a best evidence synthesis of the supervision literature to summarize the current state of empirical research on supervision practices and applied their findings to a basic model of supervision. Although both models (Milne et al., 2008; Morgan & Sprenkle, 2007) contributed viable descriptive models of common factors approaches to supervision, they were limited in providing specific strategies for supervisors to employ in a given situation. Furthermore, neither model specifically addressed the intersection of common factors of counseling and common factors of supervision. Thus, noting that common factors of counseling and specific factors of supervision approaches are interdependently related, Watkins and colleagues (2015) proposed a common/specific factors model, designating the supervisory relationship as the crowning common factor and encouraging supervisors to apply this relationship-centered model to the specific factors of “some form of supervision” (Watkins et al., 2015, p. 226). Following Watkins and colleagues’ recommendations, we therefore present an integrated approach to supervision by applying the common factors of counseling and supervision to the specific factors of the discrimination model.

 The Discrimination Model

The discrimination model (Bernard, 1979, 1997) provides a conceptualization of clinical supervision as both an educational and a relationship process (Bernard & Goodyear, 2014; Borders & Brown, 2005). In essence, the discrimination model involves the dual functions of assessing the supervisee’s skills and choosing a supervisor role for addressing the supervisee’s needs and goals. The supervisee is assessed on three skill areas, or foci: (a) intervention (observable behaviors that the supervisee demonstrates in session, such as demonstration of skills and interventions); (b) conceptualization (cognitive processes, such as the supervisee’s ability to recognize the client’s themes and patterns, as well as the supervisee’s level of understanding of what is taking place in session); and (c) personalization (supervisee self-awareness and ability to adapt his or her own personal style of counseling while maintaining awareness of personal issues and countertransference). Furthermore, over 30 years ago, Lanning (1986) proposed the addition of assessing the supervisee’s professional behaviors, such as how the supervisee approaches legal and ethical issues.

When the supervisor has assessed the supervisee’s skill level in each of the three foci, the supervisor utilizing the discrimination model assumes the appropriate role for addressing the supervisee’s needs and goals: (a) teacher (assumed when the supervisor perceives that the supervisee requires instruction or direct feedback); (b) counselor (appropriate for when the supervisor aims to increase supervisee reflectivity, or to process the supervisee’s internal reality and experiences related to his or her professional development or work as a counselor); or (c) consultant (a more collaborative role that is assumed when the supervisor deems it appropriate for the supervisee to think and act more independently, or when the supervisor aims to encourage the supervisee to trust his or her own insights). It is important to note that the supervisor does not take on the singular form of any of the three roles, but rather makes use of the knowledge and skills that are characteristic of each role (Borders & Brown, 2005). The discrimination model is situation-specific; therefore, supervisor roles and foci of assessment might change within a supervision session and across sessions. Consequently, supervisors are advised to remain attuned to the supervisee’s needs in order to attend to his or her most pressing focus area and to assume the most suitable role for addressing these needs rather than displaying strict adherence to a preferred focus or role (Bernard & Goodyear, 2014).

The discrimination model is considered to be an accessible, empirically validated model for supervisors and can be adapted in complexity depending on the supervisor’s level of readiness (Bernard & Goodyear, 2014; Borders & Brown, 2005). Using multidimensional scaling in an empirical study of the discrimination model, Ellis and Dell (1986) provided validation for both the teacher and counselor roles, although the consultant role did not emerge as a distinct role. Their findings are consistent with other studies that provided support for the teacher and counselor roles, but not for the consultant role (Glidden & Tracey, 1992; Goodyear, Abadie, & Efros, 1984; Stenack & Dye, 1982). Thus, the consultant role might be more difficult to distinguish from the teaching and counseling roles, perhaps, as Bernard and Goodyear (2014) noted, because the consultant role requires supervisors to put aside their position of expert or therapist and act more collaboratively with their supervisees. Ellis and Dell provided an alternate (and conflicting) explanation, suggesting that consultation might be an underlying component of both the teaching and counseling roles. These findings indicate a need for future research and possible modification of the discrimination model; however, the discrimination model is generally supported by empirical research.

Rationale for an Integrated Model

Watkins and colleagues (2015) stated: “Akin to the ‘great psychotherapy debate’ about effectiveness (Wampold, 2001), a ‘great psychotherapy supervision debate’ about effectiveness is eminently likely” (p. 17). Several cross-cutting models of clinical supervision have been proposed (Milne et al., 2008; Morgan & Sprenkle, 2007), as well as models that extrapolate common factors of counseling to supervision practices (Lampropoulos, 2002; Watkins et al., 2015); however, there has yet to be a model that systematically converges both. Given the abundance of empirical support for common factors in counseling, we have conceptualized a new model, the CFDM, to integrate a supervision approach that is grounded in effective counseling and supervision practices. Furthermore, Watkins and colleagues encouraged supervisors to apply common factors of counseling to the specific factors of some form of supervision; however, to our knowledge, no such model integrating common factors with the specific factors of an empirically supported model of supervision has been published. Thus, the CFDM combines essential factors of supervision models, converges them with common factors of counseling approaches, and applies them to the specific factors of Bernard’s (1979, 1997) discrimination model for a structured approach that bridges effective elements of both counseling and supervision.

Bernard and Goodyear (2014) pointed to the supervisory relationship as one of the most essential factors in supervision; however, a major criticism of the discrimination model is that the model itself does not thoroughly address the supervisory relationship (Beinart, 2004). Similarly, Freeman and McHenry (1996) found that supervisors ranked the development of clinical skills as their top goal for supervising counselors-in-training and identified that supervision involves taking on the roles of teacher, challenger and supporter, but relationship building did not surface as an emphasis of counselor supervision (Bell, Hagedorn, & Robinson, 2016). Thus, the CFDM builds on the discrimination model by incorporating tenets of the supervisory relationship that are consistent with common factors of counseling and supervision, such as the working alliance (Bordin, 1983), the real relationship (Watkins, 2015), and the instillation of hope (Lambert & Barley, 2001; Lampropoulos, 2002). Historically, the supervision literature suggests that novice supervisors, in particular, might manage feelings of self-doubt and uncertainty by employing a highly structured supervision style, focusing on providing supervisees with feedback on counseling techniques or client diagnosis and placing less emphasis on attending to the supervisory relationship (Hess, 1986; Hess & Hess, 1983). Furthermore, whereas building rapport is a top priority in many therapeutic relationships, counselor supervisors might prioritize other factors instead, such as scheduling, paperwork, and evaluation, before establishing a relationship with the supervisee (Bell et al., 2016). Because the discrimination model is a widely used approach to supervision (Timm, 2015), experienced counselors who wish to incorporate common factors of supervision and counseling into their customary supervision practice will likely find the CFDM to be an intuitive supervision approach. The following section provides a description of the four primary tenets of the CFDM, as well as strategies and recommendations for applying the CFDM in supervision.

The Common Factors Discrimination Model

The CFDM is an innovative model of supervision that aims to integrate the common factors of counseling and supervision with the specific factors of Bernard’s (1979, 1997) discrimination model for a structured, relationship-centered approach to clinical supervision. The CFDM builds on existing supervision models that extrapolate common factors of counseling to supervision practices (Lampropoulos, 2002; Watkins et al., 2015). The CFDM also draws on the discrimination model (Bernard, 1979, 1997) as a method of assessing supervisee needs and tailoring feedback and support accordingly. Although the melding of common factors with the discrimination model has yet to be empirically tested as an integrated approach to supervision, both approaches have received substantial empirical support as standalone models. Empirical research supports common factors approaches to counseling and other change-inducing relationships; however, the CFDM’s underpinnings in the more prescriptive discrimination model provide a structured approach to common factors supervision. In addition, there is evidence to suggest the effectiveness of common factors approaches across cultures (Dewell & Owen, 2015).

We have proposed a model that combines effective common factors of counseling and supervision with the specific factors of Bernard’s (1979, 1997) widely used, empirically supported and accessible discrimination model for a structured approach to common factors supervision. The primary tenets of the CFDM were derived by reviewing the literature on common factors models of supervision and purposively selecting the most common elements, including: (a) development and maintenance of a strong supervisory relationship, (b) supervisee acquisition of new knowledge and skills, (c) supervisee self-awareness and self-reflection, and (d) assessment of supervisees’ needs and the provision of feedback based on the tenets of Bernard’s (1979, 1997) discrimination model. The following section provides a brief fictional case illustration followed by specific strategies for applying the CFDM to supervision. Specific examples of matching common factors with tenets of the discrimination model, based on the illustrative case, are provided in Table 1, followed by a discussion that applies the primary tenets of the CFDM to the case.

 

Case Illustration

André, a master’s student in mental health counseling, is completing his first semester of clinical practicum at his university’s community counseling center. Although André demonstrates competency across many clinical and professional domains, as a novice counselor trainee he struggles with reflecting feeling with clients in session. His supervisor has noticed that André tends to sidestep emotional topics in session and, instead of reflecting feeling, responds to emotional content by asking the client unrelated questions or by changing the subject. In the few instances in which he has attempted to reflect feeling, André has been inaccurate in his reflections, undershooting the intensity of the client’s feelings or misreading the client’s emotions altogether. This has sometimes led to tension and frustration between André and his clients. Using the CFDM, his supervisor might utilize the following strategies in supervision with André. In the following section, the case of André is discussed, integrating the primary tenets of the CFDM.

 

Application of the CFDM

The Supervisory Relationship

Bernard and Goodyear (2014) suggested that the supervisory relationship is a critical factor in effective supervision, regardless of the model of supervision that is followed. Thus, the central tenet of the CFDM is the development of a collaborative supervisory relationship that is characterized by the Rogerian conditions of empathy, genuineness, and unconditional positive regard (Lampropoulos, 2002). Utilizing the CFDM with André, the supervisor approaches her supervisory roles of teacher, counselor and consultant with warmth and acceptance as she addresses André’s difficulty reflecting feeling with his client, rather than using a confrontational or critical approach. Furthermore, she explores with André his personal experiences with emotion, taking into consideration his background and cultural factors that could play a role in his relationship with emotion.

The real relationship. The real relationship (Lampropoulos, 2002; Watkins, 2015) refers to a supervisory relationship that is unaltered by transference or countertransference and is characterized by empathy, warmth, genuineness, unconditional positive regard and trust. The expression of humor and optimism also is recommended in developing a common factors-influenced supervisory relationship. Extrapolating from Gelso’s (2014) tripartite model of the psychotherapy relationship, Watkins (2015) defined the real relationship as “the personal relationship between supervisor and supervisee marked by the extent to which each is genuine with the other and perceives/experiences the other in ways that befit the other” (p. 146). Factors of the real relationship are critical in supervision, as they allow supervisees to develop trust in the supervisory relationship and provide safety for supervisees to disclose vulnerabilities, mistakes and personal concerns (Storm et al., 2001).

Because the evaluative and hierarchical nature of supervision might make the supervisory relationship vulnerable to supervisory ruptures (Burke, Goodyear, & Guzzardo, 1998; Nelson & Friedlander, 2001; Safran, Muran, Stevens, & Rothman, 2007), the CFDM utilizes a collaborative evaluation process (Rønnestad & Skovholt, 1993), in which supervisees have the opportunity to practice evaluating their skills independently throughout their training either by journaling or by completing an evaluation form about their session and submitting their self-evaluation to their supervisor. Supervisee self-evaluations are then processed in supervision. The CFDM supervisor in the case illustration might use this strategy with André to allow him to raise self-awareness and to receive regular feedback on his skills. Furthermore, assuming the teacher role of the discrimination model, his supervisor might direct André to conduct a self-assessment of his reflections of feeling following each session, which he could bring into supervision to discuss and receive her feedback.

Because the supervisory relationship is the central tenet of the CFDM, it is advisable to evaluate and monitor the relationship throughout supervision. Furthermore, Lampropoulos (2002) recommended that supervisors identify and attempt to repair ruptures as soon as possible, as ruptures can be deleterious to supervision process and outcome. One such measure for evaluation of the supervisory relationship is the Supervisory Relationship Questionnaire (SRQ; Palomo, Beinart, & Cooper, 2010), a 67-item assessment of the supervisee’s perceptions of the supervisory relationship. Other plausible measures include the Working Alliance Inventory (Bahrick, 1990) and the Revised Relationship Inventory (Schacht, Howe, & Berman, 1988). Allowing André to assess the supervisory relationship and give his supervisor feedback can provide insight into André’s perception of their relationship and can allow the supervisor to consider making changes in her approach, if necessary. This also conveys to André that his feedback is valuable and that their supervisory relationship is collaborative.

The working alliance. The working alliance in supervision refers to the collaborative development of goals and tasks for supervision (Bordin, 1983; Constantino, Castonguay, & Schut, 2002; Lampropoulos, 2002). The working alliance is established in the CFDM by collaboratively developing a supervision contract between the supervisor and the supervisee (Lampropoulos, 2002) at the very beginning of the supervisory relationship. Goals for supervision that are addressed in the contract include evaluating supervisees’ strengths and areas for growth and identifying specific skills to be learned, as well as issues related to supervisee theoretical orientation. The tasks used to reach these goals can include process notes, live supervision, and interpersonal process recall (IPR; Kagan & Kagan, 1997) as collaborative approaches to processing André’s strengths and areas for growth and to facilitating André’s self-reflection and self-awareness. The purpose of these tasks is to provide structure and opportunities for instruction, feedback, and evaluation, while allowing the supervisee to engage in self-evaluation, application of new skills, corrective action, and exploration of alternative approaches. The CFDM draws from the discrimination model when developing the contract as a means of evaluating the supervisee’s three foci (i.e., intervention, conceptualization and personalization). For example, when developing the supervision contract with André, the supervisor would consider André’s current level of competency with regard to techniques and clinical skills, case conceptualization skills, and self-awareness and personal style.

Instillation of hope and the creation of expectations. Frank and Frank (1991) noted the impact of positive expectations and hope in effecting change in counseling. Placebo, hope and expectancy factors emerged as a single common factor among most counseling approaches, with Lambert and Barley (2001) noting that instillation of hope accounts for 15% of client outcome. Watkins (1996) addressed the issue of demoralization in supervision, stating that beginning counselors can experience poor self-efficacy and might feel overwhelmed as they navigate their professional identity development. Watkins (1996) stated that supervisors are able to utilize the supervisory relationship as a means of encouraging supervisees and providing structure within the relationship to foster hope. Recently, Watkins and colleagues (2015) endorsed the creation of expectations and the provision of some method of supervision as a pathway by which supervisee change occurs. CFDM supervisors can incorporate hope and expectancy into supervision by using the consultant role of the discrimination model to explain to supervisees the process of supervision, and by collaborating with supervisees to provide supervision that builds on those expectations. Practical tools that André’s supervisor might implement to promote hope and positive expectations include developing a supervision contract with André or providing him with a professional disclosure statement in order to explain the process of supervision and to set supervisory rituals in motion (Watkins et al., 2015). Lampropoulos (2002) also suggested setting short- and long-term goals with supervisees as a means of instilling hope.

Supervisee Self-Awareness and Self-Reflection

An additional tenet of the CFDM is supervisee self-reflection concerning issues that influence professional development (Lampropoulos, 2002). CFDM supervision emphasizes the importance of encouraging supervisees to explore their strengths and areas for growth, and personal issues that might affect their work in counseling, as well as their therapeutic styles (Lampropoulos, 2002; Milne et al., 2008). The CFDM attempts to facilitate supervisee self-reflection by implementing strategies such as collaborative evaluation and the supervision contract (discussed above). Furthermore, the CFDM utilizes IPR (Kagan & Kagan, 1997), in which the supervisor and supervisee watch videotape of a supervisee’s counseling session together, pausing the tape at moments that either the supervisor or supervisee deems critical for further inquiry and processing. Taking on the role of counselor, André’s supervisor might utilize IPR to explore what André was experiencing during a given moment of the counseling session that may have prevented him from reflecting feeling. Consistent with the common factors model, the supervisor would confront André with warmth, empathy and acceptance.

Acquisition of Knowledge and Skills

According to the discrimination model (Bernard, 1979, 1997), one of the primary roles of the supervisor is that of teacher. Thus, in addition to providing support and feedback, supervisors are in a position to impart knowledge and to facilitate supervisees’ acquisition of skills—a factor of supervision that surfaces in the majority of supervision models (Milne et al., 2008; Morgan & Sprenkle, 2007). Lampropoulos (2002) stated that supervisees might learn through direct instruction, through shaping (i.e., gradual learning of a desired behavior) and through their own personal experience. In addition, supervisees have opportunities to learn by imitating the behaviors of their supervisors and other counselors (Lampropoulos, 2002). Given that skills and techniques factors account for 15% of counseling outcome (Lambert & Barley, 2001), supervisors are in a position to model skills and techniques of counseling in supervision as a means of fostering supervisee learning and skill acquisition. Integrating common factors with the discrimination model, André’s supervisor might take on the role of teacher to watch a video clip with André of a recent counseling session in which André struggled to reflect feeling, directing him to role-play with his supervisor other ways that he could respond to his client when emotional content is disclosed. André’s supervisor also could provide him with a list of “feeling words” or other relevant resources in order to help him to increase his awareness of emotion and to broaden his feelings vocabulary.

Assessment of Supervisee Needs and the Provision of Feedback

A final tenet of the CFDM is assessment of supervisee needs and the provision of feedback utilizing the roles and foci presented in the discrimination model. Using the CFDM, the supervisor would implement tailoring (also referred to in the counseling literature as prescriptive matching)—or adapting supervision to fit the characteristics, worldviews and preferences of the supervisee—as would be done with clients in common factors approaches to counseling (Norcross & Halgin, 1997). In their review of the literature on clinical supervision, Goodyear and Bernard (1998) identified attending to supervisees’ individual differences as an essential component of effective supervision. Furthermore, tailoring is inherent in the discrimination model, which recommends matching the supervisor’s role to supervisee needs (Bernard, 1979, 1997). As a beginning clinician, André might express a greater need for structured, directive supervision compared to more experienced supervisees (Stoltenberg, McNeill, & Crethar, 1994). Because André self-disclosed his perception of emotion and how this relates to his identity as a male, his supervisor should include this in her conceptualization of André and how he approaches work with clients. Furthermore, this is a value that she might continue exploring with André in future supervision sessions if it could have an impact on his clinical work with clients. Multiple supervision models have recommended matching supervision to the supervisee’s therapeutic approach and cognitive and learning styles (e.g., level of cognitive complexity; Loganbill, Hardy, & Delworth, 1982; Stoltenberg, 1981), and Norcross and Halgin (1997) suggested beginning the supervisory relationship with a needs assessment to determine the supervisee’s unique needs, goals and preferences for supervision. Although tailoring can pose unique challenges for supervisors providing triadic or group supervision, individual differences such as supervisees’ level of experience, learning goals, gender and ethnicity can be taken into account in these formats.

Table 1

CFDM: Examples of DM Focus and Role Intersections and Common Factors Strategies (CFS)

Intervention focus

Teacher role: André reports that he is uncertain of how to perform a lethality assessment. Common factors strategy: Supervisor teaches André the necessary steps of assessing for lethality; the dyad then engages in a role play in which the supervisee tests his new knowledge by performing a lethality assessment with the supervisor acting as the client. (Acquisition of New Knowledge and Skills)

Counselor role: André struggles to reflect feeling and meaning with clients. Common factors strategy: Supervisor asks André to reflect on the fact that he demonstrates empathy toward his clients while in supervision but struggles to show empathy by reflecting feeling and meaning in session. (Self-Exploration, Awareness, and Insight)

Consultant role: André is interested in using children’s books in session with elementary-aged children. Common factors strategy: Supervisor provides André with resources for using bibliotherapy in child counseling and offers to help the supervisee brainstorm methods for utilizing this intervention in counseling. (Acquisition of Knowledge and Skills)

Conceptualization focus

Teacher role: André struggles to provide a client with an accurate diagnosis. Common factors strategy: Supervisor and André practice diagnosing fictional clients using case studies from a DSM-5 casebook. Supervisor then assigns André homework to complete a few case studies independently, and the two review and discuss André’s answers collaboratively during the following supervision session. (Acquisition of Knowledge and Skills)

Counselor role: André perceives himself as being an ineffective counselor because he has difficulty choosing interventions in session. Common factors strategy: Supervisor reflects the supervisee’s feelings of inadequacy, offers encouragement, and normalizes the developmental challenges of supervisees. (Supervisory Relationship – Instillation of Hope and Raising of Expectations)

Consultant role: André requests more information on client stages of change. Common factors strategy: Supervisor assists the supervisee with locating information on client stages of change and discusses with the supervisee the idea of conceptualizing the client’s progress in counseling within the context of the client’s stage of change. (Acquisition of Knowledge and Skills)

Personalization focus

Teacher role: André exhibits behaviors that resemble racial microaggressions. Common factors strategy: Supervisor reviews videotape of a session with André and identifies an instance in which he exhibits a microaggression toward a client. Supervisor gives André feedback on microaggressions and encourages André to engage in self-reflection on personal biases. (Provision of Feedback)

Counselor role: André’s performance anxiety causes him to appear distracted in session. Common factors strategy: Supervisor reflects André’s feelings of anxiety and asks André to reflect on how his anxiety may be affecting his work with clients. (Supervisory Relationship – The Real Relationship)

Consultant role: André shares that a client reminds him of his deceased mother. Common factors strategy: Supervisor offers to help André process countertransference and communicates to André that he has handled the situation ethically and professionally by sharing with his supervisor his feelings of countertransference toward his client. (Supervisory Relationship and Provision of Feedback)

Practical Challenges and Limitations

Utilization of the CFDM might pose challenges that warrant discussion. For example, the CFDM might intensify the parallel process due to its similarities to the structures and processes of counseling. Moreover, the CFDM’s parallels to counseling might blur the lines between supervision and counseling, making it important for supervisors to clearly delineate the role and functions of supervision. Thus, the CFDM endorses utilizing the Rogerian condition of genuineness to facilitate an open, collaborative discussion between the supervisor and supervisee when potentially problematic issues of parallel processing arise in supervision. Furthermore, the CFDM might be vulnerable to challenges involving dual relationships, as the various discrimination model roles that the supervisor might assume could blur the lines between the supervisory relationship and other relationships that the supervisor might have with the supervisee, such as that of instructor. Therefore, supervisors utilizing the CFDM are encouraged to have an open discussion with supervisees from the beginning of supervision concerning the purposes, limitations and boundaries of the supervisory relationship. Such conversations can be facilitated with the use of a professional disclosure statement that outlines the supervisor’s roles (Blackwell, Strohmer, Belcas, & Burton, 2002; Cobia & Boes, 2000).

Because the supervisory relationship is the central tenet of the CFDM, a potential challenge inherent in the model is addressing weaknesses and ruptures in that relationship. The CFDM might also be challenging for supervisors or supervisees who struggle to establish strong supervisory and therapeutic relationships. Supervisees who demonstrate limited ability to establish a strong therapeutic relationship might benefit from direct instruction on behavioral skills that facilitate the therapeutic relationship, such as reflections of feeling and meaning. Lampropoulos (2002) recommended that gatekeeping measures be implemented for students who consistently demonstrate deficiency in establishing a strong therapeutic relationship with clients. Finally, outcome research is needed to examine the validity of applying common factors principles of psychotherapy to clinical supervision, as well as the empirical merit of an integrated common factors and discrimination model of supervision.

Conclusion

The supervision literature abounds with approaches for supervising counselors; however, there is little evidence that any one approach outperforms another. Common factors approaches to counseling and supervision draw on the components that are shared among models for a parsimonious approach that emphasizes the factors essential to producing positive counseling and supervision outcomes. However, although such factors are necessary, they are not sufficient for yielding positive change. Therefore, Watkins and colleagues (2015) noted the necessity of applying the specific factors of some form of supervision to a common factors approach. We have responded to this call by presenting the CFDM, which integrates the specific factors of Bernard’s (1979, 1997) discrimination model with the most common elements of counseling and supervision approaches: (a) the supervisory relationship, (b) supervisee acquisition of new knowledge and skills, (c) supervisee self-awareness and self-reflection, and (d) assessment of supervisees’ needs and the delivery of feedback according to the tenets of the discrimination model.

Conflict of Interest and Funding Disclosure

The authors reported no conflict of interest or funding contributions for the development of this manuscript.

References

Bahrick, A. S. (1990). Role induction for counselor trainees: Effects on the supervisory working alliance. Dissertation Abstracts International, 51, 1484B.

Barnett, J. E., Erickson Cornish, J. A., Goodyear, R. K., & Lichtenberg, J. W. (2007). Commentaries on the ethical and effective practice of clinical supervision. Professional Psychology: Research and Practice, 38, 268–275. doi:10.1037/0735-7028.38.3.268

Beinart, H. (2004). Models of supervision and the supervisory relationship and their evidence base. In I. Fleming & L. Steen (Eds.), Supervision and clinical psychology: Theory, practice, and perspectives (pp. 36–50). New York, NY: Brunner-Routledge.

Bell, H., Hagedorn, W. B., & Robinson, E. H. M. (2016). An exploration of supervisory and therapeutic relationships and client outcomes. Counselor Education and Supervision, 55, 182–197. doi:10.1002/ceas.12044

Bernard, J. M. (1979). Supervisor training: A discrimination model. Counselor Education and Supervision, 19, 60–68. doi:10.1002/j.1556-6978.1979.tb00906.x

Bernard, J. M. (1997). The discrimination model. In C. E. Watkins, Jr. (Ed.), Handbook of psychotherapy supervision (pp. 310–327). New York, NY: Wiley.

Bernard, J. M., & Goodyear, R. K. (2014). Fundamentals of clinical supervision (5th ed.). Upper Saddle River, NJ: Pearson Education.

Blackwell, T. L., Strohmer, D. C., Belcas, E. M., & Burton, K. A. (2002). Ethics in rehabilitation counselor supervision. Rehabilitation Counseling Bulletin, 45, 240–247. doi:10.1177/00343552020450040701

Borders, L. D., & Brown, L. L. (2005). The new handbook of counseling supervision. New York, NY: Routledge.

Bordin, E. S. (1983). A working alliance based model of supervision. The Counseling Psychologist, 11, 35–42. doi:10.1177/0011000083111007

Burke, W. R., Goodyear, R. K., & Guzzardo, C. R. (1998). Weakenings and repairs in supervisory alliances: A multiple-case study. American Journal of Psychotherapy, 52, 450–462.

Castonguay, L. G. (2000). A common factors approach to psychotherapy training. Journal of Psychotherapy Integration, 10, 263–282. doi:10.1023/A:1009496929012

Cobia, D. C., & Boes, S. R. (2000). Professional disclosure statements and formal plans for supervision: Two strategies for minimizing the risk of ethical conflicts in post-master’s supervision. Journal of Counseling & Development, 78, 293–296. doi:10.1002/j.1556-6676.2000.tb01910.x

Constantino, M. J., Castonguay, L. G., & Schut, A. J. (2002). The working alliance: A flagship for the “scientist-practitioner” model in psychotherapy. In G. S. Tryon (Ed.), Counseling based on process research: Applying what we know (pp. 81–131). Boston, MA: Allyn & Bacon.

Dewell, J. A., & Owen, J. (2015). Addressing mental health disparities with Asian American clients: Examining the generalizability of the common factors model. Journal of Counseling & Development, 93, 80–87.
doi:10.1002/j.1556-6676.2015.00183.x

Ellis, M. V., & Dell, D. M. (1986). Dimensionality of supervisor roles: Supervisors’ perceptions of supervision. Journal of Counseling Psychology, 33, 282–291. doi:10.1037/0022-0167.33.3.282

Frank, J. D. (1982). Therapeutic components shared by all psychotherapies. In J. H. Harvey & M. M. Parks (Eds.), Psychotherapy research and behavior change: 1981 Master Lecture Series. Washington, DC: American Psychological Association.

Frank, J. D., & Frank, J. B. (1991). Persuasion and healing: A comparative study of psychotherapy (3rd ed.). Baltimore, MD: Johns Hopkins University Press.

Freeman, B., & McHenry, S. (1996). Clinical supervision of counselors-in-training: A nationwide survey of ideal delivery, goals, and theoretical influences. Counselor Education and Supervision, 36, 144–158. doi:10.1002/j.1556-6978.1996.tb00382.x

Gelso, C. (2014). A tripartite model of the therapeutic relationship: Theory, research, and practice. Psychotherapy Research, 24, 117–131. doi:10.1080/10503307.2013.845920

Glidden, C. E., & Tracey, T. J. (1992). A multidimensional scaling analysis of supervisory dimensions and their perceived relevance across trainee experience levels. Professional Psychology: Research and Practice, 23, 151–157. doi:10.1037/0735-7028.23.2.151

Goodyear, R. K., Abadie, P. D., & Efros, F. (1984). Supervisory theory into practice: Differential perceptions of supervision by Ekstein, Ellis, Polster, and Rogers. Journal of Counseling Psychology, 31, 228–237. doi:10.1037/0022-0167.31.2.228

Goodyear, R. K., & Bernard, J. M. (1998). Clinical supervision: Lessons from the literature. Counselor Education and Supervision, 38, 6–22. doi:10.1002/j.1556-6978.1998.tb00553.x

Hess, A. K. (1986). Growth in supervision: Stages of supervisee and supervisor development. The Clinical Supervisor, 4, 51–68. doi:10.1300/J001v04n01_04

Hess, A. K., & Hess, K. A. (1983). Psychotherapy supervision: A survey of internship training practices. Professional Psychology: Research and Practice, 14, 504–513. doi:10.1037/0735-7028.14.4.504

Kagan, H. K., & Kagan, N. I. (1997). Interpersonal process recall: Influencing human interaction. In C. E. Watkins, Jr. (Ed.), Handbook of psychotherapy supervision (pp. 296–309). New York, NY: Wiley.

Lambert, M. J. (1986). Implications of psychotherapy outcome research for eclectic psychotherapy. In J. C. Norcross (Ed.), Handbook of eclectic psychotherapy (pp. 436–462). New York, NY: Brunner-Mazel.

Lambert, M. J., & Barley, D. E. (2001). Research summary on the therapeutic relationship and psychotherapy outcome. Psychotherapy, 38, 357–361.

Lambert, M. J., & Ogles, B. M. (2004). The efficacy and effectiveness of psychotherapy. In M. J. Lambert (Ed.), Bergin & Garfield’s handbook of psychotherapy and behavior change (5th ed., pp. 139–193). New York, NY: Wiley.

Lampropoulos, G. K. (2002). A common factors view of counseling supervision process. The Clinical Supervisor, 21, 77–95. doi:10.1300/J001v21n01_06

Lanning, W. (1986). Development of the supervisor emphasis rating form. Counselor Education and Supervision, 25, 191–196. doi:10.1002/j.1556-6978.1986.tb00667.x

Loganbill, C., Hardy, E., & Delworth, U. (1982). Supervision: A conceptual model. The Counseling Psychologist, 10, 3–42. doi:10.1177/0011000082101002

Milne, D. L. (2006). Developing clinical supervision through reasoned analogies with therapy. Clinical Psychology & Psychotherapy, 13, 215–222. doi:10.1002/cpp.489

Milne, D. L., Aylott, H., Fitzpatrick, H., & Ellis, M. V. (2008). How does clinical supervision work? Using a “best evidence synthesis” approach to construct a basic model of supervision. The Clinical Supervisor, 27, 170–190. doi:10.1080/07325220802487915

Morgan, M. M., & Sprenkle, D. H. (2007). Toward a common-factors approach to supervision. Journal of Marital and Family Therapy, 33, 1–17. doi:10.1111/j.1752-0606.2007.00001.x

Nelson, M. L., & Friedlander, M. L. (2001). A close look at conflictual supervisory relationships: The trainee’s perspective. Journal of Counseling Psychology, 48, 384–395. doi:10.1037/0022-0167.48.4.384

Norcross, J. C., & Halgin, R. P. (1997). Integrative approaches to psychotherapy supervision. In C. E. Watkins, Jr. (Ed.), Handbook of psychotherapy supervision (pp. 203–222). New York, NY: Wiley.

Norcross, J. C., & Lambert, M. J. (2011). Evidence-based therapy relationships. In J. C. Norcross (Ed.), Psycho-therapy relationships that work: Evidence-based responsiveness (2nd ed.) (pp. 3–21). New York, NY: Oxford University Press.

Norcross, J. C., & Lambert, M. J. (2014). Relationship science and practice in psychotherapy: Closing commentary. Psychotherapy, 51, 398–403. doi:10.1037/a0037418

Norcross, J. C., & Wampold, B. E. (2011). Evidence-based therapy relationships: Research conclusions and clinical practices. Psychotherapy, 48, 98–102. doi:10.1037/a0022161

Palomo, M., Beinart, H., & Cooper, M. J. (2010). Development and validation of the Supervisory Relationship Questionnaire (SRQ) in UK trainee clinical psychologists. British Journal of Clinical Psychology, 49, 131–149. doi:10.1348/014466509X441033

Ramos-Sánchez, L., Esnil, E., Goodwin, A., Riggs, S., Touster, L. O., Wright, L. K., . . . Rodolfa, E. (2002). Negative supervisory events: Effects on supervision and supervisory alliance. Professional Psychology: Research and Practice33, 197–202.

Rønnestad, M. H., & Skovholt, T. M. (1993). Supervision of beginning and advanced graduate students of

counseling and psychotherapy. Journal of Counseling & Development, 71, 396–405. doi:10.1002/j.1556-6676.1993.tb02655.x

Rosenzweig, S. (1936). Some implicit common factors in diverse methods of psychotherapy. American Journal of Orthopsychiatry, 6, 412–415. doi:10.1111/j.1939-0025.1936.tb05248.x

Safran, J. D., Muran, J. C., Stevens, C., & Rothman, M. (2007). A relational approach to supervision: Addressing ruptures in the alliance. In C. A. Falender & E. P. Shafranske (Eds.), Casebook for clinical supervision: A competency-based approach (pp. 137–157). Washington, DC: American Psychological Association.

Schacht, A. J., Howe, H. E., & Berman, J. J. (1988). A short form of the Barrett-Lennard relationship inventory for supervisory relationships. Psychological Reports, 63, 699–706. doi:10.2466/pr0.1988.63.3.699

Stenack, R. J., & Dye, H. A. (1982). Behavioral descriptions of counseling supervision roles. Counselor Education and Supervision, 21, 295–304. doi:10.1002/j.1556-6978.1982.tb01692.x

Stoltenberg, C. (1981). Approaching supervision from a developmental perspective: The counselor complexity model. Journal of Counseling Psychology, 28, 59–65. doi:10.1037/0022-0167.28.1.59

Stoltenberg, C. D., McNeill, B. W., & Crethar, H. C. (1994). Changes in supervision as counselors and therapists gain experience: A review. Professional Psychology: Research and Practice, 25, 416–449.
doi:10.1037/0735-7028.25.4.416

Storm, C. L., Todd, T. C., Sprenkle, D. H., & Morgan, M. M. (2001). Gaps between MFT supervision assumptions and common practice: Suggested best practices. Journal of Marital and Family Therapy, 27, 227–239. doi:10.1111/j.1752-0606.2001.tb01159.x

Timm, M. (2015). Creating a preferred counselor identity in supervision: A new application of Bernard’s dis-crimination model. The Clinical Supervisor, 34, 115–125. doi:10.1080/07325223.2015.1021499

Tryon, G. S., & Winograd, G. (2011). Goal consensus and collaboration. In J. C. Norcross (Ed.), Psychotherapy relationships that work: Evidence-based responsiveness (pp. 153–167). New York, NY: Oxford University Press. doi:10.1093/acprof:oso/9780199737208.003.0007

Wampold, B. E. (2001). The great psychotherapy debate: Models, methods, and findings. Mahwah, NJ: Lawrence Erlbaum Associates.

Wampold, B. E., & Budge, S. L. (2012). The 2011 Leona Tyler Award address: The relationship—and its relation-ship to the common and specific factors of psychotherapy. The Counseling Psychologist, 40, 601–623. doi:10.1177/0011000011432709

Watkins, C. E., Jr. (1996). On demoralization and awe in psychotherapy supervision. The Clinical Supervisor, 14, 139–148. doi:10.1300/J001v14n01_10

Watkins, C. E., Jr. (2015). Extrapolating Gelso’s tripartite model of the psychotherapy relationship to the psycho-therapy supervision relationship: A potential common factors perspective. Journal of Psychotherapy Integration, 25, 143–157. doi:10.1037/a0038882

Watkins, C. E., Jr., Budge, S. L., & Callahan, J. L. (2015). Common and specific factors converging in psycho-therapy supervision: A supervisory extrapolation of the Wampold/Budge psychotherapy relationship model. Journal of Psychotherapy Integration, 25, 214–235. doi:10.1037/a0039561

A. Elizabeth Crunk is a doctoral candidate at the University of Central Florida. Sejal M. Barden is an Assistant Professor at the University of Central Florida. Correspondence can be addressed to Elizabeth Crunk, University of Central Florida, College of Education and Human Performance, Department of Child, Family, and Community Sciences, 4000 Central Florida Blvd., P.O. Box 161250, Orlando, FL 32816-1250, elizabethcrunk@gmail.com.