Jun 6, 2019 | Volume 9 - Issue 2
Clare Merlin-Knoblich, Pamela N. Harris, Erin Chase McCarty Mason
Flipped learning is an innovative teaching approach in which students view pre-recorded video lectures outside of class, then engage in activities applying course concepts during class. By removing lecture from face-to-face class time, instructors free up time in class for students to explore and apply course content. Flipped learning is a particularly useful approach in counselor education, given the need for both content and practice in the discipline. In this study, we examined student classroom engagement in flipped and non-flipped counseling courses. Using a causal comparative method, we compared student engagement via the Classroom Engagement Inventory in four counseling theories course sections. Students in the flipped counseling courses (n = 30) reported statistically significantly higher classroom engagement than students in the non-flipped courses (n = 37). These results lend additional support to the promotion of flipped learning in counselor education.
Keywords: flipped learning, classroom engagement, counselor education, flipped counseling courses, student engagement
Counselor educators are tasked with balancing students’ need to learn course content and their need to apply that content (Gladding & Ivers, 2012; Sommers-Flanagan & Heck, 2012). In recent decades, a new teaching approach has emerged that supports counselor educators in navigating that balance—flipped learning. In flipped learning, students individually view pre-recorded video lectures outside of class so that time spent in class is freed up solely for application-based learning activities (Bishop & Verleger, 2013; Gerstein, 2012; Wallace, Walker, Braseby, & Sweet, 2014). This approach appears especially valuable in counselor education because it allows counseling students to learn critical content relevant to the counseling profession (e.g., counseling theories, techniques), while providing them sufficient in-class time to apply, discuss, or practice content in classroom activities (Merlin, 2016).
Moreover, flipped learning appears worth consideration given its use of both online and face-to-face learning components. Researchers in a variety of disciplines (e.g., communications, political science, social work) have examined student perceptions of online versus face-to-face (F2F) course formats (Bolsen, Evans, & Fleming, 2016; Bristow, Shepherd, Humphreys, & Ziebell, 2011; Okech, Barner, Segoshi, & Carney, 2014; Platt, Yu, & Raile, 2014; Young & Duncan, 2014). Findings from most of the studies suggest that students have positive perceptions of online learning, though a few (Bristow et al., 2011; Young & Duncan, 2014) suggest that more traditional F2F formats are preferred for some subject areas (e.g., communications) and by some types of students (e.g., working vs. non-working). Other studies suggest that blended formats, which contain a mixture of F2F teaching methods and online instruction tools, could be a balanced compromise (Brown, 2016; Nguyen, 2013; Paechter, Kreisler, Luttenberger, Macher, & Wimmer, 2013; Thai, De Wever, & Valcke, 2017). Flipped learning represents one such blended learning approach because it combines teaching and learning efforts in both online spaces (via posted video lectures) and physical classroom spaces (via in-person activities; Brown, 2016).
The prevalence of flipped learning in higher education has increased since 2000, and the teaching approach has recently gained momentum in counselor education in addition to or instead of more traditional, lecture-focused approaches in non-flipped courses (Fulton & Gonzalez, 2015; Merlin, 2016; Merlin-Knoblich & Camp, 2018; Moran & Milsom, 2015). Despite this attention, no researchers have published a comparison of flipped and non-flipped courses in counselor education. In this article, we seek to fill this gap by describing the findings of a causal comparative study comparing one aspect of student experiences in flipped and non-flipped counseling courses—classroom engagement.
Classroom Engagement
Classroom engagement refers to “a student’s active involvement in classroom learning activities” (Wang, Bergin, & Bergin, 2014, p. 1). Researchers have determined that the construct comprises three components: affective engagement, behavioral engagement, and cognitive engagement (Archambault, Janosz, Fallu, & Pagani, 2009; Fredricks, Blumenfeld, & Paris, 2004). Since the 1990s, researchers have given substantial attention to student engagement in higher education classrooms (Trowler, 2010). This focus is due in large part to the strong relationships between engagement and positive student outcomes, such as student achievement and graduation rates (Elmaadaway, 2018; Harper & Quaye, 2009; O’Brien & Iannone, 2018; Trowler, 2010). Researchers have acknowledged that student classroom engagement is a multifaceted construct impacted by multiple variables, including instructors’ behaviors with students in the classroom (Krause & Coates, 2008; O’Brien & Iannone, 2018). Thus, we chose to study the potential relationship between instructors’ use of flipped learning and student classroom engagement. In this study, we sought to understand if students reported different perceptions of their classroom engagement levels in flipped and non-flipped counseling courses. Next, we present an overview of the flipped teaching approach and its research base.
Flipped Learning Underpinnings
Flipped learning is a teaching approach in which students view pre-recorded video lectures online outside of class, then meet in class for F2F learning activities in which they apply and explore course content. These activities can include group projects, discussions, skill practice, and experiential activities (Bishop & Verleger, 2013; Gerstein, 2012). Flipped classrooms are different from non-flipped classrooms in that non-flipped classrooms feature in-class lecture for all or part of each F2F class. Thus, students in non-flipped classrooms spend class time listening to an instructor lecture instead of viewing recorded material on course content outside of class and participating in activities in class (McGivney-Burelle & Xue, 2013; Murphy, Chang, & Suaray, 2016). In some non-flipped classrooms, instructors use lecture as the primary instructional approach, whereas in other non-flipped classrooms, instructors pair lecture with experiential activities in class (Cavanagh, 2011; Foldnes, 2016). Given the popularity of experiential learning in counselor education (McAuliffe & Eriksen, 2011), and for the purpose of this study, we define a non-flipped counseling classroom as one in which students engage in both in-class lecture and experiential activities when meeting F2F.
Flipped Learning Process
When designing a flipped classroom, instructors complete two primary tasks. First, they create or select a pre-recorded video lecture with the essential content students need to learn. Instructors can create such videos using screen capture software like Camtasia (www.camtasia.com) and Screencast-O-Matic (www.screencastomatic.com). These programs allow users to create videos with audio and video of an instructor explaining a presentation with slides (e.g., a PowerPoint presentation). Because experts recommend that video lectures be no more than 15–20 minutes in length, instructors must carefully select the most essential content that students would benefit from seeing and hearing explained.
After creating video lectures, instructors design a series of in-class F2F activities for their flipped classroom. In these activities, students apply, discuss, and practice the content they learned in the pre-recorded video lecture. Flipped F2F classroom activities can vary by discipline and instructor, but they often include collaborative group activities, shared projects, and practice sessions. Scholars note that although the video lectures associated with flipped learning often receive the most attention, it is actually the in-class activities that are most crucial to the student learning process (Bergmann & Sams, 2014; Merlin, 2016).
Flipped Learning in Higher Education
As flipped learning has grown in popularity, so too has its research base. Researchers have studied a range of constructs related to the approach, including student and instructor perspectives (Gilboy, Heinerichs, & Pazzaglia, 2015; Hao, 2016; Long, Cummins, & Waugh, 2017; Nouri, 2016; Wanner & Palmer, 2015) and student outcomes (Baepler, Walker, & Driessen, 2014; Davies, Dean, & Ball, 2013; Foldnes, 2016; McLaughlin et al., 2013; Murphy et al., 2016). Researchers also have studied flipped learning in a variety of disciplines, including chemistry (Baepler et al., 2014), engineering (Kim, Kim, Khera, & Getman, 2014), public health (Simpson & Richards, 2015), pharmacy (McLaughlin et al., 2013), and information systems (Davies et al., 2013). As described below, they have consistently found positive outcomes related to flipped learning, with occasional incongruences.
Research on student perceptions of flipped learning has indicated that this teaching approach is generally enjoyed (Gilboy et al., 2015; Hao, 2016; Nouri, 2016). For example, in a sample of 142 nutrition students, 62% of participants reported preferring flipped learning to a traditional lecture format (Gilboy et al., 2015). In a sample of 240 research methods students, 75% of participants reported having positive attitudes toward flipped learning after completing flipped courses (Nouri, 2016). Moreover, in literature reviews of flipped learning research, authors concluded that student perceptions of flipped learning are mostly positive (Bishop & Verleger, 2013; Zainuddin & Halili, 2016).
In general, researchers have found higher student achievement in flipped classrooms compared to non-flipped classrooms (Baepler et al., 2014; Davies et al., 2013; Foldnes, 2016; McLaughlin et al., 2013; Murphy et al., 2016). For example, Foldnes (2016) found that the exam scores of statistics students in a flipped learning course were 12% higher compared to those in a non-flipped course. Murphy and colleagues (2016) also compared test scores in flipped and non-flipped undergraduate algebra classes and found that flipped classroom final exam scores increased 13% compared to non-flipped classroom scores.
Increased achievement in flipped classrooms may be due to increased student engagement (McLaughlin et al., 2013). Researchers have found a perceived increase in engagement in flipped classrooms from both student and instructor perspectives (Faculty Focus, 2015; Lucke, Dunn, & Christie, 2017; Simpson & Richards, 2015; Wanner & Palmer, 2015). For instance, in their study of engineering students who participated in a course before and after it was flipped, Lucke and colleagues (2017) found that students reported an increase in engagement. Instructors also noted “a substantial increase in the level of observed student engagement” after the course was flipped (p. 54). Similarly, Simpson and Richards (2015) surveyed students who completed a flipped undergraduate health course and found that students reported that the flipped format enhanced their course engagement.
Flipped learning is a valuable instructional approach in counselor education, given its student-focused nature. Despite this relevance, research on flipped learning in counselor education is limited (Merlin, 2016). To date, researchers have published only three studies on flipped learning in counselor education. Moran and Milsom (2015) described flipped learning with 15 graduate students in a school counseling foundations course. They assessed student perceptions of the flipped course using Likert scale ratings, and students reported that in-class activities facilitated their learning more than pre-class activities. Fulton and Gonzalez (2015) studied two flipped career development courses by distributing pre- and posttests to students. They found overall increases in attitudes about career counseling. Lastly, Merlin-Knoblich and Camp (2018) conducted a qualitative case study to explore counseling student experiences in a flipped life span development course. Their participants reported that the flipped course was enjoyable, beneficial, and engaged them in learning inside and outside of the classroom.
Purpose and Rationale for the Study
Previous studies about flipped learning in counselor education are useful in drawing attention to use of the teaching approach in the field (Fulton & Gonzalez, 2015; Merlin-Knoblich & Camp, 2018; Moran & Milsom, 2015). However, across these studies, researchers did not employ a comparison group to examine if flipped learning courses produce different outcomes than non-flipped courses. Given this critical variable in understanding the value of flipped learning, research is needed on the impact the approach has on counseling students compared to non-flipped teaching approaches. To fill this research gap, we chose to compare flipped and non-flipped counseling courses by examining student classroom engagement.
Classroom engagement is the amount of active involvement a student has in learning activities while completing a course (Wang et al., 2014). We chose to study classroom engagement for three reasons. First, due to our interest in comparing flipped and non-flipped counseling courses, it was imperative to measure a construct specific to the individual class setting. Student classroom engagement refers to student involvement at the classroom level, which is more specific than overall school engagement (Wang et al., 2014). Second, given the lack of research on outcomes related to flipped learning in counselor education, we sought to understand if the teaching approach appears to impact classroom engagement, which may contribute to greater student enjoyment and better comprehension of counseling concepts. Lastly, although researchers have studied classroom engagement in previous studies on flipped learning, the topic has not been widely reviewed, and a need exists for a greater understanding of how flipped learning impacts student classroom engagement (Faculty Focus, 2015; Lucke et al., 2017; McLaughlin et al., 2013; Simpson & Richards, 2015; Wanner & Palmer, 2015).
Our research question was: Do significant differences exist between student classroom engagement levels in flipped counseling course sections and non-flipped counseling course sections? We hypothesized that the classroom engagement levels of students in the flipped counseling course sections would be statistically significantly higher than those of students in the non-flipped counseling course sections.
Method
We used a causal comparative design (Creswell & Creswell, 2018) to study student engagement in flipped and non-flipped counseling courses at a medium-sized public university in the mid-Atlantic region. In a causal comparative study, researchers compare groups by a cause, or independent variable, that has already occurred (Creswell & Creswell, 2018). In this study, the cause was a flipped or non-flipped teaching approach in counseling theories courses.
Procedures
The university where we conducted this study has a small master’s counseling program accredited by the Council for Accreditation of Counseling & Related Educational Programs (CACREP) and offers one section of each course each semester. In order to compare a similar counseling course taught in both a flipped and non-flipped approach, we compared a flipped Theories for Counseling Children and Adolescents course (“experimental group”) to a non-flipped Counseling Theories course (“control group”) at the same university. Both courses include parallel emphases on counseling theories, as shown in Table 1. To obtain a sample large enough for inferential statistical analysis, we collected data in two consecutive years from students in two flipped Theories for Counseling Children and Adolescents courses and two non-flipped Counseling Theories courses. All courses met weekly across a 15-week fall semester.
Table 1
Course Topics in Flipped and Non-Flipped Courses Studied
Flipped Theories for Counseling Children and Adolescents | Non-flipped Counseling Theories
Psychoanalytic Counseling | Psychoanalytic Counseling
Person-Centered Counseling | Person-Centered Counseling
Gestalt Therapy | Gestalt Therapy
Adlerian Counseling | Adlerian Counseling
Reality Therapy | Reality Therapy
Cognitive Behavioral Therapy | Cognitive Behavioral Therapy
Behavior Therapy | Behavior Therapy
Solution-Focused Brief Therapy | Postmodern Approaches
Strengths-Based Counseling | Existential Counseling
Motivational Interviewing | Feminist Therapy
Play Therapy | Family Systems Therapy
We did not randomly assign study participants to course sections, but instead recruited participants already in existing groups based on the university’s prescribed counseling program of study. Students in the Counseling Theories courses were in their first year and students in Theories for Counseling Children and Adolescents courses were in their second year. No participants were taking both courses at the same time. The flipped Theories for Counseling Children and Adolescents course was the only flipped course in the counseling program at the time of the study.
Flipped course sections. The first author taught Theories for Counseling Children and Adolescents during the first year of data collection, and the second author taught the course in the second year of data collection. Although the use of different instructors was not intentional (and instead due to hiring changes), the first and second authors used identical flipped learning approaches in an effort to ensure that the change in instructors did not impact the study results. They both used Bergmann and Sams’s (2014) traditional flipped learning model when teaching their courses and each recorded their own video lectures using Screencast-O-Matic software. The instructors assigned these video lectures as homework prior to attending class. Students also were required to read selected book chapters and research articles on the course topics. To ensure compliance, the instructors asked students to answer pre-class questions about the topics online before coming to class. Furthermore, students’ answers allowed the instructors to evaluate comprehension of the material prior to class and adjust class activities as needed. For example, pre-class questions often asked students to explain key concepts. If the majority of student answers revealed that they had a vague or incorrect understanding of a counseling theory, the instructor allotted more class time to addressing student misunderstanding.
During class, each instructor facilitated a range of activities to help students explore and apply course content. For example, groups of students were asked to rehearse and demonstrate counseling techniques to the class. Students also engaged in large and small group discussions about course topics. They sometimes analyzed case studies and watched videos of counseling demonstrations. Lastly, instructors frequently hosted guest speakers with expertise in the topics. Table 2 includes an example class lesson plan and corresponding assigned homework from an example flipped class the first author taught in Theories for Counseling Children and Adolescents.
Table 2
Example Flipped Learning Lesson Plan—Theories for Counseling Children and Adolescents
Context | Task | Time Required
Out-of-class | Video lecture – Gestalt and Adlerian Counseling Theories | 20 minutes
Out-of-class | Textbook chapters – Gestalt Counseling, Adlerian Counseling | 80 minutes
In-class | Welcome – Overview and follow-ups | 5 minutes
In-class | Viewing Gestalt Counseling – Students view and discuss two YouTube videos of Gestalt counselors. | 20 minutes
In-class | Practicing Gestalt techniques – Students rehearse a role-play of a Gestalt technique and show the technique to the class. | 45 minutes
In-class | Guest speaker – Adlerian counselor is guest speaker to describe and discuss his counseling approach. | 45 minutes
In-class | Case studies – Students analyze case studies from an Adlerian perspective in groups, then discuss analyses with the class. | 30 minutes
In-class | Counseling practice – Students form pairs and practice counseling using an Adlerian or Gestalt approach. | 30 minutes
In-class | Closing – Questions and review | 5 minutes
Non-flipped course sections. The non-flipped counseling course in this study was Counseling Theories, taught by the same faculty member for both semesters in which the researchers collected data. This faculty member was not an author on the manuscript. Table 1 shows a comparison of the counseling theories taught in the flipped (experimental) and non-flipped (control) counseling courses studied. Students read textbook chapters for homework prior to attending each class. The instructor spent the first half of each class lecturing about the course material, then the second half engaging students in group discussion and hosting guest speakers who were experts in the topics. In this way, the course was not flipped, but it also was not strictly a lecture course. It was “lecture-based,” and regularly involved in-class student activities, as is often the case in counselor education (Cavanagh, 2011; Foldnes, 2016). Table 3 includes an example lesson plan for a non-flipped class session in Counseling Theories.
Table 3
Example Non-Flipped Learning Lesson Plan—Counseling Theories
Context | Task | Time Required
Out-of-class | Textbook chapters – Gestalt Counseling, Adlerian Counseling | 80 minutes
In-class | Welcome – Overview and follow-ups | 5 minutes
In-class | Lecture – Didactically present information about Gestalt and Adlerian counseling approaches | 120 minutes
In-class | Guest speaker – Adlerian counselor is guest speaker to describe and discuss his counseling approach. | 45 minutes
In-class | Closing – Questions and review | 10 minutes
Data collection. After obtaining IRB approval, we recruited participants during the final week of each semester by explaining the study to course participants. We described the purpose of the study as “to examine student engagement in counseling courses” in an attempt to prevent participant bias that could have emerged if students knew we were studying engagement related to flipped or non-flipped teaching approaches. We informed students that study participation was voluntary and anonymous and emphasized that participation had no impact on course grades. We distributed paper-and-pencil questionnaires to students in both sections of Theories for Counseling Children and Adolescents and the first section of Counseling Theories. We distributed the questionnaire electronically to students in the second section of Counseling Theories due to in-person scheduling conflicts. All participants signed an informed consent form prior to participating.
Participants
Sixty-seven master’s students participated in the study. Thirty participants were in the experimental group, completing the flipped theories course (100% participation rate). Thirty-seven participants were in the control group, completing the non-flipped theories course (93% participation rate). Given the first and second authors’ familiarity with the participants as students, we chose not to collect participants’ individual identifying demographic information (including degree specialty) because doing so might identify students as participants and cause participant bias. For example, a small number of students in the courses identified as male, African American, or Asian American, and if we asked these students to report their demographic information in the study, this information may have unintentionally identified the participants. We can report, though, that the control group participants included first-year school, clinical mental health, couples and family, and addictions counseling students. The experimental group participants included second-year school counseling and school psychology students. The average number of video lectures reportedly viewed by the experimental group participants was 7.4 (out of eight). Video lectures were not a part of the non-flipped course (control group).
Instrumentation
We distributed the Classroom Engagement Inventory (CEI; Wang et al., 2014) to participants to measure student classroom engagement because it comprehensively measures affective, behavioral, and cognitive engagement. Moreover, it can be used to measure engagement specific to the classroom level, rather than overall school or program engagement (Wang et al., 2014). Although Wang and colleagues (2014) developed the instrument with students in grades 4 through 12, they found that its factor structure was invariant when used with participants of different ages and grade levels, suggesting its relevance in higher education settings.
The CEI consists of five subscales. They are: Affective Engagement (positive emotions students could encounter in class, ω = .90), Behavioral Engagement–Compliance (students’ compliance with classroom norms, ω = .82), Behavioral Engagement–Effortful Class Participation (students’ self-directed classroom behaviors, ω = .82), Cognitive Engagement (mental effort expended, ω = .88), and Disengagement (cognitive and behavioral aspects of not engaging in class, ω = .82; Wang et al., 2014). Example items are: “I get really involved in class activities” (Behavioral Engagement–Effortful Class Participation), “I feel excited” (Affective Engagement), and “I go back over things when I don’t understand” (Cognitive Engagement; Wang et al., 2014, p. 5).
The instrument has 21 items and a 5-point frequency Likert-type scale with the anchors never, hardly ever, monthly, weekly, and each day of class. We adapted the scale to a 4-point scale by removing the answer choice each day of class because both courses met only once per week, making each day of class synonymous with weekly.
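Collapsing a response scale in this way is exactly the kind of change that internal consistency checks are meant to guard, which is why the authors report Cronbach’s alpha in the next section. As a minimal pure-Python sketch of how alpha is computed from a respondents-by-items matrix (the responses below are invented for illustration, not the study’s data):

```python
from statistics import variance

def cronbach_alpha(item_scores):
    """Cronbach's alpha from a respondents-by-items score matrix."""
    k = len(item_scores[0])  # number of items
    # Sample variance of each item across respondents
    item_vars = [variance([row[i] for row in item_scores]) for i in range(k)]
    # Sample variance of each respondent's total score
    total_var = variance([sum(row) for row in item_scores])
    return (k / (k - 1)) * (1 - sum(item_vars) / total_var)

# Hypothetical 4-point responses from five respondents on three items
responses = [
    [4, 4, 3],
    [3, 3, 3],
    [4, 3, 4],
    [2, 2, 2],
    [3, 4, 3],
]
alpha = cronbach_alpha(responses)
```

Alpha rises toward 1 as items covary (i.e., as respondents’ totals vary much more than the individual items do), which is the sense in which the reported α = .85 indicates acceptable reliability after the adaptation.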
Data Analysis
Using SPSS, we first analyzed internal consistency using Cronbach’s alpha to ensure that reducing the 5-point scale to a 4-point scale did not weaken reliability to an unacceptable degree. We then ran independent samples t-tests, testing for statistical significance at p < .05, to determine whether differences between experimental and control group scores were unlikely to have occurred by chance. We also calculated Cohen’s d to measure effect size, which quantifies the extent to which the control group and experimental group diverged in the study (Thompson, 2006). We followed Cohen’s (1969) interpretation guidelines of small (0.2), medium (0.5), and large (0.8) effect sizes. We tested for significance among items grouped by subscale, as well as for the overall measure of classroom engagement.
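The comparison described above, an independent-samples t-test paired with a pooled-variance Cohen’s d, can be sketched in a few lines. The paper ran these analyses in SPSS; the pure-Python version below, with invented subscale scores, is only illustrative of the computations:

```python
import math
from statistics import mean, variance

def pooled_t_and_d(group_a, group_b):
    """Pooled-variance independent-samples t statistic and Cohen's d."""
    n1, n2 = len(group_a), len(group_b)
    m1, m2 = mean(group_a), mean(group_b)
    # variance() is the sample variance (n - 1 denominator)
    v1, v2 = variance(group_a), variance(group_b)
    # Pooled standard deviation across both groups
    sp = math.sqrt(((n1 - 1) * v1 + (n2 - 1) * v2) / (n1 + n2 - 2))
    t = (m1 - m2) / (sp * math.sqrt(1 / n1 + 1 / n2))
    d = (m1 - m2) / sp  # Cohen's d: mean difference in pooled-SD units
    return t, d

# Hypothetical subscale scores on the 1-4 scale (not the study's data)
flipped = [3.8, 3.5, 3.9, 3.6, 3.7]
non_flipped = [3.3, 3.5, 3.1, 3.4, 3.2]
t, d = pooled_t_and_d(flipped, non_flipped)
```

The t statistic would then be compared against a t distribution with n1 + n2 − 2 degrees of freedom to obtain the p-values reported in Table 4, while d is read directly against Cohen’s 0.2/0.5/0.8 benchmarks.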
Results
The internal consistency for our results was deemed acceptable (α = .85). We then compared classroom engagement for students in the flipped counseling courses to students in the non-flipped counseling courses in six ways. Table 4 contains a summary of each of these comparisons.
Table 4
Statistical and Practical Significance from Experimental and Control Group Comparisons
CEI Scale | p | Cohen’s d
Affective Engagement | .013 | 0.61
Behavioral Engagement–Compliance | .038 | 0.50
Behavioral Engagement–Effortful Class Participation | .344 | —
Cognitive Engagement | .013 | 0.64
Disengagement | .005 | -0.70
Overall Classroom Engagement | .005 | 0.70
Affective and Behavioral Engagement
First, we compared the affective engagement between students in the experimental group (flipped) and the control group (non-flipped) courses. Based on a scale of 1 (never) to 4 (weekly), scores on the Affective Engagement subscale averaged 3.68 (SD = 0.32) for the experimental group and 3.44 (SD = 0.48) for the control group. This was a statistically significant difference (p = .013) with a medium effect size (Cohen’s d = 0.61), indicating that students in the flipped course self-reported significantly more affective engagement than students in the non-flipped course. We also compared Behavioral Engagement–Compliance subscale scores among both groups. Experimental group participants had an average Behavioral Engagement–Compliance score of 3.93 (SD = 0.18), whereas control group participants had a lower average Behavioral Engagement–Compliance score of 3.79 (SD = 0.35). This was a statistically significant difference (p = .038) with a medium effect size (Cohen’s d = 0.50), indicating that students in the flipped course self-reported significantly more behavioral engagement in terms of compliance compared to the students in the non-flipped course. We further compared Behavioral Engagement–Effortful Class Participation subscale scores. Although the average experimental group score for this dimension (M = 3.40, SD = 0.50) was higher than the average control group score (M = 3.28, SD = 0.47), the difference was not statistically significant (p = .344), indicating that students in the flipped and non-flipped courses did not differ significantly in their reported effortful participation in class.
Cognitive Engagement and Disengagement
Next, we examined cognitive engagement for both groups. Students in the experimental group had an average Cognitive Engagement subscale score of 3.43 (SD = 0.38), and those in the control group had a lower average Cognitive Engagement score of 3.13 (SD = 0.54). This was a statistically significant difference in cognitive engagement levels (p = .013) with a medium effect size (Cohen’s d = 0.64). Students in the flipped course self-reported significantly more cognitive engagement than students in the non-flipped course. We also compared classroom disengagement among participants in both groups. Experimental group participants had an average Disengagement subscale score of 1.81 (SD = 0.50), and control group participants had a higher average Disengagement score of 2.25 (SD = 0.68). These scores indicate that experimental group participants had lower perceived levels of disengagement, a difference that was statistically significant (p = .005) and had a medium effect size (Cohen’s d = -0.70). In other words, students in the non-flipped course self-reported significantly more disengagement than those in the flipped course.
Overall Classroom Engagement
Lastly, we examined overall classroom engagement between both groups; despite its dimensions, classroom engagement can be considered a single overall construct (Wang et al., 2014). To do so, we combined and averaged participants’ responses for all subscales except Disengagement. This resulted in an Overall Classroom Engagement score of 3.55 (SD = 0.24) for the experimental group and 3.34 (SD = 0.35) for the control group. These scores represent a statistically significant difference between groups (p = .005) with a medium effect size (Cohen’s d = 0.70). That is to say, students in the flipped course had significantly higher perceptions of overall engagement than did the students in the non-flipped course.
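The overall score construction described above, averaging all subscales except Disengagement, can be sketched as follows (the subscale values shown are hypothetical, not a participant’s actual scores):

```python
from statistics import mean

def overall_engagement(subscale_means, exclude=("Disengagement",)):
    """Average subscale scores into one overall engagement score,
    leaving out the Disengagement subscale as in Wang et al. (2014)."""
    kept = [score for name, score in subscale_means.items() if name not in exclude]
    return mean(kept)

# Hypothetical per-participant subscale means on the 1-4 scale
scores = {
    "Affective Engagement": 3.6,
    "Behavioral Engagement-Compliance": 3.9,
    "Behavioral Engagement-Effortful Class Participation": 3.4,
    "Cognitive Engagement": 3.5,
    "Disengagement": 1.8,
}
overall = overall_engagement(scores)
```

Disengagement is excluded rather than reverse-scored here, matching the text’s description of combining and averaging responses for all subscales except Disengagement.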
Discussion
This study represented the first of its kind comparing students’ self-reported engagement in related flipped and non-flipped counseling courses. We sought to answer the question: Do significant differences exist between student classroom engagement levels in flipped counseling course sections and non-flipped counseling course sections? Our hypothesis that the classroom engagement levels of participants in the flipped counseling course sections would be statistically significantly higher than those of participants in the non-flipped counseling course sections was confirmed for all but one of the measures we examined.
Average perceived classroom engagement ratings were relatively high across all sections studied, including the non-flipped sections, with engagement levels measured by the CEI ranging from 3.13 to 3.93. These values indicate that participants perceived themselves to be engaged in their classrooms at least monthly if not weekly. Such high engagement ratings suggest that master’s counseling and school psychology students in our sample were generally interested and involved in the learning process in their classrooms. When separated, however, findings indicate that students in the flipped learning course sections may have felt even more frequently engaged than their non-flipped course section counterparts. Specifically, in five of the six measures examined (Affective Engagement, Behavioral Engagement–Compliance, Cognitive Engagement, Disengagement, and Overall Classroom Engagement), participants in the flipped counseling courses reported significantly greater classroom engagement (and significantly less disengagement) than participants in the non-flipped counseling courses. This is the first study in which researchers found increased engagement among a sample of students in a flipped counseling course, and it builds a growing case for flipped learning in counselor education.
Participants in the flipped learning course sections may have reported more frequent classroom engagement given differences in the way class time was spent in the flipped and non-flipped courses. In the flipped course sections, participants spent minimal time in class listening to lecture. Instead, their F2F class time consisted of active application-based activities, such as group discussions, skills practice, and guest speakers. Although participants in the non-flipped course sections also engaged in some of these activities during class (i.e., discussion and guest speakers), they only spent part of class engaged in activities, as at least half of class was reserved for lecture by the instructor. Participants’ higher reported classroom engagement in the flipped course sections might indicate that they found a full class period of application-based activities more engaging than spending only part of class on these activities.
Although no previous studies have used the CEI to measure student engagement in flipped and non-flipped counseling courses, researchers have studied student and instructor perceptions of student engagement in flipped classrooms. The overall increased student engagement in the flipped course sections aligns with the findings of Simpson and Richards (2015) and Lucke and colleagues (2017), who found that students reported increased classroom engagement in flipped learning courses. Although we only surveyed students about their perceived classroom engagement, findings also reflect previous research on instructor perceptions that flipped classrooms increase student classroom engagement (Faculty Focus, 2015; Wanner & Palmer, 2015). For example, in a survey of 1,087 Faculty Focus (2015) readers who utilized flipped learning, 75% of participants indicated observing improved student engagement in flipped classrooms compared to those that were not flipped.
Findings also support previous research indicating that hybrid learning approaches like flipped learning may be more appealing to students than courses held solely online or solely through F2F means. Further research is needed to understand if preferences for flipped learning courses vary by student characteristics, such as working or non-working status. These characteristics have been correlated with preferences for online learning instead of F2F learning, and associations between working status and flipped learning preferences have not previously been examined (Brown, 2016; Nguyen, 2013; Paechter et al., 2013; Thai et al., 2017).
One subscale we compared, Behavioral Engagement–Effortful Class Participation, was not significantly different among students in the flipped and non-flipped counseling courses. This construct refers to students’ self-directed behavioral engagement in class versus behaviors that are compliant with classroom norms (Fredricks et al., 2004; Wang et al., 2014). Effortful class participation includes self-directed behaviors and efforts to become invested in learning (Wang et al., 2014). It might not have differed among students due to the student population used in this study—graduate counseling and school psychology students. Students were voluntarily pursuing master’s degrees in their areas of choice and consequently likely had high levels of motivation toward their courses. Students in both sections were likely invested in their coursework, and this investment may not have been affected by whether or not the courses were flipped.
This study’s findings add to a growing body of research demonstrating positive findings when flipped courses are compared to non-flipped ones. Researchers have consistently found that students in flipped courses perform better than those in non-flipped courses (Day & Foley, 2006; Foldnes, 2016; Murphy et al., 2016; Thai et al., 2017). Given that higher classroom engagement is associated with better academic performance (O’Brien & Iannone, 2018; Trowler, 2010; Wang et al., 2014), the findings in our study may indicate that flipped learning could lead to enhanced academic performance for counseling students.
In counselor education, our findings provide further tentative support for the use of flipped learning within the discipline. They align with Moran and Milsom’s (2015) survey research with school counseling students, Fulton and Gonzalez’s (2015) survey research with career counseling students, and Merlin-Knoblich and Camp’s (2018) case study with life span students demonstrating positive findings on flipped learning in counselor education. The findings from these studies begin to build a credible case for the positive impact that the flipped learning approach might have on graduate counseling students.
Implications for Counselor Education
Pedagogy
Results of this study raise a larger question about the importance of pedagogy in counselor education. If programs are to graduate competent practitioners into the profession, then they must understand how to optimize students’ learning of the counseling discipline. A content analysis of pedagogy articles in counselor education journals over a 10-year period revealed that only 14.78% of the articles had a clear basis in learning theory or instructional research (Barrio Minton, Wachter Morris, & Yaites, 2014). Other researchers have likewise called for greater attention to teaching and learning in counselor education (Baltrinic, Jencius, & McGlothlin, 2016; Brackette, 2014; Malott, Hall, Sheely-Moore, Krell, & Cardaciotto, 2014).
Flipped learning is one type of teaching format that is a recognized practice at both the K–12 and undergraduate levels (Kurt, 2017; Sezer, 2016; Zainuddin & Halili, 2016). As students progress in their education, counselor educators need to be aware of how teaching practices must evolve in order to meet the expectations of students at the graduate level. Findings from this study suggest that it is worthwhile to consider flipped learning as a way to engage future students. Furthermore, the significance of findings related to affective, behavioral, and cognitive engagement in flipped learning might be especially important because the practice of counseling requires simultaneous use of emotional, behavioral, and cognitive skills. The opportunity to preview lecture content before a class allows students to engage in initial cognitive processing and frees up class time for more complex, application-based tasks engaging with course material (Earley, 2016; Hoffman, 2014; Zainuddin & Halili, 2016). Given the cognitive complexity and skills-oriented nature of counseling courses, it seems preferable to have more time spent on higher-order thinking processes and skills practice. In this way, flipped learning may provide the additional class time needed to increase students’ counseling competence.
Counseling Student Competence
Students’ counseling competence might manifest in both counseling abilities and academic achievement. Academic achievement in counseling programs is reflected in assignment and course grades, as well as counselor examinations like the National Counselor Examination for Licensure and Certification and the Counselor Preparation Comprehensive Examination. Given research in non-counseling disciplines indicating significantly better academic achievement in flipped courses compared to non-flipped courses (Day & Foley, 2006; Foldnes, 2016; Murphy et al., 2016; Thai et al., 2017), counselor educators may want to consider the use of flipped learning in order to improve counseling course grades and exam scores. Such improved academic achievement could help more students complete their programs and might improve graduation rates among counseling programs with flipped courses.
Counselor Education Training
In addition to the implications for students’ learning in the master’s-level counseling classroom, this study has implications for the training of current and future counselor educators. Previous literature demonstrates a lack of counselor education’s attention to pedagogy and learning theory (Barrio Minton et al., 2014; Brackette, 2014; Malott et al., 2014; McAuliffe & Eriksen, 2011), much less to teaching approaches like flipped learning. Thus, one might conclude that counseling professors either have had little training in teaching and learning or are not publishing about their training in this area. Encouragingly, the 2016 CACREP standards include nine standards that address pedagogy in doctoral programs (CACREP, 2016), whereas the former 2009 standards included only two in this area (CACREP, 2009). It is likely that many counselor education doctoral programs are working to better incorporate the revised standards. As such, program coordinators and faculty are encouraged to expose doctoral students to the literature on, and examples of, flipped learning. They also would be wise to encourage doctoral students to research and publish on pedagogy in counselor education, including flipped learning, to help fill this gap in previous literature.
Limitations and Future Directions
We recognize limitations in this study that ought to be considered. First, the study was limited by its data collection measures. We measured participants’ perceived classroom engagement via self-report questionnaires, which could reflect student biases or inaccuracies that observational measures of classroom engagement might not. Furthermore, experimental group participants were students in courses taught by the first and second authors, and despite the anonymity assured to participants, they might have felt compelled to provide favorable questionnaire responses. We did not collect participant demographic data in order to ensure anonymity; this lack of demographic data is itself a limitation, as such information could have informed the interpretation of results. In addition, the study is limited by its two data collection formats, as one class completed the questionnaire electronically, whereas all other participants completed it in a paper-and-pencil format.
Second, the courses we compared contained similar, though not identical, content. Because this was a causal comparative study, we were unable to manipulate course content to ensure that instructors in both courses delivered identical content. For example, the Theories for Counseling Children and Adolescents instructors taught one unit on play therapy, which the Counseling Theories instructor did not teach in her sections.
Third, the flipped course section instructors in this study were different. The first author taught the first flipped learning course section, and one year later, the second author taught the second flipped learning course section. Although they used the same instructional approach, differences in their teaching styles might have impacted student experiences in their courses and consequently, the study results as well. They tried to control for differences in their teaching by meeting to discuss the course and flipped learning teaching in between the two flipped course sections. The first author also shared all course materials (e.g., syllabus, video lectures, lesson plans) with the second author, who used or adapted the materials when she taught the course. We chose not to analyze statistical differences between these course sections due to the small sample size of each section (n = 17 and n = 13).
In addition, the student composition in the flipped and non-flipped courses varied and sample sizes were limited. Due to the causal comparative method used in the study, sample sizes could not be altered, and a post hoc power analysis using G*Power indicated that the observed power in our study was 0.64. Additionally, the Counseling Theories class consisted of first-year counseling students in different specialties, whereas the Theories for Counseling Children and Adolescents course consisted of second-year school counseling and school psychology graduate students. The latter course was required in the program of study of both school counseling and school psychology students, and the former course was required in the program of study of all counseling students. These differences might have contributed to different levels of classroom engagement. Admissions standards are the same for master’s counseling and psychology students at the university where the study took place, yet qualitative differences between the counseling and school psychology students might have existed and impacted participants’ reported engagement levels. Furthermore, although no previous literature has indicated that classroom engagement is variable by year or specialty in a master’s program, school counseling and school psychology students may inherently be more engaged in a course specifically about children and adolescents, compared to counseling students in different counseling specialties in a course about counseling theories applied to any population. Similarly, students in their second year of study in a master’s program might be more engaged in classrooms than students in their first year of study because the former are closer to beginning their chosen careers. Students also could have been more engaged in the flipped learning course given that it was the only flipped course in the department at the time this study took place. The novelty of such a class format could have impacted student engagement beyond the nature of the course itself.
Lastly, the CEI was not developed with a sample of graduate students; hence, instrument reliability and validity with this sample are not certain. In their development of the instrument, however, Wang and colleagues (2014) found that the instrument factor structure was invariant by student age, grade level, and other characteristics, indicating it might be statistically sound for populations outside of students in grades 4 through 12.
Despite these limitations, the findings from the study serve as a foundation for continued research. Given that we found significant differences in levels of reported classroom engagement among participants, these differences could be even more substantial if the comparison groups had received identical course content from the same instructor. That is, internal validity threats could be reduced if a single instructor taught two sections of the same course, implementing flipped learning in one class but using a traditional lecture-based approach for the other class. An instructor could also teach a flipped counseling course one semester, then teach the same course with a non-flipped approach in a subsequent semester and compare student outcomes from each course.
Future research also could include expanded data collection. In the present study, we distributed the CEI at the end of the semester for all course sections; however, researchers could distribute instruments both during the middle of the semester as well as at the end of the semester to examine significant changes in student engagement. Researchers could also study student outcomes related to flipped learning to assess cognitive changes. For example, does flipped learning impact student achievement? In counselor education, such research could assess student content knowledge through comprehensive exams. Researchers also ought to address the behavioral and affective impacts of flipped learning in counselor education. To examine affective change, researchers could query students about their emotions in flipped counseling courses and how these emotions impact their development as counselors. To assess behavior, researchers could observe counseling students’ behaviors in flipped and non-flipped counseling courses, measuring constructs such as class participation and observed engagement. Finally, the counseling profession would benefit from understanding if flipped learning in counselor education impacts the attainment of actual counseling skills. Researchers might assess counseling performances of students in flipped counseling courses versus those in non-flipped courses.
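The mid- versus end-of-semester comparison proposed above could be analyzed with a paired test of within-student change. As an illustrative sketch (the CEI scores below are hypothetical, not data from this study), a paired t statistic can be computed with nothing beyond the standard library:

```python
import math

def paired_t(before, after):
    """Paired t statistic for within-student change in scores."""
    diffs = [a - b for a, b in zip(after, before)]
    n = len(diffs)
    mean_d = sum(diffs) / n
    var_d = sum((d - mean_d) ** 2 for d in diffs) / (n - 1)  # sample variance
    return mean_d / math.sqrt(var_d / n)

# Hypothetical mid- and end-of-semester CEI scores for five students
mid = [3.1, 3.4, 3.0, 3.6, 3.2]
end = [3.5, 3.6, 3.3, 3.7, 3.5]
print(round(paired_t(mid, end), 2))  # 5.1
```

The resulting statistic would then be compared to a t distribution with n − 1 degrees of freedom to judge whether engagement changed significantly over the semester.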
Conclusion
In this causal comparative study, we measured the classroom engagement levels of master’s students in flipped and non-flipped counseling classrooms. In all but one area measured, we found that participants in the flipped counseling course sections reported significantly higher classroom engagement than participants in the non-flipped counseling course sections. Such research indicates that students may find the flipped classroom experience more engaging than a lecture-based classroom experience. Although this is the first study of its kind in counselor education, findings contribute to a case for the use of flipped learning in counseling courses. Counselor educators may benefit from considering flipped learning in the courses they teach.
Conflict of Interest and Funding Disclosure
The authors reported no conflict of interest or funding contributions for the development of this manuscript.
References
Archambault, I., Janosz, M., Fallu, J. S., & Pagani, L. S. (2009). Student engagement and its relationship with early high school dropout. Journal of Adolescence, 32, 651–670. doi:10.1016/j.adolescence.2008.06.007
Baepler, P., Walker, J. D., & Driessen, M. (2014). It’s not about seat time: Blending, flipping, and efficiency in active learning classrooms. Computers & Education, 78, 227–236. doi:10.1016/j.compedu.2014.06.006
Baltrinic, E. R., Jencius, M., & McGlothlin, J. (2016). Coteaching in counselor education: Preparing doctoral students for future teaching. Counselor Education & Supervision, 55, 31–45. doi:10.1002/ceas.12031
Barrio Minton, C. A., Wachter Morris, C. A., & Yaites, L. D. (2014). Pedagogy in counselor education: A 10-year content analysis of journals. Counselor Education & Supervision, 53, 162–177. doi:10.1002/j.1556-6978.2014.00055.x
Bergmann, J., & Sams, A. (2014). Flipped learning: Gateway to student engagement. Eugene, OR: International Society for Technology in Education.
Bishop, J. L., & Verleger, M. A. (2013, June). The flipped classroom: A survey of the research. Paper presented at the meeting of the American Society for Engineering Education Annual Conference and Expo, Atlanta, GA.
Bolsen, T., Evans, M., & Fleming, A. M. (2016). A comparison of online and face-to-face approaches to teaching introduction to American government. Journal of Political Science Education, 12, 302–317. doi:10.1080/15512169.2015.1090905
Brackette, C. M. (2014). The scholarship of teaching and learning in clinical mental health counseling. New Directions for Teaching & Learning, 139, 37–48. doi:10.1002/tl.20103
Bristow, D., Shepherd, C. D., Humphreys, M., & Ziebell, M. (2011). To be or not to be: That isn’t the question! An empirical look at online versus traditional brick-and-mortar courses at the university level. Marketing Education Review, 21, 241–250. doi:10.2753/MER1052-8008210304
Brown, M. G. (2016). Blended instructional practice: A review of the empirical literature on instructors’ adoption and use of online tools in face-to-face teaching. The Internet and Higher Education, 31, 1–10. doi:10.1016/j.iheduc.2016.05.001
Cavanagh, M. (2011). Students’ experiences of active engagement through cooperative learning activities in lectures. Active Learning in Higher Education, 12, 23–33. doi:10.1177/1469787410387724
Cohen, J. (1969). Statistical power analysis for the behavioral sciences. New York, NY: Academic Press.
Council for Accreditation of Counseling and Related Educational Programs. (2009). 2009 CACREP Accreditation Manual. Alexandria, VA: Author.
Council for Accreditation of Counseling and Related Educational Programs. (2016). 2016 CACREP Accreditation Manual. Alexandria, VA: Author.
Creswell, J. W., & Creswell, J. D. (2018). Research design: Qualitative, quantitative, and mixed methods approaches (5th ed.). Thousand Oaks, CA: SAGE.
Davies, R. S., Dean, D. L., & Ball, N. (2013). Flipping the classroom and instructional technology integration in a college-level information systems spreadsheet course. Educational Technology Research and Development, 61, 563–580. doi:10.1007/s11423-013-9305-6
Day, J. A., & Foley, J. D. (2006). Evaluating a web lecture intervention in a human–computer interaction course. IEEE Transactions on Education, 49, 420–431. doi:10.1109/TE.2006.879792
Earley, M. (2016). Flipping the graduate qualitative research methods classroom: Did it lead to flipped learning? International Journal of Teaching and Learning in Higher Education, 28, 139–147.
Elmaadaway, M. A. N. (2018). The effects of a flipped classroom approach on class engagement and skill performance in a Blackboard course. British Journal of Educational Technology, 49, 479–491. doi:10.1111/bjet.12553
Faculty Focus. (2015). Special report: Flipped classroom trends: A survey of college faculty. Retrieved from https://www.facultyfocus.com/free-reports/flipped-classroom-trends-a-survey-of-college-faculty/
Foldnes, N. (2016). The flipped classroom and cooperative learning: Evidence from a randomised experiment. Active Learning in Higher Education, 17, 39–49. doi:10.1177/1469787415616726
Fredricks, J. A., Blumenfeld, P. C., & Paris, A. H. (2004). School engagement: Potential of the concept, state of the evidence. Review of Educational Research, 74, 59–109. doi:10.3102/00346543074001059
Fulton, C. L., & Gonzalez, L. M. (2015). Making career counseling relevant: Enhancing experiential learning using a “flipped” course design. Journal of Counselor Preparation & Supervision, 7(2), 38–67. doi:10.7729/72.1126
Gerstein, J. (2012). The flipped classroom: The full picture. Retrieved from https://read.amazon.com/?asin=B008ENPEP6
Gilboy, M. B., Heinerichs, S., & Pazzaglia, G. (2015). Enhancing student engagement using the flipped classroom. Journal of Nutrition Education and Behavior, 47, 109–114. doi:10.1016/j.jneb.2014.08.008
Gladding, S. T., & Ivers, N. N. (2012). Group work: Standards, techniques, practice, and resources. In D. M. Perera-Diltz and K. C. MacCluskie (Eds.), The counselor educator’s survival guide: Designing and teaching outstanding courses in community mental health counseling and school counseling (pp. 171–186). New York, NY: Routledge.
Hao, Y. (2016). Exploring undergraduates’ perspectives and flipped learning readiness in their flipped classrooms. Computers in Human Behavior, 59, 82–92. doi:10.1016/j.chb.2016.01.032
Harper, S. R., & Quaye, S. J. (Eds.). (2009). Student engagement in higher education: Theoretical perspectives and practical approaches for diverse populations. New York, NY: Routledge.
Hoffman, E. S. (2014). Beyond the flipped classroom: Redesigning a research methods course for e3 instruction. Contemporary Issues in Education Research, 7, 51–62.
Kim, M. K., Kim, S. M., Khera, O., & Getman, J. (2014). The experience of three flipped classrooms in an urban university: An exploration of design principles. The Internet and Higher Education, 22, 37–50. doi:10.1016/j.iheduc.2014.04.003
Krause, K.-L., & Coates, H. (2008). Students’ engagement in first-year university. Assessment & Evaluation in Higher Education, 33, 493–505. doi:10.1080/02602930701698892
Kurt, G. (2017). Implementing the flipped classroom in teacher education: Evidence from Turkey. Journal of Educational Technology & Society, 20, 211–221.
Long, T., Cummins, J., & Waugh, M. (2017). Use of the flipped classroom instructional model in higher education: Instructors’ perspectives. Journal of Computing in Higher Education, 29, 179–200. doi:10.1007/s12528-016-9119-8
Lucke, T., Dunn, P. K., & Christie, M. (2017). Activating learning in engineering education using ICT and the concept of ‘Flipping the classroom’. European Journal of Engineering Education, 42, 45–57. doi:10.1080/03043797.2016.1201460
Malott, K. M., Hall, K. H., Sheely-Moore, A., Krell, M. M., & Cardaciotto, L. (2014). Evidence-based teaching in higher education: Application to counselor education. Counselor Education and Supervision, 53, 294–305. doi:10.1002/j.1556-6978.2014.00064.x
McAuliffe, G., & Eriksen, K. (Eds.). (2011). Handbook of counselor preparation: Constructivist, developmental, and experiential approaches. Thousand Oaks, CA: SAGE.
McGivney-Burelle, J., & Xue, F. (2013). Flipping calculus. Problems, Resources, and Issues in Mathematics Undergraduate Studies, 23, 477–486.
McLaughlin, J. E., Griffin, L. M., Esserman, D. A., Davidson, C. A., Glatt, D. M., Roth, M. T., . . . Mumper, R. J. (2013). Pharmacy student engagement, performance, and perception in a flipped satellite classroom. American Journal of Pharmaceutical Education, 77, 1–8. doi:10.5688/ajpe779196
Merlin, C. (2016). Flipping the counseling classroom to enhance application-based learning activities. Journal of Counselor Preparation and Supervision, 8(3), 1–28. doi:10.7729/83.1127
Merlin-Knoblich, C., & Camp, A. (2018). A case study exploring students’ experiences in a flipped counseling course. Counselor Education and Supervision, 57, 301–316. doi:10.1002/ceas.12118
Moran, K., & Milsom, A. (2015). The flipped classroom in counselor education. Counselor Education and Supervision, 54, 32–43. doi:10.1002/j.1556-6978.2015.00068.x
Murphy, J., Chang, J.-M., & Suaray, K. (2016). Student performance and attitudes in a collaborative and flipped linear algebra course. International Journal of Mathematical Education in Science and Technology, 47, 653–673. doi:10.1080/0020739X.2015.1102979
Nguyen, B. T. (2013). Face-to-face, blended, and online instruction: Comparison of student performance and retention in higher education. Dissertation Abstracts International Section A: Humanities and Social Sciences, 73(7-A(E)).
Nouri, J. (2016). The flipped classroom: For active, effective and increased learning—especially for low achievers. International Journal of Educational Technology in Higher Education, 13, 1–10. doi:10.1186/s41239-016-0032-z
O’Brien, B., & Iannone, P. (2018). Students’ experiences of teaching at secondary school and university: Sharing responsibility for classroom engagement. Journal of Further and Higher Education, 42, 922–936. doi:10.1080/0309877X.2017.1332352
Okech, D., Barner, J., Segoshi, M., & Carney, M. (2014). MSW student experiences in online vs. face-to-face teaching formats. Social Work Education, 33, 121–134. doi:10.1080/02615479.2012.738661
Paechter, M., Kreisler, M., Luttenberger, S., Macher, D., & Wimmer, S. (2013). Communication in e-learning courses. The Internet and Higher Education, 44, 429–433. doi:10.1007/s11612-013-0223-1
Platt, C. A., Raile, A. N. W., & Yu, N. (2014). Virtually the same? Student perceptions of the equivalence of online classes to face-to-face classes. Journal of Online Learning and Teaching, 10, 489–503.
Sezer, B. (2016). The effectiveness of a technology-enhanced flipped science classroom. Journal of Educational Computing Research, 55, 471–494. doi:10.1177/0735633116671325
Simpson, V., & Richards, E. (2015). Flipping the classroom to teach population health: Increasing the relevance. Nurse Education in Practice, 15, 162–167. doi:10.1016/j.nepr.2014.12.001
Sommers-Flanagan, J., & Heck, N. (2012). Counseling skills: Building the pillars of professional counseling. In D. M. Perera-Diltz and K. C. MacCluskie (Eds.), The counselor educator’s survival guide: Designing and teaching outstanding courses in community mental health counseling and school counseling (pp. 153–170). New York, NY: Routledge.
Thai, N. T. T., De Wever, B., & Valcke, M. (2017). The impact of a flipped classroom design on learning performance in higher education: Looking for the best “blend” of lectures and guiding questions with feedback. Computers & Education, 107, 113–126. doi:10.1016/j.compedu.2017.01.00
Thompson, B. (2006). Role of effect sizes in contemporary research in counseling. Counseling and Values, 50, 176–186. doi:10.1002/j.2161-007X.2006.tb00054.x
Trowler, V. (2010). Student engagement literature review. The Higher Education Academy. Retrieved from http://www.academia.edu/743769/Student_engagement_literature_review
Wallace, M. L., Walker, J. D., Braseby, A. M., & Sweet, M. S. (2014). “Now, what happens during class?” Using team-based learning to optimize the role of expertise within the flipped classroom. Journal on Excellence in College Teaching, 25, 253–273.
Wang, Z., Bergin, C., & Bergin, D. A. (2014). Measuring engagement in fourth to twelfth grade classrooms: The Classroom Engagement Inventory. School Psychology Quarterly, 29, 517–535. doi:10.1037/spq0000050
Wanner, T., & Palmer, E. (2015). Personalizing learning: Exploring student and teacher perceptions about flexible learning and assessment in a flipped university course. Computers and Education, 88, 354–369. doi:10.1016/j.compedu.2015.07.008
Young, S., & Duncan, H. E. (2014). Online and face-to-face teaching: How do student ratings differ? Journal of Online Learning and Teaching, 10, 70–79.
Zainuddin, Z., & Halili, S. H. (2016). Flipped classroom research and trends from different fields of study. International Review of Research in Open and Distributed Learning, 17, 313–340. doi:10.19173/irrodl.v17i3.2274
Clare Merlin-Knoblich, NCC, is an assistant professor at the University of North Carolina at Charlotte. Pamela N. Harris is an assistant professor at the University of North Carolina at Greensboro. Erin Chase McCarty Mason is an assistant professor at Georgia State University. Correspondence can be addressed to Clare Merlin-Knoblich, 9201 University City Blvd., Charlotte, NC 28223, claremerlin@uncc.edu.
With the understanding that a counselor’s misuse of sympathetic responses might interrupt a therapeutic dialogue and that empathy is vital to the therapeutic alliance, researchers call for counselor educators to promote empathy development in CITs (Bloom & Lambie, in press; DePue & Lambie, 2014). Although there is evidence that some aspects of empathy are dispositional in nature (Badenoch, 2008; Konrath et al., 2011), which might make the counseling profession a strong fit for empathic individuals, empathy training in counseling programs can increase students’ levels of empathy (Ivey, 1971). However, the specific empathy-promoting components of empathy training are less understood (Teding van Berkhout & Malouff, 2016). Overall, empathy is an essential component of the counseling relationship, counselor competency, and the promotion of client outcomes (DePue & Lambie, 2014; Norcross, 2011). However, little is known about the training aspect of empathy and whether or not counselor training programs are effective in enhancing empathy or reducing sympathy among CITs. Thus, the following question guided this research investigation: Are CITs’ levels of empathy or sympathy different from their academic peers? Specifically, do CITs possess greater levels of empathy or sympathy than students from other academic majors?
Empathy in Counseling
Researchers have established consistent support for the importance of the therapeutic relationship in the facilitation of positive client outcomes (Lambert & Bergin, 1994; Norcross, 2011; Norcross & Lambert, 2011). In fact, the therapeutic relationship is predictive of positive client outcomes (Connors, Carroll, DiClemente, Longabaugh, & Donovan, 1997; Krupnick et al., 1996), accounting for about 30% of the variance in those outcomes (Lambert & Barley, 2001). That is, clients who perceive the counseling relationship to be meaningful have more positive treatment outcomes (Bell, Hagedorn, & Robinson, 2016; Norcross & Lambert, 2011). One of the key factors in the establishment of a strong therapeutic relationship is a counselor’s ability to experience and communicate empathy. Researchers estimate that empathy alone may account for as much as 7–10% of overall treatment outcomes (Bohart, Elliott, Greenberg, & Watson, 2002; Sachse & Elliott, 2002), making it an important construct to foster in counselors.
Despite the importance of empathy in the counseling process, much of the literature on empathy training in counseling is outdated. Thus, little is known about the training aspect of empathy; that is, how is empathy taught to and learned by counselors? Nevertheless, early scholars (Barrett-Lennard, 1986; Ivey, 1971; Ivey, Normington, Miller, Morrill, & Haase, 1968; Truax & Carkhuff, 1967) posited that counselor empathy is a clinical skill that may be practiced and learned, and there is supporting evidence that empathy training may be efficacious.
In one seminal study, Truax and Lister (1971) conducted a 40-hour empathy training program with 12 counselor participants and identified statistically significant increases in participants’ levels of empathy. In their investigation, the researchers employed methods in which (a) the facilitator modeled empathy, warmth, and genuineness throughout the training program; (b) therapeutic groups were used to integrate empathy skills with personal values; and (c) researchers coded three 4-minute clips of each participant’s counseling using scales of accurate empathy and non-possessive warmth (Truax & Carkhuff, 1967). Despite identifying statistically significant changes in participants’ scores of empathy, it is necessary to note that participants who initially demonstrated low levels of empathy remained lower than participants who initially scored high on the empathy measures. In a later study modeled after the Truax and Lister study, Silva (2001) utilized a combination of didactic, experiential, and practice components in her empathy training program, and found that counselor trainee participants (N = 45) improved their overall empathy scores on Truax’s Accurate Empathy Scale (Truax & Carkhuff, 1967). These findings contribute to the idea that empathy increases as a result of empathy training.
More recent researchers (Lam, Kolomitro, & Alamparambil, 2011; Ridley, Kelly, & Mollen, 2011) have identified the most common methods in empathy training programs as experiential training, didactic (lecture), skills training, and other mixed methods such as role play and reflection. In their meta-analysis, Teding van Berkhout and Malouff (2016) examined the effect of empathy training programs across various populations (e.g., university students, health professionals, patients, other adults, teens, and children) using the training methods identified above. The researchers investigated the effect of cognitive, affective, and behavioral empathy training and found a statistically significant medium effect size overall (g ranged from 0.51 to 0.73). The effect size was larger in health professionals and university students compared to other groups such as teenagers and adult community members. Though empathy increased as a result of empathy training studies, the specific mechanisms that facilitated positive outcomes remain largely unknown.
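The effect sizes reported by Teding van Berkhout and Malouff (2016) are Hedges’ g values, a standardized mean difference with a correction for small-sample bias. As a minimal illustration of how such an effect size is computed (a textbook sketch, not the meta-analysts’ code):

```python
import numpy as np

def hedges_g(treatment, control):
    """Standardized mean difference (Hedges' g) between two groups.

    Illustrative sketch only; the pooled-SD formula and small-sample
    correction factor follow standard meta-analytic convention.
    """
    t = np.asarray(treatment, dtype=float)
    c = np.asarray(control, dtype=float)
    n1, n2 = len(t), len(c)
    # Pooled standard deviation across both groups
    pooled = np.sqrt(((n1 - 1) * t.var(ddof=1) + (n2 - 1) * c.var(ddof=1))
                     / (n1 + n2 - 2))
    d = (t.mean() - c.mean()) / pooled  # Cohen's d
    # Hedges' small-sample bias correction
    j = 1 - 3 / (4 * (n1 + n2 - 2) - 1)
    return j * d
```

A positive g indicates higher post-training empathy in the treatment group; values near 0.5 to 0.7, as in the meta-analysis above, are conventionally read as medium effects.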
Although research indicates that empathy training can be effective, specific empathy-fostering skills are still not fully understood. Programmatically, empathy is taught to counselors within basic counseling skills (Bayne & Jangha, 2016), specifically because empathy is believed to lie in the accurate reflection of feeling and meaning (Truax & Carkhuff, 1967). But scholars argue that there is more to empathy than the verbal communication of understanding (Davis, 1980; Vossen et al., 2015). For example, in a more recent study, DePue and Lambie (2014) reported that counselor trainees’ scores on the Empathic Concern subscale of the Interpersonal Reactivity Index (IRI; Davis, 1980) increased as a result of engaging in counseling practicum experience under live supervision in a university-based clinical counseling and research center. In their study, the researchers did not actively engage in empathy training. Rather, they measured counseling students’ pre- and post-scores on an empathy measure as a result of students’ engagement in supervised counseling work to foster general counseling skills. Implications of these findings mirror those described by Teding van Berkhout and Malouff (2016), namely that it is difficult to identify specific empathy-promoting mechanisms. In other words, it appears that empathy training, when employed, produces successful outcomes in CITs. However, counseling students’ empathy also increases in the absence of specific empathy-promoting programs. This raises the question: Are counseling programs successfully training their counselors to be empathic, and is there a difference between CITs’ empathy or sympathy levels compared to students in other academic majors?
Thus, the purpose of the present study was to (a) examine differences in empathy (i.e., affective empathy and cognitive empathy) and sympathy levels among emerging adult college students, and (b) determine whether CITs had different levels of empathy and sympathy when compared to their academic peers.
Methods
Participants
We identified master’s-level CITs as the population of interest in this investigation. We intended to compare CITs to other graduate and undergraduate college student populations. Thus, we utilized a convenience sample from a larger data set that included emerging adult college students between the ages of 18 and 29 who were enrolled in at least one undergraduate- or graduate-level course at nine colleges and universities throughout the United States. Participants were included regardless of demographic variables (e.g., gender, race, ethnicity).
Participants were recruited from three sources: online survey distribution (n = 448; 51.6%), face-to-face data collection (n = 361; 41.6%), and email solicitation (n = 34; 3.9%). In total, 10,157 potential participants had access to the online survey, which was distributed through the psychology department at a large Southeastern university; however, the automated system limited responses to 999 participants. We and our contacts (i.e., faculty at other institutions) distributed an additional 800 physical data collection packets to potential participants, and 105 additional potential participants were solicited by email. Overall, 1,713 data packets were completed, resulting in a sample of 1,598 participants after data cleaning. However, in order to conduct the analyses for this study, it was necessary to limit our sample to groups of approximately equal sizes (Hair, Black, Babin, & Anderson, 2010). Therefore, we were limited to the use of a subsample of 868 participants. Our sample appeared similar to other samples included in investigations exploring empathy with emerging adult college students (e.g., White, heterosexual, female; Konrath et al., 2011).
The participants included in this investigation were enrolled in one of six majors and programs of study, including Athletic Training/Health Sciences (n = 115; 13.2%); Biology/Biomedical Sciences/Preclinical Health Sciences (n = 167; 19.2%); Communication (n = 163; 18.8%); Counseling (n = 153; 17.6%); Nursing (n = 128; 14.7%); and Psychology (n = 142; 16.4%). It is necessary to note that students self-identified their major rather than selecting it from a preexisting list. Therefore, we examined responses and categorized similar responses under one uniform title. For example, responses of psych were included with psychology. Further, in order to attain homogeneity among group sizes, we included multiple tracks within one program. For example, counseling included participants enrolled in either clinical mental health counseling (n = 115), marriage and family counseling (n = 24), or school counseling (n = 14) tracks. Table 1 presents additional demographic information (e.g., age, race, ethnicity, graduate-level status). It is necessary to note that, because of the constraints of the dataset, counseling students consisted of master’s-level graduate students, whereas all other groups consisted of undergraduate students.
Table 1
Participants’ Demographic Characteristics
Characteristic                                                 n    Total %
Age
  18–19                                                      460      52.4
  20–21                                                      155      17.9
  22–23                                                      130      15.0
  24–25                                                       58       6.7
  26–27                                                       36       4.1
  28–29                                                       27       3.1
Gender
  Female                                                     692      79.7
  Male                                                       167      19.2
  Other                                                        8       0.9
Racial Background
  Caucasian                                                  624      71.9
  African American/African/Black                             101      11.6
  Biracial/Multiracial                                        65       7.5
  Asian/Asian American                                        40       4.6
  Native American                                              3       0.3
  Other                                                       25       2.9
Ethnicity
  Hispanic                                                   172      19.8
  Non-Hispanic                                               689      79.4
Academic Enrollment
  Undergraduate                                              709      81.7
  Graduate                                                   152      17.5
  Other                                                        5       0.6
Academic Major
  Athletic Training/Health Sciences                          115      13.2
  Biology/Biomedical Sciences/Preclinical Health Sciences    167      19.2
  Counseling                                                 153      17.6
  Communication                                              163      18.8
  Nursing                                                    128      14.7
  Psychology                                                 142      16.4
Note. N = 868.
Procedure
The data utilized in this study were collected as part of a larger study that was approved by the authors’ institutional review board (IRB) as well as additional university IRBs where data was collected, as requested. We followed the Tailored Design Method (Dillman, Smyth, & Christian, 2009), a series of recommendations for conducting survey research to increase participant motivation and decrease attrition, throughout the data collection process for both web-based survey and face-to-face administration. Participants received informed consent, assuring potential participants that their responses would be confidential and their anonymity would be protected. We also made the survey convenient and accessible to potential participants by making it available either in person or online, and by avoiding the use of technical language (Dillman et al., 2009).
We received approval from the authors of the Adolescent Measure of Empathy and Sympathy (AMES; Vossen et al., 2015; personal communication with H. G. M. Vossen, July 10, 2015) to use the instrument and converted the data collection packet (e.g., demographic questionnaire, AMES) into Qualtrics (2013) for survey distribution. We solicited feedback from 10 colleagues regarding the legibility and parsimony of the physical data collection packets and the accuracy of the survey links. We implemented all recommendations and changes (e.g., clarifying directions on the demographic questionnaire) prior to data collection.
All completed data collection packets were assigned a unique ID, and we entered the data into the IBM SPSS software package for Windows, Version 22. No identifying information (e.g., participants’ names) was collected. Having collected data both in person and online via web-based survey, we applied rigorous data collection procedures to increase response rates, reduce attrition, and mitigate the potential influence of external confounding factors that might contribute to measurement error.
Data Instrumentation
Demographics profile. We included a general demographic questionnaire to facilitate a comprehensive understanding of the participants in our study. We included items related to various demographic variables (e.g., age, race, ethnicity). Regarding participants’ identified academic program, participants were prompted to respond to an open-ended question asking “What is your major area of study?”
AMES. Multiple assessments exist to measure empathy (e.g., the IRI, Davis, 1980, 1983; The Basic Empathy Scale [BES], Jolliffe & Farrington, 2006), but each is limited by several shortcomings (Carré, Stefaniak, D’Ambrosio, Bensalah, & Besche-Richard, 2013). First, many scales measure empathy as a single construct without distinguishing cognitive empathy from affective empathy (Vossen et al., 2015). Moreover, the wording used in most scales is ambiguous, such as items from other assessments that use words like “swept up” or “touched by” (Vossen et al., 2015), and few scales differentiate empathy from sympathy. Therefore, Vossen and colleagues designed the AMES as an empathy assessment that addresses problems related to ambiguous wording and differentiates empathy from sympathy.
The AMES is a 12-item empathy assessment with three factors: (a) Cognitive Empathy, (b) Affective Empathy, and (c) Sympathy. Each factor consists of four items rated on a 5-point Likert scale with ratings of 1 (never), 2 (almost never), 3 (sometimes), 4 (often), and 5 (always). Higher AMES scores indicate greater levels of cognitive empathy (e.g., “I can tell when someone acts happy, when they actually are not”), affective empathy (e.g., “When my friend is sad, I become sad too”), and sympathy (e.g., “I feel concerned for other people who are sick”). The AMES was developed in two studies with Dutch adolescents (Vossen et al., 2015). The researchers identified a 3-factor model with acceptable to good internal consistency per factor: (a) Cognitive Empathy (α = 0.86), (b) Affective Empathy (α = 0.75), and (c) Sympathy (α = 0.76). Further, Vossen et al. (2015) established evidence of strong test-retest reliability, construct validity, and discriminant validity when using the AMES to measure scores of empathy and sympathy with their samples. Despite being normed with samples of Dutch adolescents, Vossen and colleagues suggested the AMES might be an effective measure of empathy and sympathy with alternate samples as well.
Bloom and Lambie (in press) examined the factor structure and internal consistency of the AMES with a sample of emerging adult college students in the United States (N = 1,598) and identified a 3-factor model fitted to nine items that demonstrated strong psychometric properties and accounted for over 60% of the variance explained (Hair et al., 2010). The modified 3-factor model included the same three factors as the original AMES. Therefore, we followed Bloom and Lambie’s modifications for our use of the instrument.
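Scoring an instrument like the AMES reduces to averaging each factor’s Likert-rated items. The sketch below assumes the original 12-item form and a hypothetical item-to-factor mapping, which is illustrative only and not the published scoring key:

```python
import numpy as np

# Hypothetical item-to-factor mapping for a 12-item, three-factor
# Likert instrument like the AMES (four items per factor, rated 1-5).
# The actual published key may assign items differently.
FACTORS = {
    "cognitive_empathy": [0, 1, 2, 3],
    "affective_empathy": [4, 5, 6, 7],
    "sympathy": [8, 9, 10, 11],
}

def score_ames(responses):
    """Return each factor's score as the mean of its items.

    `responses` is a sequence of 12 Likert ratings (1-5) for one
    respondent; higher scores indicate greater levels of the construct.
    """
    r = np.asarray(responses, dtype=float)
    return {name: r[idx].mean() for name, idx in FACTORS.items()}
```

For example, a respondent answering 4 on every cognitive item would receive a Cognitive Empathy score of 4.0 under this mapping.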
Data Screening
Before running the main analysis on the variables of interest, we assessed whether the data met the assumptions necessary to conduct a one-way between-subjects MANOVA. First, we conducted a series of tests to evaluate the presence of patterns in missing data and determined that data were missing completely at random (MCAR) and ignorable (i.e., < 5%; Kline, 2011). Because of the robust size of these data (i.e., > 20 observations per cell) and the minimal amount of missing data, we determined listwise deletion to be best practice for conducting a MANOVA while maintaining fidelity to the data (Hair et al., 2010; Osborne, 2013).
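Listwise deletion of this kind can be sketched as follows; the toy score matrix (rows as respondents, columns as AMES subscale scores) is hypothetical and not drawn from the study’s data:

```python
import numpy as np

# Toy matrix: rows are respondents, columns are subscale scores
# (e.g., affective empathy, cognitive empathy, sympathy). np.nan
# marks a missing response.
scores = np.array([
    [3.2, 3.9, 4.4],
    [np.nan, 3.7, 4.1],   # missing affective score -> dropped
    [3.5, np.nan, 4.0],   # missing cognitive score -> dropped
    [2.8, 3.6, 4.3],
])

# Listwise deletion: keep only rows with no missing values.
complete_cases = scores[~np.isnan(scores).any(axis=1)]
```

Here only the two fully complete respondents survive, mirroring how the analytic sample shrinks under listwise deletion.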
Next, we utilized histograms, Q-Q plots, and boxplots to assess for normality and identified non-normal data patterns. However, MANOVA is considered “robust” to violations of normality with a sample size of at least 20 in each cell (Tabachnick & Fidell, 2013). Thus, with our smallest cell containing 115 participants, we considered our data robust to this violation. We likewise assumed our data violated the assumption of multivariate normality. However, Hair et al. (2010) stated that “violations of this assumption have little impact with larger sample sizes” (p. 366), while cautioning that such data often fail to achieve a non-significant Box’s M test. Indeed, our data violated the assumption of homogeneity of variance-covariance matrices (p < .01). However, this was not a concern with these data because “a violation of this assumption has minimal impact if the groups are of approximately equal size (i.e., largest group size ÷ smallest group size < 1.5)” (Hair et al., 2010, p. 365).
It is necessary to note that MANOVA is sensitive to outlier values. To mitigate the negative effects of extreme scores, we removed cases (n = 3) with standardized z-scores greater than +4 or less than −4 (Hair et al., 2010). This resulted in a final sample size of 868 participants.
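As a rough sketch of this screening rule (with hypothetical data, not the authors’ actual code), outlier removal by standardized score can be expressed as:

```python
import numpy as np

def remove_extreme_outliers(scores, threshold=4.0):
    """Drop observations whose standardized z-score exceeds +/- threshold.

    Mirrors the screening rule described above (Hair et al., 2010);
    `scores` is a 1-D array of subscale scores. Illustrative helper only.
    """
    z = (scores - scores.mean()) / scores.std(ddof=1)
    return scores[np.abs(z) <= threshold]
```

Note that a z-score cutoff is computed relative to the sample’s own mean and standard deviation, so a single extreme value stands out more sharply in large samples than in small ones.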
We also utilized scatterplots to check for non-linear relationships between the dependent variables and failed to identify evidence of non-linearity. Therefore, we proceeded with the assumption that our data shared linear relationships. We also evaluated the data for multicollinearity. Participants’ scores of Affective Empathy shared statistically significant but modest correlations with their scores of Cognitive Empathy (r = .24) and Sympathy (r = .43), and participants’ scores of Cognitive Empathy were similarly related to their scores of Sympathy (r = .36; all ps < .01), indicating no evidence of multicollinearity. Overall, we determined these data to be appropriate for a MANOVA. Table 2 presents participants’ scores by academic discipline.
Table 2
AMES Scores by Academic Major
Scale                     M      SD     Range
Athletic Training
  Affective Empathy      3.20   0.80    4.00
  Cognitive Empathy      3.80   0.62    3.33
  Sympathy               4.34   0.55    2.67
Biomedical Sciences
  Affective Empathy      3.12   0.76    4.00
  Cognitive Empathy      3.66   0.59    3.00
  Sympathy               4.30   0.61    2.00
Communication
  Affective Empathy      3.18   0.87    4.00
  Cognitive Empathy      3.80   0.62    2.67
  Sympathy               4.27   0.69    3.00
Counseling
  Affective Empathy      3.32   0.60    3.33
  Cognitive Empathy      3.83   0.48    4.00
  Sympathy               4.32   0.54    2.00
Nursing
  Affective Empathy      3.37   0.71    3.67
  Cognitive Empathy      3.80   0.59    2.67
  Sympathy               4.46   0.49    2.00
Psychology
  Affective Empathy      3.28   0.78    4.00
  Cognitive Empathy      3.86   0.59    2.67
  Sympathy               4.35   0.65    2.67
Note. N = 868.
Results
Participants’ scores on the AMES were used to measure participants’ levels of empathy and sympathy. Descriptive statistics were used to compare empathy and sympathy levels between counseling students and emerging adult college students from other disciplines. CITs recorded the second highest levels of affective empathy (M = 3.32, SD = 0.60) and cognitive empathy (M = 3.83, SD = 0.48), and the fourth highest levels of sympathy (M = 4.32, SD = 0.54) when compared to students from other disciplines. Nursing students demonstrated the highest levels of affective empathy (M = 3.37, SD = 0.71) and sympathy (M = 4.46, SD = 0.49), and psychology students recorded the highest levels of cognitive empathy (M = 3.86, SD = 0.59). The internal consistency values for the AMES subscales were as follows: Cognitive Empathy (α = 0.86), Affective Empathy (α = 0.75), and Sympathy (α = 0.76).
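The internal consistency coefficients reported above are Cronbach’s alpha values. As a brief illustration of how alpha is computed from an item-score matrix (a standard textbook formula, not the authors’ analysis code):

```python
import numpy as np

def cronbach_alpha(items):
    """Cronbach's alpha for an (n_respondents, n_items) score matrix.

    alpha = k/(k-1) * (1 - sum of item variances / variance of totals),
    where k is the number of items. Values near 1 indicate that the
    items vary together, i.e., high internal consistency.
    """
    items = np.asarray(items, dtype=float)
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1).sum()   # sum of per-item variances
    total_var = items.sum(axis=1).var(ddof=1)     # variance of summed scores
    return (k / (k - 1)) * (1 - item_vars / total_var)
```

By convention, subscale alphas in the 0.70s, like those above for Affective Empathy and Sympathy, are read as acceptable, and values above 0.80 as good.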
We performed a MANOVA to examine differences in empathy and sympathy in emerging adult college students by academic major, including counseling. The predictor was the 6-level categorical academic major variable, and the three criterion variables were levels of affective empathy (M = 3.24, SD = 0.76), cognitive empathy (M = 3.80, SD = 0.58), and sympathy (M = 4.34, SD = 0.60). The multivariate effect of major was statistically non-significant: Wilks’s Λ = .972, F(15, 2374.483) = 1.615, p = .062, η² = .009. Furthermore, the univariate F tests for affective empathy (p = .139), cognitive empathy (p = .074), and sympathy (p = .113) were statistically non-significant. That is, there was no difference in levels of affective empathy, cognitive empathy, or sympathy based on academic major, including counseling. Thus, these data indicated that CITs were no more empathic or sympathetic than students in other majors, as measured by the AMES.
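The multivariate test statistic used here, Wilks’s lambda, is the ratio of the within-group generalized variance to the total generalized variance across the dependent variables. A minimal sketch of the statistic (illustrative only, not the authors’ analysis code, which was run in SPSS) is:

```python
import numpy as np

def wilks_lambda(groups):
    """Wilks' lambda for a one-way MANOVA.

    `groups` is a list of (n_i, p) arrays, one per group, with p
    dependent variables. Lambda = det(W) / det(W + B), where W and B
    are the within- and between-group SSCP matrices. Values near 1
    indicate little group effect; the MANOVA F test converts lambda
    to an approximate F statistic.
    """
    grand_mean = np.vstack(groups).mean(axis=0)
    p = len(grand_mean)
    W = np.zeros((p, p))  # within-group sums of squares and cross-products
    B = np.zeros((p, p))  # between-group sums of squares and cross-products
    for g in groups:
        centered = g - g.mean(axis=0)
        W += centered.T @ centered
        d = (g.mean(axis=0) - grand_mean).reshape(-1, 1)
        B += len(g) * (d @ d.T)
    return np.linalg.det(W) / np.linalg.det(W + B)
```

A lambda of .972, as reported above, means group membership accounted for only a small share of the generalized variance in the three AMES scores.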
We also examined these data for differences in affective empathy, cognitive empathy, and sympathy based on data collection method and educational level. However, we failed to identify a statistically significant difference between groups in empathy or sympathy based on data collection method (e.g., online survey distribution, face-to-face data collection, email solicitation) or by educational level (e.g., master’s level or undergraduate status). Thus, these data indicate that data collection method and participants’ educational level did not influence our results.
Discussion
The purpose of the present study was to (a) examine differences in empathy (i.e., affective empathy and cognitive empathy) and sympathy levels among emerging adult college students, and (b) determine whether CITs demonstrate different levels of empathy and sympathy when compared to their academic peers. We hypothesized that CITs would record greater levels of empathy and lower levels of sympathy when compared to their non-counseling peers, because of either their clinical training from their counselor education program or the possibility that the counseling profession might attract individuals with strong levels of dispositional empathy. Participants’ scores on the AMES were used to measure participants’ levels of empathy and sympathy. We conducted a MANOVA to determine if participants’ levels of empathy and sympathy differed when grouped by academic majors. CITs did not exhibit statistically significant differences in levels of empathy or sympathy when compared to students from other academic programs. In fact, CITs recorded levels of empathy that appeared comparable to students from other academic disciplines. This finding is consistent with literature indicating that even if empathy training is effective, counselor education programs might not be emphasizing empathy development in CITs or employing empathy training sufficiently. We also failed to identify statistically significant differences in participants’ AMES scores when grouping data by collection method or participants’ educational level. Thus, we believe our results were not influenced by our data collection method or by participants’ educational level.
Implications for Counselor Educators
The results from this investigation indicated that there was not a statistically significant difference in participants’ levels of cognitive or affective empathy or sympathy regardless of academic program, suggesting that CITs do not possess more or less empathy or sympathy than their academic peers. This was true for students in all majors under investigation (i.e., athletic training/health sciences, biology/biomedical sciences/preclinical health sciences, communication, counseling, nursing, and psychology), regardless of age and whether or not they belonged to professions considered helping professions (i.e., counseling, nursing, psychology). Although students in helping professions tended to have higher scores on the AMES than their peers, these differences were not statistically significant.
One might hypothesize that students in helping professions (especially in professions in which individuals have direct contact with clients or patients, such as counseling) would have significantly higher levels of empathy. However, counseling programs may not attract individuals who possess greater levels of trait empathy, or training programs might not be as effective in training their students as previously thought. Although microskills are taught in counselor preparation programs (e.g., reflection of content, reflection of feeling), microskill training might not overlap with material that is taught as part of an empathy training or enhance such training. Thus, microskill training might not be any more impactful for CITs’ development of empathy and sympathy than material included in training programs of other academic disciplines (e.g., athletic training, nursing).
Another potential reason for the lack of recorded differences between CITs and their non-counseling peers could be that counseling students are inherently anxious, skill-focused, self-focused, or have limited self-other awareness (Stoltenberg, 1981; Stoltenberg & McNeill, 2010). We wonder whether CITs focus less on utilizing relationship-building approaches than on work that promotes introspection and reflection. Another consideration is whether CITs possess a greater understanding of empathy as a construct, which might inadvertently lead them to rate themselves lower in empathy than their non-counseling peers. Further, it is possible that CITs minimize their own levels of empathy in an effort to demonstrate modesty, a phenomenon related to altruism and understood as the modesty bias (McGuire, 2003). Future research would be helpful to better understand these mitigating factors. Nevertheless, we suggest that counseling programs might do more to foster empathy-facilitating experiences by being more proactive and effective in promoting empathy development in CITs. Through a review of the literature, we found support that empathy training is possible, and we wonder if there is a missed opportunity to effectively train counselors when counselor education programs do not intentionally facilitate empathy development in their CITs.
Counselor training programs are not charged to develop empathy in CITs; however, given the importance of empathy in the formation and maintenance of a therapeutic relationship, we propose that counseling training programs consider ways in which empathy is or is not being developed in their specific program. As such, we urge counselor educators to consider strategies to emphasize empathy development in their CITs. For example, reviewing developmental aspects of empathy in children, adolescents, and adults might fit well in a human development course, and the subject can be used to facilitate a conversation with CITs regarding their experiences of empathy development.
Similarly, because empathy consists of cognitive and affective components, CITs might benefit from work that assists them in gaining insight into areas of strengths and limitations in regard to both cognitive and affective aspects of empathy. Students who appear stronger in one area of empathy might benefit from practicing skills related to the other aspect of empathy. For example, if a student has a strong awareness of a client’s experience (i.e., cognitive empathy) but appears to have limitations in their felt sense of a client’s experience (i.e., affective empathy), a counselor educator might utilize live supervision opportunities to assist the student in recognizing present emotions or sensations in their body when working with the client or in a role play. Alternatively, to assist a student with developing a greater intellectual understanding of their client’s experience, a counselor educator might employ interpersonal process recall when reviewing their clinical work to help the student identify what their client might be experiencing as a result of their lived experience. To echo recommendations made by Bayne and Jangha (2016), we encourage counselor educators to move away from an exclusive focus on microskills for teaching empathy and to provide opportunities to teach CITs how to foster a connecting experience through creative means (e.g., improvisational skills).
Furthermore, the results from this study indicated that CITs possess higher levels of sympathy than of both cognitive and affective components of empathy. We recommend that counselor educators facilitate CITs’ understanding of the differences between empathy and sympathy and bring awareness to their use of sympathetic rather than empathic responses. It is our hope that CITs will possess a strong enough understanding of the distinction between empathy and sympathy to be able to choose either response as it fits within a counseling context (Clark, 2010). We also encourage counselor educators to consider recommendations made by Bloom and Lambie (in press) to employ the AMES with CITs. The AMES could be a valuable and accessible tool to assist counselor educators in evaluating CITs’ levels of empathy and sympathy in regard to course assignments, in response to clinical situations, or as a global measure of empathy development. As Bloom and Lambie encouraged, clinical training programs might benefit from using the AMES as a tool to programmatically measure CITs’ levels of empathy throughout their experience in their training program (i.e., at transition points) as a way to collect programmatic data.
Limitations
Although this study produced important findings, some limitations exist. The majority of participants attended universities located in the Southeastern United States; as a result, the sample might not be representative of students nationwide. Similarly, the demographic characteristics of the sample, including its race, age, and gender composition, limit the generalizability of the findings.
This study also is limited in that the instrument used to assess empathy and sympathy was a self-report measure. Although self-report measures have been shown to be reliable and are widely used within research, these measures might result in the under- or over-reporting of the variables of interest (Gall, Gall, & Borg, 2007). It is necessary to note that we employed the AMES, which was normed with adolescents and not undergraduate or graduate students. Although we recognize that inherent differences exist between adolescent and emerging adult populations, we believed the AMES was an effective choice to measure empathy because of Vossen and colleagues’ (2015) intentional development of the instrument to address existing weaknesses in other empathy assessment instruments. Nonetheless, it is necessary to interpret our results with caution.
Recommendations for Future Research
We recommend future researchers address some of the limitations of this study. Specifically, we recommend continuing to compare CITs’ levels of empathy with those of students from other academic disciplines, while including a more diverse array of academic backgrounds. Similarly, we suggest future researchers not limit themselves to an emerging adult population, as both undergraduate and graduate populations include individuals over the age of 29. Further, researchers should aim to collect data from students across the country and to include a more demographically diverse sample in their research designs.
Additionally, it is necessary to note that limitations exist to using self-report measures (Gall et al., 2007), and measures of empathy are vulnerable to a myriad of complications (Bloom & Lambie, in press; Vossen et al., 2015). Thus, we encourage future researchers to consider using different measures of empathy that move away from a self-report format (e.g., clients’ perceptions of cognitive and affective empathy within a therapeutic relationship; Flasch et al., in press). Another area for future research is to track counseling students’ levels of empathy as they enter the counseling profession after graduation. It is possible that as they become more comfortable and competent as counselors, and as anxiety and self-focus decrease, their ability to empathize increases.
There is agreement in the counseling profession that empathy is an important characteristic for counselors to embody in order to facilitate positive client outcomes and to meet counselor competency standards (DePue & Lambie, 2014). Yet scholars have grappled with how to identify the skills necessary to foster empathy in counselor trainees and remain divided on which approaches to use. Although empathy training programs seem effective, little is known about which aspects of such programs are the effective ingredients that promote empathy-building, and we lack understanding about whether such programs are more effective than simply engaging in clinical work or having life experiences. Thus, we encourage researchers to explore whether counseling programs are effective at teaching empathy to CITs and to further explore mechanisms that may or may not be valuable in empathy development.
Conflict of Interest and Funding Disclosure
The authors reported no conflict of interest or funding contributions for the development of this manuscript.
References
Badenoch, B. (2008). Being a brain-wise therapist: A practical guide to interpersonal neurobiology. New York, NY: W. W. Norton & Company.
Barrett-Lennard, G. T. (1986). The Relationship Inventory now: Issues and advances in theory, method, and use. In L. S. Greenberg & W. M. Pinsof (Eds.), The psychotherapeutic process: A research handbook (pp. 439–476). New York, NY: Guilford Press.
Batson, C. D., & Shaw, L. L. (1991). Evidence for altruism: Toward a pluralism of prosocial motives. Psychological Inquiry, 2(2), 107–122. doi:10.1207/s15327965pli0202_1
Bayne, H. B., & Jangha, A. (2016). Utilizing improvisation to teach empathy skills in counselor education. Counselor Education and Supervision, 55(4), 250–262. doi:10.1002/ceas.12052
Beder, J. (2004). Lessons about bereavement. Journal of Loss & Trauma, 9, 383–387. doi:10.1080/15325020490491014
Bell, H., Hagedorn, W. B., & Robinson, E. H. M. (2016). An exploration of supervisory and therapeutic relationships and client outcomes. Counselor Education and Supervision, 55(3), 182–197. doi:10.1002/ceas.12044
Bloom, Z. D., & Lambie, G. W. (in press). The Adolescent Measure of Empathy and Sympathy in a sample of emerging adults. Measurement and Evaluation in Counseling and Development.
Bohart, A. C., Elliott, R., Greenberg, L. S., & Watson, J. C. (2002). Empathy. In J. C. Norcross (Ed.), Psychotherapy relationships that work: Therapist contributions and responsiveness to patients (pp. 89–108). New York, NY: Oxford University Press.
Carré, A., Stefaniak, N., D’Ambrosio, F., Bensalah, L., & Besche-Richard, C. (2013). The Basic Empathy Scale in Adults (BES-A): Factor structure of a revised form. Psychological Assessment, 25, 679–691. doi:10.1037/a0032297
Clark, A. J. (2004). Empathy: Implications of three ways of knowing in counseling. The Journal of Humanistic Counseling, Education & Development, 43, 141–151. doi:10.1002/j.2164-490X.2004.tb00014.x
Clark, A. J. (2010). Empathy and sympathy: Therapeutic distinctions in counseling. Journal of Mental Health Counseling, 32(2), 95–101. doi:10.17744/mehc.32.2.228n116thw397504
Connors, G. J., Carroll, K. M., DiClemente, C. C., Longabaugh, R., & Donovan, D. M. (1997). The therapeutic alliance and its relationship to alcoholism treatment participation and outcome. Journal of Consulting and Clinical Psychology, 65, 588–598. doi:10.1037/0022-006X.65.4.588
Council for Accreditation of Counseling and Related Educational Programs. (2016). 2016 Standards. Alexandria, VA: Author.
Davis, M. H. (1980). A multidimensional approach to individual differences in empathy. JSAS Catalog of Selected Documents in Psychology, 10, 85.
Davis, M. H. (1983). Measuring individual differences in empathy: Evidence for a multidimensional approach. Journal of Personality and Social Psychology, 44, 113–126. doi:10.1037/0022-3514.44.1.113
Decety, J., & Ickes, W. (Eds.). (2009). The social neuroscience of empathy. Cambridge, MA: MIT Press.
DePue, M. K., & Lambie, G. W. (2014). Impact of a university-based practicum experience on counseling students’ levels of empathy and assessed counseling competences. Counseling Outcome Research and Evaluation, 5(2), 89–101. doi:10.1177/2150137814548509
Dillman, D. A., Smyth, J. D., & Christian, L. M. (2009). Internet, mail, and mixed-mode surveys: The tailored design method (3rd ed.). Hoboken, NJ: Wiley.
Eisenberg, N., Eggum, N. D., & Di Giunta, L. (2010). Empathy-related responding: Associations with prosocial behavior, aggression, and intergroup relations. Social Issues and Policy Review, 4, 143–180. doi:10.1111/j.1751-2409.2010.01020.x
Elliott, R., Bohart, A. C., Watson, J. C., & Greenberg, L. S. (2011). Empathy. Psychotherapy, 48, 43–49. doi:10.1037/a0022187
Flasch, P. S., Limberg-Ohrt, D., Fox, J., Ohrt, J., Crunk, E., & Robinson, E. (in press). Experiences of altruistic caring by clients and their counselors in the counseling session. Counseling & Values Journal.
Gall, M. D., Gall, J. P., & Borg, W. R. (2007). Educational research: An introduction (8th ed.). Boston, MA: Allyn & Bacon.
Goleman, D. (1995). Emotional intelligence: Why it can matter more than IQ for character, health and lifelong achievement. New York, NY: Bantam Books.
Hair, J. F., Jr., Black, W. C., Babin, B. J., & Anderson, R. E. (2010). Multivariate data analysis (7th ed.). Upper Saddle River, NJ: Prentice Hall.
Ivey, A. E. (1971). Microcounseling: Innovations in interviewing training. Oxford, England: Charles C. Thomas.
Ivey, A. E., Normington, C. J., Miller, C. D., Morrill, W. H., & Haase, R. F. (1968). Microcounseling and attending behavior: An approach to prepracticum counselor training. Journal of Counseling Psychology, 15(5), 1–12.
Jolliffe, D., & Farrington, D. P. (2006). Development and validation of the Basic Empathy Scale. Journal of Adolescence, 29, 589–611. doi:10.1016/j.adolescence.2005.08.010
Kirchberg, T. M., Neimeyer, R. A., & James, R. K. (1998). Beginning counselors’ death concerns and empathic responses to client situations involving death and grief. Death Studies, 22(2), 99–120.
Kline, R. B. (2011). Principles and practice of structural equation modeling (3rd ed.). New York, NY: Guilford Press.
Konrath, S. H., O’Brien, E. H., & Hsing, C. (2011). Changes in dispositional empathy in American college students over time: A meta-analysis. Personality and Social Psychology Review, 15, 180–198. doi:10.1177/1088868310377395
Krupnick, J. L., Sotsky, S. M., Simmens, S., Moyer, J., Elkin, I., Watkins, J., & Pilkonis, P. A. (1996). The role of a therapeutic alliance in psychotherapy and pharmacotherapy outcome: Findings in the National Institute of Mental Health Treatment of Depression Collaborative Research Program. Journal of Consulting and Clinical Psychology, 64, 532–539.
Lam, T. C. M., Kolomitro, K., & Alamparambil, F. C. (2011). Empathy training: Methods, evaluation practices, and validity. Journal of Multidisciplinary Evaluation, 7(16), 162–200.
Lambert, M. J., & Barley, D. E. (2001). Research summary on the therapeutic relationship and psychotherapy outcome. Psychotherapy: Theory, Research, Practice, Training, 38, 357–361. doi:10.1037/0033-3204.38.4.357
Lambert, M. J., & Bergin, A. E. (1994). The effectiveness of psychotherapy. In A. E. Bergin & S. L. Garfield (Eds.), Handbook of psychotherapy and behavior change (4th ed.; pp. 143–189). Oxford, England: John Wiley & Sons.
McGuire, A. M. (2003). “It was nothing”: Extending evolutionary models of altruism by two social cognitive biases in judgment of the costs and benefits of helping. Social Cognition, 21, 363–394. doi:10.1521/soco.21.5.363.28685
McWhirter, B. T., Besett-Alesch, T. M., Horibata, J., & Gat, I. (2002). Loneliness in high risk adolescents: The role of coping, self-esteem, and empathy. Journal of Youth Studies, 5, 69–84. doi:10.1080/13676260120111779
Norcross, J. C. (Ed.). (2011). Psychotherapy relationships that work: Evidence-based responsiveness (2nd ed.). New York, NY: Oxford University Press.
Norcross, J. C., & Lambert, M. J. (2011). Psychotherapy relationships that work II. Psychotherapy: Theory, Research, Practice, Training, 48, 4–8.
Osborne, J. W. (2013). Best practices in data cleaning: A complete guide to everything you need to do before and after collecting your data. Thousand Oaks, CA: Sage.
Qualtrics. (2013). Qualtrics software (Version 37,892) [Computer software]. Provo, UT: Qualtrics Research Suite.
Ridley, C. R., Kelly, S. M., & Mollen, D. (2011). Microskills training: Evolution, reexamination, and call for reform. The Counseling Psychologist, 39, 800–824. doi:10.1177/0011000010378438
Rogers, C. R. (1957). The necessary and sufficient conditions of therapeutic personality change. Journal of Consulting Psychology, 21, 95–103. doi:10.1037/h0045357
Sachse, R., & Elliott, R. (2002). Process–outcome research on humanistic therapy variables. In D. J. Cain & J. Seeman (Eds.), Humanistic psychotherapies: Handbook of research and practice (pp. 83–115). Washington, DC: American Psychological Association.
Siegel, D. J. (2010). Mindsight: The new science of personal transformation. New York, NY: Bantam Books.
Silva, N. W. (2001). Effect of empathy training on masters-level counseling students. (Unpublished doctoral dissertation). University of Florida, Gainesville, FL.
Spreng, R. N., McKinnon, M. C., Mar, R. A., & Levine, B. (2009). The Toronto Empathy Questionnaire: Scale development and initial validation of a factor-analytic solution to multiple empathy measures. Journal of Personality Assessment, 91, 62–71. doi:10.1080/00223890802484381
Stoltenberg, C. (1981). Approaching supervision from a developmental perspective: The counselor complexity model. Journal of Counseling Psychology, 28, 59–65. doi:10.1037/0022-0167.28.1.59
Stoltenberg, C. D., & McNeill, B. W. (2010). IDM supervision: An integrative developmental model for supervising counselors & therapists (3rd ed.). New York, NY: Routledge.
Szalavitz, M., & Perry, B. D. (2010). Born for love: Why empathy is essential—and endangered. New York, NY: Harper Collins.
Tabachnick, B. G., & Fidell, L. S. (2013). Using multivariate statistics (6th ed.). Upper Saddle River, NJ: Pearson.
Teding van Berkhout, E., & Malouff, J. M. (2016). The efficacy of empathy training: A meta-analysis of randomized controlled trials. Journal of Counseling Psychology, 63, 32–41. doi:10.1037/cou0000093
Truax, C. B., & Carkhuff, R. R. (1967). Toward effective counseling and psychotherapy: Training and practice. Chicago, IL: Aldine Publishing.
Truax, C. B., & Lister, J. L. (1971). Effects of short-term training upon accurate empathy and non-possessive warmth. Counselor Education and Supervision, 10(2), 120–125. doi:10.1002/j.1556-6978.1971.tb01430.x
Vossen, H. G. M., Piotrowski, J. T., & Valkenburg, P. M. (2015). Development of the Adolescent Measure of Empathy and Sympathy (AMES). Personality and Individual Differences, 74, 66–71. doi:10.1016/j.paid.2014.09.040
Watson, J. C., Steckley, P. L., & McMullen, E. J. (2014). The role of empathy in promoting change. Psychotherapy Research, 24, 286–298. doi:10.1080/10503307.2013.802823
Zachary D. Bloom is an assistant professor at Northeastern Illinois University. Victoria A. McNeil is a doctoral candidate at the University of Florida. Paulina Flasch is an assistant professor at Texas State University. Faith Sanders is a mental health counselor at Neuropeace Wellness Counseling in Orlando, Florida. Correspondence can be addressed to Zachary Bloom, 5500 North St. Louis Avenue, Chicago, IL 60625, z-bloom@neiu.edu.
Jun 28, 2018 | Volume 8 - Issue 2
William H. Snow, Margaret R. Lamar, J. Scott Hinkle, Megan Speciale
The Council for Accreditation of Counseling & Related Educational Programs (CACREP) database of institutions revealed that, as of March 2018, 36 CACREP-accredited institutions offered 64 online degree programs. As the number of online programs with CACREP accreditation continues to grow, an expanding body of research supporting best practices in remote instruction refutes the ongoing perception that online instruction is inherently inferior to residential programming. The purpose of this article is to explore the current literature, outline the features of current online programs, and report the survey results of 31 online counselor educators describing their distance education experience, including the challenges they face and the methods they use to ensure student success.
Keywords: online, distance education, remote instruction, counselor education, CACREP
Counselor education programs are increasingly offered via distance education, commonly referred to as distance learning or online education. Growth in online counselor education has followed a trend similar to that in higher education generally (Allen & Seaman, 2016). Adult learners prefer varied methods of obtaining education, which is especially important in counselor education among students who work full-time, have families, and prefer the flexibility of distance learning (Renfro-Michel, O’Halloran, & Delaney, 2010). Students choose online counselor education programs for many reasons, including geographic isolation, student immobility, time-intensive work commitments, childcare responsibilities, and physical limitations (The College Atlas, 2017). Others may choose online learning simply because it fits their learning style (Renfro-Michel et al., 2010). Additionally, underserved and marginalized populations may benefit from the flexibility and accessibility of online counselor education and training.
The Council for Accreditation of Counseling & Related Educational Programs (CACREP; 2015) accredits online programs and has determined that these programs meet the same standards as residential programs. Consequently, counselor education needs a greater awareness of how online programs deliver instruction and meet CACREP standards. Specifically, existing online programs will benefit from learning how other online programs exceed minimum accreditation expectations by utilizing the newest technologies and pedagogical approaches (Furlonger & Gencic, 2014). The current study provides information regarding the state of online counselor education in the United States by exploring faculty members’ descriptions of their online programs, including their current technologies, their approaches to building student and program community, and the challenges they face.
Distance Education Defined
Despite its common usage throughout higher education, the U.S. Department of Education (DOE) does not use the terms distance learning, online learning, or online education; rather, it has adopted the term distance education (DOE, 2012). In practice, however, these terms are used interchangeably. The DOE has defined distance education as the use of one or more technologies that deliver instruction to students who are separated from the instructor and that support “regular and substantive interaction between the students and the instructor, either synchronously or asynchronously” (2012, p. 5). The DOE has specified that these technologies may include the internet, one-way and two-way transmissions through open broadcast and other communications devices, audioconferencing, videocassettes, DVDs, and CD-ROMs. Programs are considered distance education programs if 50% or more of their instruction is delivered via distance learning technologies; residential programs may contain distance education elements and still characterize themselves as residential if less than 50% of their instruction is delivered this way. Traditional on-ground universities are incorporating online components at increasing rates; in fact, 67% of students in public universities took at least one distance education course in 2014, further reflecting the growth of this teaching modality (Allen & Seaman, 2016).
Enrollment in online education continues to grow, with nearly 6 million students in the United States engaged in distance education courses (Allen & Seaman, 2016). Approximately 2.8 million students are taking online classes exclusively. In a conservative estimate, over 25% of students enrolled in CACREP programs are considered distance learning students. In a March 2018 review of the CACREP database of accredited institutions, there were 36 accredited institutions offering 64 degree programs. Although accurate numbers are not available from any official sources, it is a conservative estimate that over 12,000 students are enrolled in a CACREP-accredited online program. When comparing this estimate to the latest published 2016 CACREP enrollment figure of 45,820 (CACREP, 2017), online students now constitute over 25% of the total. This does not include many other residential counselor education students in hybrid programs who may take one or more classes through distance learning means.
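The enrollment share cited above follows from simple arithmetic on the article's own estimates; a minimal check, using the figures as stated:

```python
# Back-of-envelope check of the enrollment share reported above.
# Both figures are the article's estimates, not official counts.
online_students = 12_000      # estimated CACREP online enrollment
total_enrollment = 45_820     # published 2016 CACREP enrollment figure

share = online_students / total_enrollment
print(f"{share:.1%}")         # prints "26.2%", i.e., over 25% of the total
```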
At the time of this writing, three additional institutions were listed as under CACREP review, and their students will likely soon be added to this growing online enrollment. As this trend continues, it is essential for counselor education programs to understand issues, trends, and best practices in online education in order to make informed choices regarding counselor education and training, as well as to prepare graduates for employment. It also is important for hiring managers in mental health agencies to understand the nature and quality of the training that graduates of these programs have received.
One important factor contributing to the growth of online learning is the accessibility it can bring to diverse populations throughout the world (Sells, Tan, Brogan, Dahlen, & Stupart, 2012). For instance, populations without access to traditional residential, brick-and-mortar classroom experiences can benefit from the greater flexibility and ease of attendance that distance learning offers (Bennett-Levy, Cromarty, Hawkins, & Mills, 2012). Remote areas of the United States, including rural and frontier regions, often lack physical access to counselor education programs, which limits the number of service providers in these traditionally underserved areas. Additionally, the online counselor education environment makes it possible for commuters to take some of their coursework remotely, especially in winter, when travel can become a safety issue, and in urban areas, where travel is lengthy and stressful because of traffic.
The Online Counselor Education Environment
The Association for Counselor Education and Supervision (ACES) Technology Interest Network (2017) recently published guidelines for distance education within counselor education that offer useful suggestions to online counselor education programs and to programs looking to establish online courses. Current research supports that successful distance education programs include active and engaged faculty–student collaboration, frequent communication, sound pedagogical frameworks, and interactive and technically uncomplicated support and resources (Benshoff & Gibbons, 2011; Murdock & Williams, 2011). Physical distance and the associated lack of student–faculty connection have been a concern in the development of online counselor education programs. In its infancy, videoconferencing was unreliable, unaffordable, and often a technological distraction to the learning process. The newest wave of technology-enhanced distance education has improved interactions, using email, e-learning platforms, and threaded discussion boards to make asynchronous messaging virtually instantaneous (Hall, Nielsen, Nelson, & Buchholz, 2010). Today, with the availability of affordable and reliable products such as GoToMeeting, Zoom, and Adobe Connect, online counselor educators regularly hold live, synchronous meetings with students, including individual advising, group supervision, and entire class sessions.
It is important to convey that online interactions are different from face-to-face interactions, but they are not inferior to an in-person faculty–student learning relationship (Hickey, McAleer, & Khalili, 2015). Students and faculty may prefer one modality over the other, often depending on their belief in the effectiveness of the modality overall and in their own fit for this style of teaching and learning (Watson, 2012). In the actual practice of distance education, professors and students are an email, phone call, or videoconference away; thus, communication with peers and instructors is readily accessible (Murdock & Williams, 2011; Trepal, Haberstroh, Duffey, & Evans, 2007). When communicating online, students may feel more relaxed and less inhibited, which may facilitate more self-disclosure, reflexivity, and rapport via increased dialogue (Cummings, Foels, & Chaffin, 2013; Watson, 2012). Subsequently, faculty who are well organized, technologically proficient, and responsive to students’ requests may prefer online teaching opportunities and find their online student connections more engaging and satisfying (Meyer, 2015). Upon Institutional Review Board approval, an exploratory survey of online counselor educators was conducted in 2016 and 2017 to better understand the current state of distance counselor education in the United States.
Method
Participants
Recruitment of participants was conducted via the ACES Listserv (CESNET). No financial incentive or other reward was offered for participation. The 31 participants comprised a convenience sample, a common first step in preliminary research efforts (Kerlinger & Lee, 1999). Participants categorized themselves as full-time faculty members (55.6%), part-time faculty members (11.1%), academic chairs and department heads (22.2%), academic administrators (3.7%), or serving in other roles (7.4%).
Study Design and Procedure
The survey was written and administered using Qualtrics, a commercial web-based product. The survey contained questions aimed at exploring online counselor education programs, including current technologies utilized, approaches to reducing social distance, development of community among students, major challenges in conducting online counselor education, and current practices in meeting these challenges. The survey was composed of one demographic question, 15 multiple-response questions, and two open-ended survey questions. The demographic question asked about the respondent’s role in the university. The 15 multiple-response questions included items such as: (a) How does online counselor education fit into your department’s educational mission? (b) Do you provide a residential program in which to compare your students? (c) How successful are your online graduates in gaining postgraduate clinical placements and licensure? (d) What is the average size of an online class with one instructor? and (e) How do online students engage with faculty and staff at your university? Two open-ended questions were asked: “What are the top 3 to 5 best practices you believe are most important for the successful online education of counselors?” and “What are the top 3 to 5 lessons learned from your engagement in the online education of counselors?”
Additional questions focused on type of department and its organization, graduates’ acceptance to doctoral programs, amount of time required on the physical campus, e-learning platforms and technologies, online challenges, and best practices for online education and lessons learned. The 18 survey questions were designed for completion in no more than 20 minutes and the survey was active for 10 months, during which time there were three appeals for responses yielding 31 respondents.
Procedure
An initial recruiting email and three follow-ups were sent via CESNET. Potential participants were invited to visit a web page that first led to an introductory paragraph and informed consent page. An embedded skip logic system required consent before allowing access to the actual survey questions.
The results were exported from the Qualtrics web-based survey product, and the analysis of the 15 fixed-response questions produced descriptive statistics. Cross-tabulations and chi-square statistics further compared the perceptions of faculty with those of respondents identifying themselves as departmental chairs and administrators.
The two open-ended questions—“What are the top 3 to 5 best practices you believe are most important for the successful online education of counselors?” and “What are the top 3 to 5 lessons learned from your engagement in the online education of counselors?”—yielded 78 statements about lessons learned and 80 statements about best practices, for a total of 158 statements. The analysis of the 158 narrative comments began with individually analyzing each response and identifying and extracting common words and phrases. Many responses contained more than one suggestion or comment; some were a paragraph in length, so more than one key word or phrase could be drawn from a single narrative response. This first step yielded a master list of 18 common words and phrases. The second step was to review each comment again, compare it to this master list, and place a check mark for each matching category. The third step was to look for similarities among the 18 common words and group them into a smaller number of meaningful categories. These steps were checked among the researchers for fidelity of reporting and trustworthiness.
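The tallying step of the coding procedure described above can be sketched in code. The categories, keyword matching, and responses below are invented placeholders, not the study's actual master list or data; a crude substring match stands in for the researchers' judgment.

```python
from collections import Counter

# Hypothetical master list of common words/phrases (the study's list
# had 18 entries; these three are illustrative only).
master_list = ["student engagement", "timely feedback", "technology"]

# Invented narrative responses; a response may match more than one
# category, just as in the study.
responses = [
    "Foster student engagement and give timely feedback.",
    "Invest in technology; technology problems derail classes.",
    "Keep students engaged through discussion.",
]

tally = Counter()
for response in responses:
    text = response.lower()
    for category in master_list:
        # Count the category once per response if any keyword appears.
        if any(word in text for word in category.split()):
            tally[category] += 1

print(dict(tally))
```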
Results
Thirty-one distance learning counselor education faculty, department chairs, and administrators responded to the survey. They reported their maximum class sizes ranged from 10 to 40 with a mean of 20.6 (SD = 6.5), and the average class size was 15.5 (SD = 3.7). When asked how online students are organized within their university, 26% reported that students choose classes on an individual basis, 38% said students are individually assigned classes using an organized schedule, and 32% indicated that students take assigned classes together as a cohort.
Additionally, respondents were asked how online students engage with faculty and staff at their university. Email was the most popular method, used by all respondents (100%), followed by phone calls (94%). Synchronous live group discussions using videoconferencing technologies were used by 87%, and individual video calls were reported by 77%. Asynchronous electronic discussion boards were utilized by 87% of the counselor education programs.
Ninety percent of respondents indicated that remote or distance counseling students were required to attend the residential campus at least once during their program, with 13% requiring students to come to campus only once, 52% requiring students to attend twice, and 26% requiring students to come to a physical campus location four or more times during their program.
All participants indicated using some form of online learning platform, with Blackboard (65%), Canvas (23%), Pearson E-College (6%), and Moodle (3%) among those most often listed. Respondents rated their satisfaction with their current online learning platform as: very dissatisfied (6.5%), dissatisfied (3.2%), somewhat dissatisfied (6.5%), neutral (9.7%), somewhat satisfied (16.1%), satisfied (41.9%), and very satisfied (9.7%). There was no significant relationship between the platform used and the level of satisfaction (χ2(18, N = 30) = 11.036, p > .05), with all platforms faring equally well. Ninety-seven percent of respondents indicated using videoconferencing for teaching and individual advising through programs such as Adobe Connect (45%), Zoom (26%), or GoToMeeting (11%), while 19% reported using an assortment of other related technologies.
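A platform-by-satisfaction comparison of the kind reported above is typically computed as a Pearson chi-square test of independence on a cross-tabulation. A minimal sketch with invented counts (not the study's data) shows the mechanics:

```python
# Invented 2x3 cross-tabulation: rows = platform, columns =
# satisfaction level (dissatisfied / neutral / satisfied).
# These counts are illustrative and do not reproduce the study's table.
observed = [
    [3, 2, 15],   # e.g., Blackboard
    [2, 1,  7],   # e.g., Canvas
]

row_totals = [sum(row) for row in observed]
col_totals = [sum(col) for col in zip(*observed)]
grand_total = sum(row_totals)

# Pearson chi-square statistic: sum of (O - E)^2 / E over all cells,
# where E is the expected count under independence.
chi2 = 0.0
for i, row in enumerate(observed):
    for j, obs in enumerate(row):
        expected = row_totals[i] * col_totals[j] / grand_total
        chi2 += (obs - expected) ** 2 / expected

dof = (len(observed) - 1) * (len(observed[0]) - 1)
print(f"chi2({dof}) = {chi2:.3f}")
```

A statistic this small relative to its degrees of freedom yields a large p-value, mirroring the article's finding of no significant platform–satisfaction relationship.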
Participants were asked about their university’s greatest challenges in providing quality online counselor education. They were given five pre-defined options and a sixth option of “other” with a text box for further elaboration, and were allowed to choose more than one category. Responses included making online students feel a sense of connection to the university (62%), changing faculty teaching styles from traditional classroom models to those better suited for online coursework (52%), providing experiential clinical training to online students (48%), supporting quality practicum and internship experiences for online students residing at a distance from the physical campus (38%), convincing faculty that quality outcomes are possible with online programs (31%), and other (10%).
Each participant was asked what their institution did to ensure students could succeed in online counselor education. They were given three pre-defined options and a fourth option of “other” with a text box for further elaboration, and were allowed to choose more than one option. The responses included specific screening through the admissions process (58%), technology and learning platform support for online students (48%), and assessment for online learning aptitude (26%). Twenty-three percent chose the category of other and mentioned small classes, individual meetings with students, providing student feedback, offering tutorials, and ensuring accessibility to faculty and institutional resources.
Two open-ended questions were asked and narrative comments were analyzed, sorted, and grouped into categories. The first open-ended question was: “What are the top 3 to 5 best practices that are the most important for the successful online education of counselors?” This yielded 78 narrative comments that fit into the categories of fostering student engagement (n = 19), building community and facilitating dialogue (n = 14), supporting clinical training and supervision (n = 11), ensuring courses are well planned and organized (n = 10), providing timely and robust feedback (n = 6), ensuring excellent student screening and advising (n = 6), investing in technology (n = 6), ensuring expectations are clear and set at a high standard (n = 5), investing in top-quality learning materials (n = 4), believing that online counselor education works (n = 3), and other miscellaneous comments (n = 4). Some narrative responses contained more than one suggestion or comment that fit multiple categories.
The second open-ended question—“What are the top 3 to 5 lessons learned from the online education of counselors?”—yielded 80 narrative comments that fit into the categories of fostering student engagement (n = 11), ensuring excellent student screening and advising (n = 11), recognizing that online learning has its own unique workload challenges for students and faculty (n = 11), providing timely and robust feedback (n = 8), building community and facilitating dialogue (n = 7), ensuring courses are well planned and organized (n = 7), investing in technology (n = 6), believing that online counselor education works (n = 6), ensuring expectations are clear and set at a high standard (n = 5), investing in top-quality learning materials (n = 3), supporting clinical training and supervision (n = 2), and other miscellaneous comments (n = 8).
Each participant was asked how online counselor education fit into their department’s educational mission and was given three categorical choices. Nineteen percent stated it was a minor focus of their department’s educational mission, 48% stated it was a major focus, and 32% stated it was the primary focus of their department’s educational mission.
The 55% of participants indicating they had both residential and online programs were asked to respond to three follow-up multiple-choice questions gauging the success rates of their online graduates (versus residential graduates) in attaining: (1) postgraduate clinical placements, (2) postgraduate clinical licensure, and (3) acceptance into doctoral programs. Ninety-three percent stated that online graduates were as successful as residential students in gaining postgraduate clinical placements. Ninety-three percent stated online graduates were equally successful in obtaining state licensure. Eighty-five percent stated online graduates were equally successful in getting acceptance into doctoral programs.
There were some small differences in perception that were analyzed further. Chi-square analyses revealed no statistically significant differences in perceptions of the relative success of online versus residential graduates in gaining postgraduate clinical placements (χ2(2, N = 13) = 0.709, p > .05), obtaining postgraduate clinical licensure (χ2(2, N = 13) = 0.701, p > .05), or being accepted into doctoral programs (χ2(2, N = 12) = 1.33, p > .05).
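For readers who wish to replicate this kind of analysis, the chi-square test of independence reported above can be run with standard statistical tools. The sketch below uses Python's scipy; the contingency table is a hypothetical illustration (the raw cell counts behind the χ2(2, N = 13) tests are not published here), not the study's actual data.

```python
# Hypothetical illustration of a chi-square test of independence like those
# reported above. The cell counts below are placeholders, not the study's data.
import numpy as np
from scipy.stats import chi2_contingency

# Rows: perception of online graduates (less / equally / more successful)
# Columns: a hypothetical grouping variable (e.g., program type)
observed = np.array([
    [1, 0],
    [5, 6],
    [1, 0],
])

# chi2_contingency returns the test statistic, p-value, degrees of freedom,
# and the table of expected frequencies under independence
chi2, p, dof, expected = chi2_contingency(observed)
print(f"X2({dof}, N = {observed.sum()}) = {chi2:.3f}, p = {p:.3f}")
```

With small samples like N = 13, expected cell frequencies fall well below the conventional threshold of 5, so such results should be interpreted cautiously.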
Discussion
The respondents reported that their distance learning courses had a mean class size of 15.5. Students in these classes likely benefit from the small class sizes and the relatively low faculty–student ratio. This figure is lower than that of many residential classes, which can average 25 students or more. It is not clear what the optimal online class size is, but there is evidence that larger classes may introduce burdens difficult for some students to overcome (Chapman & Ludlow, 2010). Beattie and Thiele (2016) found that first-generation students in larger classes were less likely to talk to their professors or teaching assistants about class-related ideas. In addition, Black and Latinx students in larger classes were less likely to talk with their professors about their careers and futures (Beattie & Thiele, 2016).
Programs appeared to have no consistent approach to organizing students and scheduling courses. The three dominant models strike different balances between flexibility and predictability, with advantages and disadvantages to each. Some counselor education programs give students the utmost flexibility in selecting classes, others assign classes using a more controlled schedule, and still others are more rigid and assign students to all classes.
The model for organizing students affects the social connections students make with one another. In concept, models that provide students with more opportunities to engage one another in a consistent pattern of positive interactions produce students who are more comfortable working together, and more comfortable requesting and receiving constructive feedback from their peers and instructors.
Cohort models, in which students take all courses together over the life of a degree program, are the least flexible but most predictable and have the greatest potential for fostering strong connections. When effectively implemented, cohort models can foster a supportive learning environment and greater student collaboration and cohesion with higher rates of student retention and ultimately higher graduation rates (Barnett & Muse, 1993; Maher, 2005). Advising loads can decrease as cohort students support one another as informal peer mentors. However, cohorts are not without their disadvantages and can develop problematic interpersonal dynamics, splinter into sub-groups, and lead to students assuming negative roles (Hubbell & Hubbell, 2010; Pemberton & Akkary, 2010). An alternative model in which students make their own schedules and choose their own classes provides greater flexibility but fewer opportunities to build social cohesion with others in their program. At the same time, these students may not demonstrate the negative dynamics regarding interpersonal engagement that can occur with close cohort groups.
Faculty–Student Engagement
Remote students want to stay in touch with their faculty advisors, course instructors, and fellow students. Numerous social engagement opportunities exist through technological tools, including email, text messages, phone calls, and videoconference advising. These fast and efficient tools provide the same benefits as in-person meetings without the lag time and commute. Faculty and staff need to make it a priority to use these tools and respond to online students in a timely manner.
All of the technological tools referred to in the survey responses provide excellent connectivity and communication when used appropriately. Students want timely responses, but for a busy faculty or staff member it is easy to let emails and voicemails go unattended. Unanswered emails and voicemail messages can create anxiety for students whose only interaction with the program is electronic, and can reinforce a sense of isolation for students who are left “hanging out there” on their own, forced to be resourceful to get their needs met. The term timely therefore needs to be defined and communicated so that faculty and students understand response expectations. Whether responses arrive within 24, 48, or even 72 hours is less important than students knowing when to expect a response.
Survey responses indicated that remote counselor education students are dependent upon technology, including the internet and associated web-based e-learning platforms. When the internet is down, passwords do not work, or computers fail, the remote student’s learning is stalled. Counselor education programs offering online programming must provide administrative services, technology, and learning support for online students in order to quickly remediate technology issues when they occur. It is imperative that standard practice for institutions include the provision of robust technology support to reduce down-time and ensure continuity of operations and connection for remote students.
Fostering Program and Institutional Connections
Faculty were asked how often online students were required to come to a physical campus location as part of their program. Programs often refer to short-term campus visits as limited residencies to clarify that students will need to come to the campus. Limited residencies are standard, with 90% responding that students were required to come to campus at least once. Short-term intensive residencies are excellent opportunities for online students to make connections with their faculty and fellow students (Kops, 2014). Residential intensives also provide opportunities for the university student life office, alumni department, business office, financial aid office, registrar, and other university personnel to connect with students and link a human face to an email address.
Distance learning students want to engage with their university, as well as with fellow students and faculty. They want to feel a sense of connection in much the same way that residential students do (Murdock & Williams, 2011). Institutions should think creatively about opportunities to include online learners in activities beyond the classroom. One university promoted inclusiveness, for example, by moving its traditional weekday residential town halls to a Sunday evening teleconference webinar. The change allowed for greater access, boosted attendance, and helped online counselor education students feel like part of the larger institution.
As brick-and-mortar institutions consider how to better engage distance learning students, they need to understand that a majority of students (53%) taking exclusively distance education courses reside in the same state as the university they are attending (Allen & Seaman, 2016). Because many are within driving distance of the physical campus, these students may be more open to coming to campus for special events, where they can feel that their presence is valued and know that they are not just part of an electronic platform (Murdock & Williams, 2011).
E-Learning Platforms as Critical Online Infrastructure
All participants (100%) reported using an online learning platform. E-learning platforms are standard for sharing syllabi, course organization, schedules, announcements, assignments, discussion boards, homework submissions, tests, and grades. They are foundational in supporting faculty instruction and student success with numerous quality options available. Overall, online faculty were pleased with their technological platforms and there was no clear best platform.
Online learning platforms offer an array of technological features. For example, threaded discussions allow for rich, thoughtful dialogue among students and faculty, and they are often valued by less verbally competitive students who are reluctant to speak up in class but willing to share their comments in writing. Course examinations and quizzes in a variety of formats can be produced and delivered online through e-learning platforms such as Blackboard, Canvas, and Moodle. Faculty have flexibility in when exams are offered and how much time students have to complete them. When platforms are used in conjunction with proctoring services such as Respondus, ProctorU, and B-Virtual, the integrity of the examination process can be assured. Once students complete an exam, software can automatically score and grade objective questions and provide immediate feedback to students.
Videoconferencing and Virtual Remote Classrooms
Videoconferencing for teaching and individual advising through Adobe Connect, Zoom, GoToMeeting, and related technologies is now standard practice and changing the nature of remote learning. Distance learning can now employ virtual classroom models with synchronous audio and video communication that closely parallels what occurs in a residential classroom. Videoconferencing platforms provide tools to share PowerPoints, graphics, and videos as might occur in a residential class. Class participants can write on virtual whiteboards with color markers, annotating almost anything on their screen. Group and private chat functionality can provide faculty with real-time feedback during a class session. Newer videoconferencing features now allow faculty to break students into smaller, private discussion groups and move around to each group virtually, just like what often occurs in a residential classroom. With preparation, faculty can execute integrated survey polls during a video class session. Essentially, videoconferencing tools reduce the distance in distance education.
Videoconference platforms allow faculty to teach clinical skills in nearly the same manner as in residential programs. Counselor education faculty can model skills such as active listening in real time to their online class. Faculty can then have students individually demonstrate those skills while being observed. Embedded features allow faculty to record the video and audio features of any conversation for playback and analysis. Videoconference platforms now offer “breakout” rooms to place students in sub-groups for skills practice and debriefing, similar to working in small groups in residential classrooms. Faculty members and teaching assistants can visit each breakout room to ensure students are on task and properly demonstrating counseling skills. Just as in a residential class, students can reconvene and share the challenges and lessons learned from their small group experience.
Challenges in Providing Remote Counselor Education
Participants were asked to select one or more of their top challenges in providing quality online counselor education. In order of frequency, they reported the greatest challenges as making online students feel a sense of connection to the university (62%), changing faculty teaching styles from brick-and-mortar classroom models to those better suited for online coursework (52%), providing experiential clinical training to online students (48%), supporting quality practicum and internship experiences for online students residing at a distance from the physical campus (38%), and convincing faculty members that quality outcomes are possible with online programs (31%).
Creating a sense of university connection. Counselor education faculty did not report having major concerns with faculty–student engagement. Faculty seemed confident with student learning outcomes using e-learning platforms and videoconferencing tools that serve to reduce social distance between faculty and students and facilitate quality learning experiences. This confidence could be the result of counselor educators’ focus on fostering relationships as a foundational counseling skill (Kaplan, Tarvydas, & Gladding, 2014).
However, faculty felt challenged to foster a student’s sense of connection with the larger university. For example, remote students not receiving emails and announcements about opportunities available only to residential students can feel left out. Remote students might find it difficult to navigate the university student life office, business department, financial aid office, registration system, and other university systems initially designed for residential students. Highly dependent on their smartphone and computer, remote students can feel neglected as they anxiously wait for responses to email and voicemail inquiries (Milman, Posey, Pintz, Wright, & Zhou, 2015).
In the online environment, there are extracurricular options for participating in town halls, special webinars, and open discussion forums with departmental and university leaders. Ninety percent of the programs require students to come to their physical campus one or more times. These short-term residencies are opportunities for students to meet the faculty, departmental chairs, and university leaders face-to-face and further build a sense of connection.
A majority of online students (53%) reside in the same state as the university they are attending (Allen & Seaman, 2016), with many within commuting distance of their brick-and-mortar campus. These students will appreciate hearing about the same opportunities afforded to residential students, and under the right circumstances and scheduling they will participate.
Changing faculty teaching styles. Not all residential teaching styles and methods, such as authority-based lecture formats, work well with all students (Donche, Maeyer, Coertjens, Van Daal, & Van Petegem, 2013). Distance learning students present their own challenges and preferences. Successful distance education programs require active and engaged faculty who frequently communicate with their students, use sound pedagogical frameworks, and maintain a collaborative and interactive style (Benshoff & Gibbons, 2011; Murdock & Williams, 2011). Discovery orientation, discussion, debriefing, action research, and flipped classrooms where content is delivered outside the classroom and the classroom is used to discuss the material are good examples of more collaborative styles (Brewer & Movahedazarhouligh, 2018; Donche et al., 2013).
Organization is critical for all students, but more so for remote students who often are working adults with busy schedules. They want to integrate their coursework into other life commitments and want a clear, well-organized, and thoughtfully planned course with all the requirements published in advance, including specific assignment due dates. Distance counselor education faculty will find their syllabi growing longer with more detail as they work to integrate traditional assignments with the e-learning and videoconferencing tools in order to create engaging, predictable, and enjoyable interactive learning experiences.
Providing experiential clinical training. Counselor educators ideally provide multimodal learning opportunities for counseling students to understand, internalize, and demonstrate clinical skills for a diverse clientele. In residential classrooms, the knowledge component is usually imparted through textbooks, supplemental readings, course assignments, video demonstration, and instructor-led lecture and discussions. All remote programs provide similar opportunities for students and replicate residential teaching models with their use of asynchronous e-learning platforms and synchronous videoconferencing technologies.
Asynchronous methods are not well suited for modeling, teaching, and assessing interpersonal skills. However, synchronous videoconferencing technologies provide the same opportunity as residential settings to conduct “fishbowl” class exercises, break students into groups to practice clinical skills, conduct role plays, apply procedural learning, and give students immediate, meaningful feedback about their skills development.
The majority of surveyed programs required remote students to come to campus at least once to assess students for clinical potential, impart critical skills, and monitor student progress in achieving prerequisite clinical competencies required to start practicum. Courses that teach and assess clinical interviewing skills are well suited for these intensive experiences and provide an important gatekeeping function. Faculty not only have the opportunity to see and hear students engage in role plays, but also to see them interact with other students.
Supporting quality practicum and internship experiences. Remote counselor educators report that their programs are challenged in supporting quality practicum and internship experiences. Residential students benefit from the relationships universities develop over time with local public and nonprofit mental health agencies in which practicum and internship students may cluster at one or more sites. Although online students living close enough to the residential campus may benefit from the same opportunities, remote students living at a distance typically do not experience this benefit. They often have to seek out, interview, and compete for a clinical position at a site unfamiliar to their academic program’s field placement coordinator. Thus, online counselor education students will need field placement coordination that can help with unique practicum and internship requirements. The placement coordinator will need to know how to review and approve distance sites without a physical assessment. Relationships with placement sites will need to rely upon email, phone, and teleconference meetings. Furthermore, online students can live in a state other than where the university is located, requiring the field placement coordinator to be aware of various state laws and regulations.
Convincing faculty that quality outcomes are possible. Approximately one-third of the surveyed counselor education faculty reported the need to convince other faculty that quality outcomes are possible with remote counselor education. Changing the minds of skeptical colleagues is challenging but naturally subject to improvement over time as online learning increases, matures, and becomes integrated into the fabric of counselor education. In the interim, programs would be wise to invest in assisting faculty skeptics to understand that online counselor education can be managed effectively (Sibley & Whitaker, 2015). First, rather than just telling faculty that online counselor education works, programs should demonstrate high levels of interactivity that are comparable to face-to-face engagement by using state-of-the-art videoconferencing platforms. Second, it is worth sharing positive research outcomes related to remote education. Third, it is best to start small by encouraging residential faculty to first try a hybrid course by holding only one or two of their total class sessions online. Fourth, it is important to provide robust support for reluctant but willing faculty who agree to integrate at least one or two online sessions into their residential coursework. Finally, institutions will find more willing faculty if they offer incentives for those who give online counselor education a chance.
Ensuring Online Student Success
Student success is defined by the U.S. Department of Education in terms of student retention, graduation rates, time to completion, academic success, and gainful employment (Bailey et al., 2011). Counselor education programs would likely add clinical success in practicum and internship, as well as post-master’s licensure, to these critical success outcomes.
The survey respondents reported that student success begins with ensuring that the students they accept have the aptitude to learn via online distance education. Students may hold the unrealistic perception that distance education is somehow less academically rigorous, so programs need to ensure students are prepared for the unique aspects of online versus residential learning. Fifty-eight percent of the programs engaged in student screening beginning with the admissions process. A quarter of the respondents used a formal assessment tool to evaluate students on success factors such as motivation, learning style, study habits, access to technology, and technological skills. A commonly used instrument was the Online Readiness Assessment developed by Williams (2017).
Lessons Learned and Best Practices
The 158 statements regarding best practices and lessons learned were further refined to yield six imperatives for success in online counselor education: (1) fostering student–faculty–community engagement (57.4%); (2) providing high expectations, excellent screening, advising, and feedback (36%); (3) investing in quality instructional materials, course development, and technology support (30.5%); (4) providing excellent support for online clinical training and supervision (14.6%); (5) recognizing the workload requirements and time constraints of online students; and (6) working to instill the belief in others that quality outcomes are possible with online counselor education programs (10.1%). The remaining responses (13.5%) were assorted other comments.
An indicator of success for many counselor education programs is the rate at which students graduate, obtain clinical placement, and become licensed. There is also interest in how successful graduates are in gaining admission to doctoral programs. For online programs, a further benchmark is to compare online student graduation, licensure, and doctoral admissions rates to those of residential programs. Fifty-five percent of the respondents served in programs with both residential and online students and were thus able to compare online student outcomes to residential student outcomes. Their perception was that online graduates were as successful as residential students in gaining postgraduate clinical placements (93%), obtaining state licensure (93%), and gaining acceptance into doctoral programs (85%). They generally believed online graduates were competitive with residential graduates.
Limitations, Recommendations, and Conclusion
Limitations of the Study
When this study began in 2016, there were 11 CACREP-accredited institutions offering online counselor education programs, and by March 2018, there were 36. This study represents a single snapshot of the online counselor education experience during a time of tremendous growth.
This study focused on the reported experience of faculty, departmental chairs, and administrators who have some commitment and investment in online learning. Some would point out the bias of those who advocate for remote counselor education in relaying their own experiences, anecdotal evidence, and personal comparisons of online and residential teaching.
Given its exploratory nature, this study was not comprehensive in its inclusion of all the factors associated with online counselor education. Specific details of online counselor education programs were not emphasized; such details could have offered more information about university and departmental resources for remote education, faculty training for online educational formats, and student evaluations of online courses. The numerous technologies in use were identified, but this says nothing about their differential effectiveness. Future studies should include these variables, as well as other factors that will provide further information about the successes and challenges of online counselor education.
This survey assessed the informed opinions of counselor education faculty and administrators who responded that they were generally satisfied with the various aspects of their programs, including student outcomes. What was not assessed was the actual quality of the education itself. In order to change the mind of skeptics, more than opinions and testimonies will be needed. Future studies need to objectively compare learning outcomes, demonstrate quality, and delineate how remote counselor education programs are meeting the challenges of training counselors within distance learning modalities.
Recommendations
The dynamic nature of the field of online counselor education requires ongoing study. As more programs offer courses and full programs through distance learning modalities, they can contribute their own unique expertise and lessons learned to inform and enrich the broader field.
The challenge of faculty skepticism and possible mixed motives regarding online learning will continue to be problematic. There is a lingering perception among some faculty that online counselor education programs are not equivalent to residential training; an inherent bias may equate residential with higher quality and online with lower quality. Some faculty may teach online courses only for additional compensation while privately harboring reservations. In contrast, departmental chairs and academic administrators might want the same high levels of quality but find themselves more driven by the responsibility for meeting enrollment numbers and budgets. In times of scarcity, these individuals may see online counselor education as the answer to the need for new revenue sources (Jones, 2015). For still others, online education may raise concerns even as it appeals through its innovative qualities or its potential to advance social justice by increasing access to higher education for underserved populations. The best way to clarify these issues and better inform the minds of skeptics is to present them with objective data regarding the nature and positive contributions of remote counselor education learning outcomes.
Aside from the modality of their instructional platform, it is important to understand if effective remote counselor educators are different from equally effective residential course instructors. Remote teaching effectiveness might be associated with some combination of attributes, interests, and motivations, and thus self-selection to teach remote students. Further studies will need to tease out what works, what does not work, and what type of faculty and faculty training make someone best suited for participation in remote counselor education.
Technology is critical to the advances in remote counselor education. Email, smartphones, texting, and e-learning platforms have helped faculty create engaging courses with extensive faculty–student interactions. Videoconferencing in particular has served to reduce the social distance between faculty and remote students. As aforementioned, innovative programs are taking the distance out of distance counselor education, where the virtual remote classroom modality provides similar experiences to those of residential classes. The nature of these technologically facilitated online relationships deserves further study to determine which technologies and related protocols enhance learning and which impede it.
A logical next step is to build on the work that has been accomplished and conduct more head-to-head comparisons of student outcomes between remote and residential programs. This is quite feasible, as 34 of the 36 institutions currently offering online counselor education programs also have a residential program with which to make comparisons. These within-institution comparisons will be inherently quasi-experimental. Effective comparisons of delivery models will require systematically implemented, reliable, and valid measures of student learning outcomes at strategic points in the counselor training program. The Counselor Competency Scale (Lambie, Mullen, Swank, & Blount, 2018) is a commonly used standardized assessment for graduate students engaged in clinical practicum and internship. National Counselor Examination scores of current students and recent graduates can provide standardized measures to compare outcomes of graduates across programs.
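As one sketch of such a head-to-head comparison, the example below runs an independent-samples (Welch’s) t-test on standardized exam scores for an online and a residential cohort using Python’s scipy. The scores are fabricated placeholders for demonstration only; a real study would use actual Counselor Competency Scale ratings or examination scores collected at the strategic points described above.

```python
# Sketch of a within-institution comparison of online vs. residential student
# outcomes on a standardized measure. Scores are illustrative placeholders.
from scipy.stats import ttest_ind

online_scores = [104, 98, 110, 95, 102, 99, 107, 101]
residential_scores = [106, 100, 97, 103, 108, 96, 105, 102]

# Welch's t-test (equal_var=False) does not assume equal group variances,
# a safer default when cohort sizes and spreads may differ
t_stat, p_value = ttest_ind(online_scores, residential_scores, equal_var=False)
print(f"Welch's t = {t_stat:.2f}, p = {p_value:.3f}")
```

Because students self-select into delivery modalities, a nonsignificant difference in such a quasi-experimental comparison supports, but does not prove, equivalence of outcomes.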
Finally, although we can learn from institutional best practices and student success stories, we also could benefit from understanding why some programs, faculty, and students struggle. Challenges are certainly faced in remote counselor education and training, but it is likely that one or more programs have developed innovative concepts to surmount these obstacles. The 31 respondents were able to articulate many best practices to manage challenges and believed they were achieving the same learning objectives achieved by residential counseling students. Many faculty members, departmental chairs, and administrators believed that remote counselor education graduates are as successful as those attending residential programs, but this opinion is not universally shared. What is clear is that despite some reservations, a growing number of counselors are trained via a remote modality. It is time to embrace distance counselor education; learn from best practices, successes, and struggles; and continue to improve outcomes for the benefit of programs, the profession of counseling, and the consumers of the services our graduates provide.
Conflict of Interest and Funding Disclosure
The authors reported no conflict of interest or funding contributions for the development of this manuscript.
References
Allen, I. E., & Seaman, J. (2016). Online report card: Tracking online education in the United States. Babson Survey Research Group. Retrieved from https://onlinelearningsurvey.com/reports/onlinereportcard.pdf
Association for Counselor Education and Supervision Technology Interest Network. (2017). ACES guidelines for online learning in counselor education. Retrieved from https://www.acesonline.net/sites/default/files/Online%20Learning%20CES%20Guidelines%20May%202017%20(1).pdf
Bailey, M., Benitiz, M., Burton, W., Carey, K., Cunningham, A., Fraire, J., . . . Wheelan, B. (2011). Committee on measures of student success: A report to Secretary of Education Arne Duncan. U.S. Department of Education. Retrieved from https://www2.ed.gov/about/bdscomm/list/cmss-committee-report-final.pdf
Barnett, B. G., & Muse, I. D. (1993). Cohort groups in educational administration: Promises and challenges. Journal of School Leadership, 3, 400–415.
Beattie, I. R., & Thiele, M. (2016). Connecting in class? College class size and inequality in academic social capital. The Journal of Higher Education, 87, 332–362.
Bennett-Levy, J., Hawkins, R., Perry, H., Cromarty, P., & Mills, J. (2012). Online cognitive behavioural therapy training for therapists: Outcomes, acceptability, and impact of support: Online CBT training. Australian Psychologist, 47(3), 174–182. doi:10.1111/j.1742-9544.2012.00089.x
Benshoff, J. M., & Gibbons, M. M. (2011). Bringing life to e-learning: Incorporating a synchronous approach to online teaching in counselor education. The Professional Counselor, 1, 21–28. doi:10.15241/jmb.1.1.21
Brewer, R., & Movahedazarhouligh, S. (2018). Successful stories and conflicts: A literature review on the effectiveness of flipped learning in higher education. Journal of Computer Assisted Learning, 1–8. doi:10.1111/jcal.12250
Chapman, L., & Ludlow, L. (2010). Can downsizing college class sizes augment student outcomes? An investigation of the effects of class size on student learning. Journal of General Education, 59(2), 105–123. doi:10.5325/jgeneeduc.59.2.0105
The College Atlas. (2017). 41 facts about online students. Retrieved from https://www.collegeatlas.org/41-surprising-facts-about-online-students.html
Council for Accreditation of Counseling & Related Educational Programs. (2015). 2016 CACREP standards. Washington, DC: Author.
Council for Accreditation of Counseling & Related Educational Programs. (2017). Annual report 2016. Washington, DC: Author.
Cummings, S. M., Foels, L., & Chaffin, K. M. (2013). Comparative analysis of distance education and classroom-based formats for a clinical social work practice course. Social Work Education, 32, 68–80. doi:10.1080/02615479.2011.648179
Donche, V., De Maeyer, S., Coertjens, L., Van Daal, T., & Van Petegem, P. (2013). Differential use of learning strategies in first-year higher education: The impact of personality, academic motivation, and teaching strategies. British Journal of Educational Psychology, 83, 238–251. doi:10.1111/bjep.12016
Furlonger, B., & Gencic, E. (2014). Comparing satisfaction, life-stress, coping and academic performance of counselling students in on-campus and distance education learning environments. Australian Journal of Guidance and Counselling, 24, 76–89. doi:10.1017/jgc.2014.2
Hall, B. S., Nielsen, R. C., Nelson, J. R., & Buchholz, C. E. (2010). A humanistic framework for distance education. The Journal of Humanistic Counseling, 49, 45–57. doi:10.1002/j.2161-1939.2010.tb00086.x
Hickey, C., McAleer, S. J., & Khalili, D. (2015). E-learning and traditional approaches in psychotherapy education: Comparison. Archives of Psychiatry and Psychotherapy, 4, 48–52.
Hubbell, L., & Hubbell, K. (2010). When a college class becomes a mob: Coping with student cohorts. College Student Journal, 44, 340–353.
Jones, C. (2015). Openness, technologies, business models and austerity. Learning, Media and Technology, 40, 328–349. doi:10.1080/17439884.2015.1051307
Kaplan, D. M., Tarvydas, V. M., & Gladding, S. T. (2014). 20/20: A vision for the future of counseling: The new consensus definition of counseling. Journal of Counseling & Development, 92, 366–372. doi:10.1002/j.1556-6676.2014.00164.x
Kerlinger, F. N., & Lee, H. B. (1999). Foundations of behavioral research (4th ed.). Fort Worth, TX: Wadsworth.
Kops, W. J. (2014). Teaching compressed-format courses: Teacher-based best practices. Canadian Journal of University Continuing Education, 40, 1–18.
Lambie, G. W., Mullen, P. R., Swank, J. M., & Blount, A. (2018). The Counseling Competencies Scale: Validation and refinement. Measurement and Evaluation in Counseling and Development, 51, 1–15. doi:10.1080/07481756.2017.1358964
Maher, M. A. (2005). The evolving meaning and influence of cohort membership. Innovative Higher Education, 30(3), 195–211.
Meyer, J. M. (2015). Counseling self-efficacy: On-campus and distance education students. Rehabilitation Counseling Bulletin, 58(3), 165–172. doi:10.1177/0034355214537385
Milman, N. B., Posey, L., Pintz, C., Wright, K., & Zhou, P. (2015). Online master’s students’ perceptions of institutional supports and resources: Initial survey results. Online Learning, 19(4), 45–66.
Murdock, J. L., & Williams, A. M. (2011). Creating an online learning community: Is it possible? Innovative Higher Education, 36, 305–315. doi:10.1007/s10755-011-9188-6
Pemberton, C. L. A., & Akkary, R. K. (2010). A cohort, is a cohort, is a cohort . . . Or is it? Journal of Research on Leadership Education, 5(5), 179–208.
Renfro-Michel, E. L., O’Halloran, K. C., & Delaney, M. E. (2010). Using technology to enhance adult learning in the counselor education classroom. Adultspan Journal, 9, 14–25. doi:10.1002/j.2161-0029.2010.tb00068.x
Sells, J., Tan, A., Brogan, J., Dahlen, U., & Stupart, Y. (2012). Preparing international counselor educators through online distance learning. International Journal for the Advancement of Counselling, 34, 39–54. doi:10.1007/s10447-011-9126-4
Sibley, K., & Whitaker, R. (2015, March 16). Engaging faculty in online education. Educause Review. Retrieved from https://er.educause.edu/articles/2015/3/engaging-faculty-in-online-education
Trepal, H., Haberstroh, S., Duffey, T., & Evans, M. (2007). Considerations and strategies for teaching online counseling skills: Establishing relationships in cyberspace. Counselor Education and Supervision, 46(4), 266–279. doi:10.1002/j.1556-6978.2007.tb00031.x
U.S. Department of Education Office of Postsecondary Education Accreditation Division. (2012). Guidelines for preparing/reviewing petitions and compliance reports. Retrieved from https://www.asccc.org/sites/default/files/USDE%20_agency-guidelines.pdf
Watson, J. C. (2012). Online learning and the development of counseling self-efficacy beliefs. The Professional Counselor, 2, 143–151. doi:10.15241/jcw.2.2.143
Williams, V. (2017). Online readiness assessment. Penn State University. Retrieved from https://pennstate.qualtrics.com/jfe/form/SV_7QCNUPsyH9f012B
William H. Snow is an associate professor at Palo Alto University. Margaret R. Lamar is an assistant professor at Palo Alto University. J. Scott Hinkle, NCC, is Director of Professional Development at the National Board for Certified Counselors. Megan Speciale, NCC, is an assistant professor at Palo Alto University. Correspondence can be addressed to William Snow, 1791 Arastradero Road, Palo Alto, CA 94304, wsnow@paloaltou.edu.
Jun 28, 2018 | Volume 8 - Issue 2
Marisa C. Rapp, Steven J. Moody, Leslie A. Stewart
The Council for Accreditation of Counseling & Related Educational Programs (CACREP) standards call for doctoral preparation programs to graduate students who are competent in gatekeeping functions. Despite these standards, little is understood regarding the development and training of doctoral students in their roles as gatekeepers. We propose a call for further investigation into doctoral student gatekeeper development and training in gatekeeping practices. Additionally, we provide training and programmatic curriculum recommendations derived from current literature for counselor education programs. Finally, we discuss implications of gatekeeping training in counselor education along with future areas of research for the profession.
Keywords: gatekeeping, counselor education, doctoral students, programmatic curriculum, CACREP
Gatekeeping practices in counselor education are highly visible in current literature, as counselor impairment continues to be a significant concern for the mental health professions (Brown-Rice & Furr, 2015; Homrich, DeLorenzi, Bloom, & Godbee, 2014; Lumadue & Duffey, 1999; Rapisarda & Britton, 2007; Rust, Raskin, & Hill, 2013; Ziomek-Daigle & Christensen, 2010). V. A. Foster and McAdams (2009) found that counselor educators are frequently faced with counselors-in-training (CITs) whose professional performance fails to meet program standards. Although gatekeeping practices in counselor education have been cursorily examined over the past 40 years (Ziomek-Daigle & Christensen, 2010), more recent literature indicates a need to further address this topic (Brown-Rice & Furr, 2016; Burkholder, Hall, & Burkholder, 2014).
In the past two decades, researchers have examined the following aspects of gatekeeping: student selection; retention; remediation; policies and procedures; and experiences of faculty members, counseling students, and clinical supervisors (Brown-Rice & Furr, 2013, 2015, 2016; V. A. Foster & McAdams, 2009; Gaubatz & Vera, 2002; Homrich et al., 2014; Lumadue & Duffey, 1999; Parker et al., 2014; Rapisarda & Britton, 2007; Ziomek-Daigle & Christensen, 2010). Although the aforementioned areas of study are needed to address the complex facets of the gatekeeping process, there is a noticeable lack of research examining how counselor education programs are preparing and educating future faculty members to begin their role as gatekeepers.
Because doctoral degree programs in counselor education are intended to prepare graduates to work in a variety of roles (Council for Accreditation of Counseling & Related Educational Programs [CACREP], 2015), program faculty must train doctoral students in each of the roles and responsibilities expected of a future faculty member or supervisor. Authors of previous studies have examined constructs of identity, development, practice, and training in the various roles that doctoral students assume, including investigations into a doctoral student’s researcher identity (Lambie & Vaccaro, 2011), supervisor identity (Nelson, Oliver, & Capps, 2006), doctoral professional identity transition (Dollarhide, Gibson, & Moss, 2013), and co-teaching experiences (Baltrinic, Jencius, & McGlothlin, 2016). Studies investigating the various elements of these roles are both timely and necessary (Fernando, 2013; Lambie & Vaccaro, 2011; Nelson et al., 2006); yet, there is a dearth of research examining the complex development of emergent gatekeeper identity. In order to empower counseling programs in training the next generation of competent and ethical professional counselors, the development of doctoral students’ gatekeeping skills and identity must be more fully understood.
The Complexity of Gatekeeping in Counselor Education
Gatekeeping is defined as a process to determine suitability for entry into the counseling profession (Brown-Rice & Furr, 2015). When assessing this professional suitability, academic training programs and clinical supervisors actively evaluate CITs during their training as a means to safeguard the integrity of the profession and protect client welfare (Brear, Dorrian, & Luscri, 2008; Homrich et al., 2014). Evaluators who question a CIT’s clinical, academic, and dispositional fitness but fail to intervene with problematic behavior run the risk of endorsing a student who is not ready for the profession. This concept is referred to as gateslipping (Gaubatz & Vera, 2002). Brown-Rice and Furr (2014) found that consequences of gateslipping can impact client care, other CITs, and the entire counseling profession.
Gatekeeping for counselor educators and supervisors is understood as an especially demanding and complex responsibility (Brear & Dorrian, 2010). Potential complications include personal and professional confrontations (Kerl & Eichler, 2005), working through the emotional toll of dismissing a student (Gizara & Forrest, 2004), lack of preparation for facilitating difficult conversations (Jacobs et al., 2011), and fear of legal reprisal when assuming the role of gatekeeper (Homrich et al., 2014). Homrich (2009) found that although counselor educators feel comfortable evaluating academic and clinical competencies, they often experience difficulty evaluating dispositional competencies that are nebulously and abstractly defined. To complicate the gatekeeping process further, counselor educators are often hesitant to engage in gatekeeping practices, as discerning developmentally appropriate CIT experiences from problematic behavior may be difficult at times (Homrich et al., 2014). Thus, more clearly defined dispositional competencies and more thorough training in counselor development models may be necessary to strengthen counselor educators' self-efficacy in gatekeeping decisions. The following section examines doctoral students in counselor education preparation programs and their involvement in gatekeeping responsibilities and practices.
Doctoral Students’ Role in Gatekeeping
Doctoral students pursuing counselor education and supervision degrees are frequently assigned the responsibility of supervisor and co-instructor of master’s-level students. Consequently, doctoral students serve in an evaluative role (Dollarhide et al., 2013; Fernando, 2013) in which they often have specific power and authority (Brown-Rice & Furr, 2015). Power and positional authority inherent in the role of supervisor (Bernard & Goodyear, 2014) and instructor permit doctoral students ample opportunity to appraise CITs’ development and professional disposition during classroom and supervision interaction (Scarborough, Bernard, & Morse, 2006). Doctoral students frequently consult with faculty through the many tasks, roles, and responsibilities they are expected to carry out (Dollarhide et al., 2013). However, relying solely on consultation during gatekeeping responsibilities rather than acquiring formal training can present considerable risks and complications. The gatekeeping process is complex and leaves room for error in following appropriate protocol, understanding CIT behavior and development, supporting CITs, and potentially endorsing CITs with problematic behavior that may have been overlooked.
Despite the importance of doctoral student education in the counseling profession and a substantial body of research on gatekeeping over the past two decades (Brown-Rice & Furr, 2013, 2015, 2016; V. A. Foster & McAdams, 2009; Gaubatz & Vera, 2002; Lumadue & Duffey, 1999; Parker et al., 2014; Rapisarda & Britton, 2007; Ziomek-Daigle & Christensen, 2010), the professional discourse has not examined the identity, development, practice, and training of doctoral students for their role as gatekeepers. No counseling literature to date has explored how counselor education programs are supporting doctoral students' transition into the role of gatekeeper, despite the latest accreditation standards calling for doctoral preparation programs to graduate students who are competent in gatekeeping functions relevant to teaching and clinical supervision (CACREP, 2015, Standard 6.B). This lack of specific literature is particularly problematic, as the process of gatekeeping can be difficult even for faculty members. It is reasonable to assume that if faculty members struggle to navigate the responsibilities of a gatekeeper, then less experienced doctoral students would struggle in this role as well. Furthermore, most incoming doctoral students have not had an opportunity to formally engage in gatekeeping practices in academic settings as an evaluator (DeDiego & Burgin, 2016).
Although doctoral students have been introduced to the concept of gatekeeping as master’s-level students (e.g., gatekeeping policies), many counselors do not retain or understand gatekeeping information (V. A. Foster & McAdams, 2009; Parker et al., 2014; Rust et al., 2013). These research findings were further examined through an exploratory study in August of 2016. The first two authors of this article assessed beginning doctoral students’ gatekeeping knowledge and self-efficacy prior to doctoral training or formal curricula. Areas of knowledge assessed included general information on the function of gatekeeping, standard practices, and program-specific policies and procedures. Preliminary findings of six participants indicated that incoming doctoral students lacked understanding of their role in gatekeeping. This supports existing research (V. A. Foster & McAdams, 2009; Parker et al., 2014; Rust et al., 2013) and aligns with DeDiego and Burgin’s (2016) assertion that doctoral students are often unsure of what the role of gatekeeper “even means, let alone how to carry it out” (p. 182). Consequently, attention must be given to preparing doctoral students for their gatekeeping role to meet CACREP standards and, most importantly, prepare them to gatekeep effectively in an effort to prevent gateslippage.
DeDiego and Burgin (2016) recommended that counselor education programs support doctoral students’ development through specific programmatic training. Despite the established importance of specific training (Brear & Dorrian, 2010), no corresponding guidelines exist regarding the content of such training. To address this gap, we provide recommendations of content areas that may assist doctoral students in becoming acquainted with the complex role of gatekeeper. We derived our recommendations from a thorough review of the professional literature, drawing on current trends related to gatekeeping within the counseling profession, findings from studies identifying what information is deemed important in the realm of gatekeeping, and educational and professional standards that guide best practices for counselor educators.
Recommendations
Our recommendations contain general areas of knowledge that should accompany program-specific material when introducing doctoral students to the gatekeeping role. Providing doctoral students with program-specific policies and procedures related to gatekeeping practices, such as remedial and dismissal procedures, is of utmost importance. This information can be disseminated through a variety of methods, such as orientation, gatekeeping-specific training, coursework, and advising. We view these areas of content as foundational in acquainting doctoral students with the role of gatekeeper. We included four general content areas of knowledge pertaining to gatekeeping practices and the role of gatekeeper: current variation of language espoused by the counselor education community; ethics related to gatekeeping; cultural considerations; and legal and due process considerations. Each of these recommended content areas will be briefly discussed with relevant literature supporting the importance of its inclusion.
Adopted Language
Current terminology in the field of counselor education describing CITs who struggle to meet professional standards and expectations is broad and lacks a universal language that has been adopted by counselor educators (Brown-Rice & Furr, 2015). Consequently, a plethora of terms and definitions exists in the literature describing CITs who are struggling to meet clinical, academic, and dispositional competencies. As described earlier, the lack of consensus regarding gatekeeping and remediation language may contribute to the lack of clarity, which many counselor educators perceive as a gatekeeping challenge. Terms appearing in gatekeeping literature that describe students of concern include: deficient trainees (Gaubatz & Vera, 2002), problems of professional competence (Elman & Forrest, 2007; Rust et al., 2013), impaired, unsuitable, unqualified, and incompetent (J. M. Foster, Leppma, & Hutchinson, 2014), with varying definitions describing these terms. Duba, Paez, and Kindsvatter (2010) defined counselor impairment as any “emotional, physical, or educational condition that interferes with the quality of one’s professional performance” (p. 155) and defined its counterpart, counselor competency, as an individual demonstrating both clinical skills and psychological health. It is important to emphasize potential complications and implications associated with the term impairment, which can have close association with disability services, rendering a much different meaning for the student, supervisee, or colleague (McAdams & Foster, 2007).
Introducing these terms to doctoral students not only familiarizes them with the definitions, history, and relevance of terms present in the counseling community, but also provides a foundation from which to begin conceptualizing the difference between clinical “impairment” and emotional distress or developmentally appropriate academic struggle. In upholding the responsibilities of gatekeeping, one must be aware of the differentiating aspects of emotional distress and impairment in order to distinguish the two in professionals and students. In further support of this assertion, Rust et al. (2013) stated that counseling programs must be able to distinguish between problems of professional competence and problematic behavior related to normal CIT development. Including a review of relevant terms from the counseling literature in a program’s training will allow doctoral students to begin to understand and contextualize the language relevant to their new roles as gatekeepers.
Although it is essential to educate doctoral students on language common to the counseling community, familiarity with language adopted by the department and institution with which they are serving as gatekeepers is vital to training well-informed gatekeepers (Brear & Dorrian, 2010). Having a clear understanding of the terminology surrounding gatekeeping ensures that doctoral students and faculty are able to have an open and consistent dialogue when enforcing gatekeeping practices. Homrich (2009) described consistent implementation of gatekeeping protocol as a best practice for counseling programs and faculty. Additional best practices include the establishment of expectations and communicating them clearly and widely. In the recommendations offered by Homrich (2009), a common language is needed within the department in order to successfully implement these practices to improve and sustain gatekeeping procedures. After doctoral students are situated in the current climate of gatekeeping-related terms and language, an exploration of professional and educational ethics can ensue.
Ethics Related to Gatekeeping
Professional and ethical mandates should be identified and discussed to familiarize doctoral students with the corresponding ethical codes that they are expected to uphold. Three sources that guide ethical behavior and educational standards for counselor educators that must be integrated in curricula and training include the American Counseling Association Code of Ethics (2014), the 2016 CACREP Standards (2015), and the National Board for Certified Counselors Code of Ethics (2012). Doctoral preparation programs should draw specific attention to codes related to the function of gatekeeping. These ethical codes and professional standards can be introduced in an orientation and discussed in more depth during advising and formal courses.
Doctoral preparation programs have flexibility in introducing standards and ethical codes during doctoral students’ academic journey. We recommend relevant standards and ethics be introduced early and mentioned often during doctoral training, specifically in terms of gatekeeping. Doctoral students should have prior knowledge of the ethical codes before engaging in gatekeeping or remedial functions with CITs. Moreover, if doctoral students have an understanding of the educational standards that are required of them, they can strive to meet specific standards in a personalized, meaningful manner during their training. Referencing CACREP standards addressed in a course syllabus is required for accreditation and helpful for students; yet, educational standards should be incorporated in training to foster deeper meaning and applicability of standards. As doctoral students are being trained to take leadership positions in the counselor education field, a more thorough understanding of educational principles and ethical codes is vital, particularly in the area of gatekeeping. Faculty members leading doctoral courses are encouraged to speak to standards related to gatekeeping throughout the duration of a course. Faculty intentionally dialoguing about how these standards are being met may allow doctoral students to provide informal feedback as to whether they believe they understand the multifaceted role of gatekeeper. During the review of codes and standards, focused attention should be given to “cultural and developmental sensitivity in interpreting and applying codes and standards” (Letourneau, 2016, p. 207) in gatekeeping-related situations. One option for attending to such sensitivity is the introduction of a case study in which doctoral students participate in open dialogue facilitated by a trainer. The inclusion of a case study aims to engage doctoral students in critical thinking surrounding cultural and diversity implications for gatekeeping practices.
The following section will draw further attention to the importance of cultural awareness in gatekeeping practices and responsibilities.
Cultural Considerations
It is vital for doctoral students to have an understanding and awareness of the cultural sensitivity that is required of them in making sound gatekeeping-related decisions. Not only do ethical codes and educational mandates expect counselor educators to possess a level of multicultural competency (American Counseling Association, 2014; CACREP, 2015), but recent literature draws attention to cultural considerations in the gatekeeping process (Goodrich & Shin, 2013; Letourneau, 2016). These cultural considerations provide doctoral students with valuable information on conceptualizing and interacting with gatekeeping practices in a more culturally sensitive manner.
Letourneau (2016) described the critical nature of taking into account students’ cultural influences and differences when assessing their fitness for the profession, while Goodrich and Shin (2013) called attention to “how cultural values and norms may intersect” (p. 43) with appraisal of CIT counseling competencies. For example, when assessing a CIT’s behavior or performance to determine whether it may be defined as problematic, evaluators may have difficulty establishing whether the identified behavior is truly problematic or merely deviates from the cultural norm (Letourneau, 2016). This consideration is essential, as culture, diversity, and differing values and beliefs can influence how perceived problematic behaviors emerge and, consequently, how observed deficiencies in performance are viewed (Goodrich & Shin, 2013; Letourneau, 2016). Examining the cultural values of the counseling profession, counselor education programs, and the community in which the program is embedded can shed light on what behaviors, attitudes, and beliefs are valued and considered norms. This examination can prompt critical awareness of how CITs differing from cultural norms may be assessed and evaluated differently, and even unfairly.
Jacobs et al. (2011) described insufficient support for evaluators in how to facilitate difficult discussions in gatekeeping-related issues, specifically when the issues included attention to diversity components. Doctoral students must be given ample opportunity to identify cultural facets of case examples and talk through their course of action as a means to raise awareness and practice looking through a multicultural lens in gatekeeping-related decisions and processes. Of equal importance is familiarity with legal and due process considerations, which are addressed in the section below.
Legal and Due Process Considerations
Three governing regulations that are often discussed in the literature, yet rarely explained in terms of how faculty members actually apply them, include the Family Educational Rights and Privacy Act (FERPA) of 1974, the Americans with Disabilities Act (ADA) of 1990, and a student’s rights and due process policy within an institution. Presenting these three concepts and their implications for the gatekeeping process is warranted, as doctoral students can be assumed to understand these concepts only from a student perspective. Although FERPA, the ADA, and due process may be covered in new faculty orientation, how these regulations interface with gatekeeping and remediation is generally not reviewed during standard university orientations. It is recommended that training and curricula include general knowledge and institution-specific information related to the regulations. Institution-specific material can include university notification of rights; handbook material directly addressing student rights; remediation policy and procedures; and resources and the specific location of campus services such as the disability office. Inclusion of general and program-specific information will help future faculty members develop a well-grounded understanding of how legal considerations apply to students and inform their gatekeeping practices. Lastly, doctoral students should be informed that the regulations detailed below may limit their access to information because of master’s-level student privacy protections. To begin, doctoral students should intimately understand FERPA and its application to the CITs they often supervise, teach, and evaluate.
FERPA. General information may consist of the history and evolution of FERPA in higher education and its purpose in protecting students’ confidentiality in relation to educational records. Doctoral students must be introduced to the protocol for ensuring confidentiality in program files. Program files include communication about CIT performance and may be directly related to gatekeeping issues. Doctoral students must recognize that, as evaluators communicating CIT assessment of fitness, including dispositional competencies, they must abide by FERPA regulations, because dispositional competencies are considered educational records.
Educational programs often utilize off-site practicum and internship programs that are independent from the respective university (Gilfoyle, 2008), and this is indeed the case with many CACREP-accredited counselor training programs. Doctoral students must have an understanding of the protocols in place to communicate with site supervisors who are unaffiliated with the university, such as student written-consent forms that are a routine part of paperwork for off-site training placement (Gilfoyle, 2008). Although doctoral students may not be directly corresponding with off-site evaluators, their training should consist of familiarizing them with FERPA regulations that address the disclosure of student records in order to prepare them in serving CITs in a faculty capacity. Understanding how to communicate with entities outside of the university is crucial in the event that they are acting as university supervisors and correspondence is necessary for gatekeeping-related concerns. An additional governmental regulation they are expected to be familiar and interact with is the ADA.
The ADA. Introducing doctoral students to the ADA serves multiple functions. First, similar to FERPA, it would be helpful for doctoral students to be grounded in the history of how the ADA developed and its purpose in protecting students’ rights concerning discrimination. Second, general disability service information, such as physical location on their respective campus, contact information for disability representatives, and protocols for referring a student, provides doctoral students the necessary knowledge in the event that a CIT would inquire about accommodations. If a CIT were to inquire about ADA services during a class in which a doctoral student co-teaches or during a supervision session, it would be appropriate for the doctoral student to disseminate information rather than keeping the CIT waiting until after consultation with a faculty member. Lacking general information relevant to student services may place the doctoral student in a vulnerable position in which the supervisory alliance is undermined, as the doctoral student serving in an evaluative role is not equipped with the information or knowledge to assist the CIT. Finally, presentation of the ADA and its implications for gatekeeping will inform students of the protocols that are necessary when evaluating a CIT who has a record of impairment. For example, if a CIT has registered a disability through the university’s ADA office, appropriate accommodations must be made and their disability must be considered during the gatekeeping process.
Due Process. The introduction of students’ fundamental right to basic fairness is essential, as many doctoral students may not understand this concept outside of a student perspective because of a lack of experience in instructor and supervisor positions. Examples of such basic fairness can be illustrated for doctoral students by highlighting various components in a counselor training program that should be in place to honor students’ right to fair procedures and protect against arbitrary decision-making. These include but are not limited to access to program requirements, expectations, policies, and practices; opportunity to respond and be heard at a meaningful time and in a meaningful way; decisions by faculty members, advisors, or programs to be supported by substantial evidence; option to appeal a decision and to be notified of judicial proceedings; and realistic time to complete remediation (Gilfoyle, 2008; Homrich, 2009). McAdams and Foster (2007) developed a framework to address CIT due process and fundamental fairness considerations in remediation procedures to help guide counselor educators’ implementation of remediation. It is recommended that these guidelines (McAdams & Foster, 2007) be introduced in doctoral student training to generate discussion and included as a resource for future reference. In educating doctoral students about considerations of due process through a faculty lens, formal procedures to address student complaints, concerns, and appeals also should be included in training.
Implications for Counselor Education
Doctoral preparation programs are charged with graduating students who will be prepared and competent for the various roles they will assume as counselor educators and clinical supervisors. The lack of professional literature exploring the development and training of gatekeepers indicates a clear call to the counseling profession to investigate the emergence of counselor educators into their role as gatekeepers. This call is fueled by the need to understand how doctoral preparation programs can support students and ensure competency upon graduation. Generating dialogue related to doctoral student gatekeeper development may consequently continue the conversation of standardization in gatekeeping protocol. Accordingly, this sustained dialogue also would keep the need for more universal gatekeeping nomenclature at the forefront. Continued emphasis on a common gatekeeping language will only strengthen gatekeeping protocol and practices and in turn provide an opportunity for training developments that have the potential to be standardized across programs.
The recommended content areas we have offered are intended to prepare doctoral students for their role of gatekeeper and to ease the transition into faculty positions. These recommendations may be limited in their generalizability because gatekeeping practices vary across programs and department cultures, indicating that information and trainings will need to be tailored individually to fit the expectations of each counseling department. These differences hinder the ability to create a standardized training that could be utilized by all departments. As gatekeeping practices continue to receive research attention and the call for more universal language and standardization is answered, standardization of training can be revisited. Nonetheless, general recommendations in training content can serve as groundwork for programs to ensure that students are receiving a foundation of basic knowledge that will allow doctoral students to feel more confident in their role of gatekeeper. The recommended content areas also serve to help incoming doctoral students begin to conceptualize their work through an academic, rather than solely clinical, lens.
Implementation and delivery of recommended content areas may be applied in a flexible manner that meets doctoral preparation programs’ specific needs. The recommendations offered in this article can be applied to enhance existing curricula, infused throughout coursework, or disseminated in a gatekeeping training or general orientation. Faculty creating doctoral curricula should be cognizant of when doctoral students are receiving foundational gatekeeping information. If doctoral students are expected to have interaction with and evaluative power over master’s-level students, recommended gatekeeping content areas should be introduced prior to this interaction.
There are several avenues for future research, as the proposed recommendations for content areas are rich in potential for future scholarly pursuit. The first is the call to the profession for investigations examining training efforts and their effectiveness in preparing future faculty members for the multifaceted role of gatekeeper. The complexity and importance of gatekeeping responsibilities and identity development may be a possible reason for the lack of studies to date on this role. Nevertheless, both qualitative and quantitative inquiry could lend insight into gaps in training that lead to potential gateslippage. Quantitative research would be helpful in examining how many programs are currently utilizing trainings and the content of such trainings. In consideration of the number of CACREP-accredited doctoral programs within the United States, a large sample size is feasible to explore trends and capture a full picture. Conducting qualitative analysis would expand and deepen the understanding of how faculty and doctoral students have been trained and their processes and experiences in becoming gatekeepers.
In conclusion, doctoral preparation programs should be intentional about infusing the aforementioned recommended content areas into doctoral curricula to meet CACREP standards and prepare doctoral students for the complex role of gatekeeper. Counselor education and supervision literature indicates that more focused attention on training could be beneficial in improving gatekeeping knowledge for doctoral students. Training recommendations derived from existing literature can be utilized as guidelines to enhance program curricula and be investigated in future research endeavors. With a scarcity of empirical studies examining gatekeeping training and gatekeeper development, both quantitative and qualitative studies would be beneficial to better understand the role of gatekeeper and strengthen the overall professional identity of counselor educators and clinical supervisors.
Conflict of Interest and Funding Disclosure
The authors reported no conflict of interest or funding contributions for the development of this manuscript.
References
American Counseling Association. (2014). ACA code of ethics. Alexandria, VA: Author.
Baltrinic, E. R., Jencius, M., & McGlothlin, J. (2016). Coteaching in counselor education: Preparing doctoral students for future teaching. Counselor Education and Supervision, 55, 31–45. doi:10.1002/ceas.12031
Bernard, J. M., & Goodyear, R. K. (2014). Fundamentals of clinical supervision (5th ed.). Boston, MA: Pearson.
Brear, P., & Dorrian, J. (2010). Gatekeeping or gate slippage? A national survey of counseling educators in Australian undergraduate and postgraduate academic training programs. Training and Education in Professional Psychology, 4, 264–273. doi:10.1037/a0020714
Brear, P., Dorrian, J., & Luscri, G. (2008). Preparing our future counseling professionals: Gatekeeping and the implications for research. Counselling and Psychotherapy Research, 8, 93–101. doi:10.1080/14733140802007855
Brown-Rice, K. A., & Furr, S. (2013). Preservice counselors’ knowledge of classmates’ problems of professional competency. Journal of Counseling & Development, 91, 224–233. doi:10.1002/j.1556-6676.2013.00089.x
Brown-Rice, K., & Furr, S. (2014). Lifting the empathy veil: Engaging in competent gatekeeping. In Ideas and research you can use: VISTAS 2012. Retrieved from https://www.counseling.org/docs/default-source/vistas/article_11.pdf?sfvrsn=12
Brown-Rice, K., & Furr, S. (2015). Gatekeeping ourselves: Counselor educators’ knowledge of colleagues’ problematic behaviors. Counselor Education and Supervision, 54, 176–188. doi:10.1002/ceas.12012
Brown-Rice, K., & Furr, S. (2016). Counselor educators and students with problems of professional competence: A survey and discussion. The Professional Counselor, 6, 134–146. doi:10.15241/kbr.6.2.134
Burkholder, D., Hall, S. F., & Burkholder, J. (2014). Ward v. Wilbanks: Counselor educators respond. Counselor Education and Supervision, 53, 267–283.
Council for Accreditation of Counseling & Related Educational Programs. (2015). 2016 CACREP Standards. Retrieved from https://www.cacrep.org/for-programs/2016-cacrep-standards/
DeDiego, A. C., & Burgin, E. C. (2016). The doctoral student as university supervisor: Challenges in fulfilling the gatekeeping role. Journal of Counselor Leadership and Advocacy, 3, 173–183. doi:10.1080/2326716X.2016.1187096
Dollarhide, C. T., Gibson, D. M., & Moss, J. M. (2013). Professional identity development of counselor education doctoral students. Counselor Education and Supervision, 52, 137–150. doi:10.1002/j.1556-6978.2013.00034.x
Duba, J. D., Paez, S. B., & Kindsvatter, A. (2010). Criteria of nonacademic characteristics used to evaluate and retain community counseling students. Journal of Counseling & Development, 88(2), 154–162. doi:10.1002/j.1556-6678.2010.tb00004.x
Elman, N. S., & Forrest, L. (2007). From trainee impairment to professional competence problems: Seeking new terminology that facilitates effective action. Professional Psychology: Research and Practice, 38, 501–509.
Fernando, D. M. (2013). Supervision by doctoral students: A study of supervisee satisfaction and self-efficacy, and comparison with faculty supervision outcomes. The Clinical Supervisor, 32, 1–14. doi:10.1080/07325223.2013.778673
Foster, J. M., Leppma, M., & Hutchinson, T. S. (2014). Students’ perspectives on gatekeeping in counselor education: A case study. Counselor Education and Supervision, 53, 190–203. doi:10.1002/j.1556-6978.2014.00057.x
Foster, V. A., & McAdams, C. R., III. (2009). A framework for creating a climate of transparency for professional performance assessment: Fostering student investment in gatekeeping. Counselor Education and Supervision, 48, 271–284. doi:10.1002/j.1556-6978.2009.tb00080.x
Gaubatz, M. D., & Vera, E. M. (2002). Do formalized gatekeeping procedures increase programs’ follow-up with deficient trainees? Counselor Education and Supervision, 41, 294–305. doi:10.1002/j.1556-6978.2002.tb01292.x
Gilfoyle, N. (2008). The legal exosystem: Risk management in addressing student competence problems in professional psychology training. Training and Education in Professional Psychology, 2, 202–209. doi:10.1037/1931-3918.2.4.202
Gizara, S. S., & Forrest, L. (2004). Supervisors’ experiences of trainee impairment and incompetence at APA-accredited internship sites. Professional Psychology: Research and Practice, 35, 131–140. doi:10.1037/0735-7028.35.2.131
Goodrich, K. M., & Shin, R. Q. (2013). A culturally responsive intervention for addressing problematic behaviors in counseling students. Counselor Education and Supervision, 52, 43–55. doi:10.1002/j.1556-6978.2013.00027.x
Homrich, A. M. (2009). Gatekeeping for personal and professional competence in graduate counseling programs. Counseling and Human Development, 41(7), 1–23.
Homrich, A. M., DeLorenzi, L. D., Bloom, Z. D., & Godbee, B. (2014). Making the case for standards of conduct in clinical training. Counselor Education and Supervision, 53, 126–144. doi:10.1002/j.1556-6978.2014.00053.x
Jacobs, S. C., Huprich, S. K., Grus, C. L., Cage, E. A., Elman, N. S., Forrest, L., . . . Kaslow, N. J. (2011). Trainees with professional competency problems: Preparing trainers for difficult but necessary conversations. Training and Education in Professional Psychology, 5(3), 175–184. doi:10.1037/a0024656
Kerl, S., & Eichler, M. (2005). The loss of innocence: Emotional costs to serving as gatekeepers to the counseling profession. Journal of Creativity in Mental Health, 1(3–4), 71–88. doi:10.1300/J456v01n03_05
Lambie, G. W., & Vaccaro, N. (2011). Doctoral counselor education students’ levels of research self-efficacy, perceptions of the research training environment, and interest in research. Counselor Education and Supervision, 50, 243–258. doi:10.1002/j.1556-6978.2011.tb00122.x
Letourneau, J. L. H. (2016). A decision-making model for addressing problematic behaviors in counseling students. Counseling and Values, 61, 206–222. doi:10.1002/cvj.12038
Lumadue, C. A., & Duffey, T. H. (1999). The role of graduate programs as gatekeepers: A model of evaluating student counselor competence. Counselor Education and Supervision, 39, 101–109. doi:10.1002/j.1556-6978.1999.tb01221.x
McAdams, C. R., III, & Foster, V. A. (2007). A guide to just and fair remediation of counseling students with professional performance deficiencies. Counselor Education and Supervision, 47, 2–13. doi:10.1002/j.1556-6978.2007.tb00034.x
National Board for Certified Counselors. (2012). Code of ethics. Greensboro, NC: Author.
Nelson, K. W., Oliver, M., & Capps, F. (2006). Becoming a supervisor: Doctoral student perceptions of the training experience. Counselor Education and Supervision, 46, 17–31. doi:10.1002/j.1556-6978.2006.tb00009.x
Parker, L. K., Chang, C. Y., Corthell, K. K., Walsh, M. E., Brack, G., & Grubbs, N. K. (2014). A grounded theory of counseling students who report problematic peers. Counselor Education and Supervision, 53, 111–125. doi:10.1002/j.1556-6978.2014.00052.x
Rapisarda, C. A., & Britton, P. J. (2007). Sanctioned supervision: Voices from the experts. Journal of Mental Health Counseling, 29, 81–92. doi:10.17744/mehc.29.1.6tcdb7yga7becwmf
Rust, J. P., Raskin, J. D., & Hill, M. S. (2013). Problems of professional competence among counselor trainees: Programmatic issues and guidelines. Counselor Education and Supervision, 52, 30–42. doi:10.1002/j.1556-6978.2013.00026.x
Scarborough, J. L., Bernard, J. M., & Morse, R. E. (2006). Boundary considerations between doctoral students and master’s students. Counseling and Values, 51, 53–65. doi:10.1002/j.2161-007X.2006.tb00065.x
Ziomek-Daigle, J., & Christensen, T. M. (2010). An emergent theory of gatekeeping practices in counselor education. Journal of Counseling & Development, 88, 407–415. doi:10.1002/j.1556-6678.2010.tb00040.x
Marisa C. Rapp, NCC, is an assistant professor at the University of Wisconsin–Parkside. Steven J. Moody, NCC, is an associate professor at Idaho State University. Leslie A. Stewart is an assistant professor at Idaho State University. Correspondence can be addressed to Marisa Rapp, 264 Molinaro Hall, 900 Wood Rd., Kenosha, WI 53144-2000, rapp@uwp.edu.
Mar 30, 2018 | Volume 8 - Issue 1
Maribeth F. Jorgensen, William E. Schweinle
The 68-item Research Identity Scale (RIS) was informed through qualitative exploration of research identity development in master’s-level counseling students and practitioners. Classical psychometric analyses revealed the items had strong validity and reliability and a single factor. A one-parameter Rasch analysis and item review was used to reduce the RIS to 21 items. The RIS offers counselor education programs the opportunity to promote and quantitatively assess research-related learning in counseling students.
Keywords: Research Identity Scale, research identity, research identity development, counselor education, counseling students
With increased accountability and training standards, professionals as well as professional training programs have to provide outcomes data (Gladding & Newsome, 2010). Traditionally, programs have assessed student learning through outcomes measures such as grade point averages, comprehensive exam scores, and state or national licensure exam scores. Because of the goals of various learning processes, it may be important to consider how to measure learning in different ways (e.g., change in behavior, attitude, identity) and specific to the various dimensions of professional counselor identity (e.g., researcher, advocate, supervisor, consultant). Previous research has focused on understanding how measures of research self-efficacy (Phillips & Russell, 1994) and research interest (Kahn & Scott, 1997) allow for an objective assessment of research-related learning in psychology and social work programs. The present research adds to previous literature by offering information about the development and applications of the Research Identity Scale (RIS), which may provide counseling programs with another approach to measure student learning.
Student Learning Outcomes
When deciding how to measure the outcomes of student learning, it is important that programs start with defining the student learning they want to take place (Warden & Benshoff, 2012). Student learning outcomes focus on intellectual and emotional growth in students as a result of what takes place during their training program (Hernon & Dugan, 2004). Student learning outcomes are often guided by the accreditation standards of a particular professional field. Within the field of counselor education, the Council for Accreditation of Counseling & Related Educational Programs (CACREP) is the accrediting agency. CACREP promotes quality training by defining learning standards and requiring programs to provide evidence of their effectiveness in meeting those standards. In relation to research, the 2016 CACREP standards require research to be a part of professional counselor identity development at both the entry level (e.g., master’s level) and doctoral level. The CACREP research standards emphasize the need for counselors-in-training to learn the following:
The importance of research in advancing the counseling profession, including how to critique research to inform counseling practice; identification of evidence-based counseling practices; needs assessments; development of outcome measures for counseling programs; evaluation of counseling interventions and programs; qualitative, quantitative, and mixed research methods; designs in research and program evaluation; statistical methods used in conducting research and program evaluation; analysis and use of data in counseling; ethically and culturally relevant strategies for conducting, interpreting, and reporting results of research and/or program evaluation. (CACREP, 2016, p. 14)
These CACREP standards not only suggest that counselor development needs to include curriculum that focuses on and integrates research, but also identify a possible need to have measurement tools that specifically assess research-related learning (growth).
Research Learning Outcomes Measures
The Self-Efficacy in Research Measure (SERM) was designed by Phillips and Russell (1994) to measure research self-efficacy, which is similar to the construct of research identity. The SERM is a 33-item scale with four subscales: practical research skills, quantitative and computer skills, research design skills, and writing skills. This scale is internally consistent (α = .96) and scores highly correlate with other components such as research training environment and research productivity. The SERM has been adapted for assessment in psychology (Kahn & Scott, 1997) and social work programs (Holden, Barker, Meenaghan, & Rosenberg, 1999).
Similarly, the Research Self-Efficacy Scale (RSES) developed by Holden and colleagues (1999) uses aspects of the SERM (Phillips & Russell, 1994), but includes only nine items to measure changes in research self-efficacy as an outcome of research curriculum in a social work program. The scale has excellent internal consistency (α = .94) and differences between pre- and post-tests were shown to be statistically significant. Investigators have noticed the value of this scale and have applied it to measure the effectiveness of research courses in social work training programs (Unrau & Beck, 2004; Unrau & Grinnell, 2005).
Unrau and Beck (2004) reported that social work students gained confidence in research when they received courses on research methodology. Students gained most from activities outside their research courses, such as participating in research with faculty members. Following up, Unrau and Grinnell (2005) administered the scale prior to the start of the semester and at the end of the semester to measure change in social work students’ confidence in doing research tasks. Overall, social work students varied greatly in their confidence before taking research courses and made gains throughout the semester. Unrau and Grinnell stressed their results demonstrate the need for the use of pre- and post-tests to better gauge the way curriculum impacts how students experience research.
Previous literature supports the use of scales such as the SERM and RSES to measure the effectiveness of research-related curricula (Holden et al., 1999; Kahn & Scott, 1997; Unrau & Beck, 2004; Unrau & Grinnell, 2005). These findings also suggest the need to continue exploring the research dimension of professional identity. It seems particularly important to measure concepts such as research self-efficacy, research interest, and research productivity, all of which are a part of research identity (Jorgensen & Duncan, 2015a, 2015b).
Research Identity as a Learning Outcome
The concept of research identity (RI) has received minimal attention (Jorgensen & Duncan, 2015a, 2015b; Reisetter et al., 2004). Reisetter and colleagues (2004) described RI as a mental and emotional connection with research. Jorgensen and Duncan (2015a) described RI as the magnitude and quality of one’s relationship with research; the allocation of research within a broader professional identity; and a developmental process that occurs in stages. Scholars have focused on qualitatively exploring the construct of RI, which may give guidance around how to facilitate and examine RI at the program level (Jorgensen & Duncan, 2015a, 2015b; Reisetter et al., 2004). Also, the 2016 CACREP standards include language (e.g., knowledge of evidence-based practices, analysis and use of data in counseling) that favors curriculum that would promote RI. Although previous researchers have given the field prior knowledge of RI (Jorgensen & Duncan, 2015a, 2015b; Reisetter et al., 2004), there has been no focus on further exploring RI in a quantitative way and in the context of being a possible measure of student learning. The first author developed the RIS with the aim of assessing RI through a quantitative lens and augmenting traditional learning outcomes measures such as grades, grade point averages, and standardized test scores. There were three purposes for the current study: (a) to develop the RIS; (b) to examine the psychometric properties of the RIS from a classical testing approach; and (c) to refine the items through further analysis based on item response theory (Nunnally & Bernstein, 1994). Two research questions guided this study: (a) What are the psychometric properties of the RIS from a classical testing approach? and (b) What items remain after the application of an item response analysis?
Method
Participants
The participants consisted of a convenience sample of 170 undergraduate college students at a Pacific Northwest university. Sampling undergraduate students is a common practice when initially testing scale psychometric properties and employing item response analysis (Embretson & Reise, 2000; Heppner, Wampold, Owen, Thompson, & Wang, 2016). The mean age of the sample was 23.1 years (SD = 6.16) with 49 males (29%), 118 females (69%), and 3 (2%) who did not report gender. The racial identity composition of the participants was mostly homogeneous: 112 identified as White (not Hispanic); one identified as American Indian or Alaska Native; 10 identified as Asian; three identified as Black or African American; eight identified as multiracial; 21 identified as Hispanic; three identified as “other”; and seven preferred not to answer.
Instruments
There were three instruments used in this study: a demographic questionnaire, the RSES, and the RIS.
Demographics questionnaire. Participants were asked to complete a demographic sheet that included five questions about age, gender, major, race, and current level of education; these identifiers did not pose risk to confidentiality of the participants. All information was stored on the Qualtrics database, which was password protected and only accessible by the primary investigator.
The RSES. The RSES was developed by Holden et al. (1999) to measure effectiveness of research education in social work training programs. The RSES has nine items that assess respondents’ level of confidence with various research activities. The items are answered on a 0–100 scale with 0 indicating cannot do at all, 50 indicating moderately certain I can do, and 100 indicating certainly can do. The internal consistency of the scale is .94 at both pre- and post-measures. Holden and colleagues reported using an effect size estimate to assess construct validity but did not report these estimates, so there should be caution when assuming this form of validity.
RIS. The initial phase of this research involved the first author developing the 68 items on the RIS (contact first author for access) based on data from her qualitative work about research identity (Jorgensen & Duncan, 2015a). The themes from her qualitative research informed the development of items on the scale (Jorgensen & Duncan, 2015a). Rowan and Wulff (2007) have suggested that using qualitative methods to inform scale development is appropriate, sufficient, and promotes high quality instrument construction.
The first step in developing the RIS items involved the first author analyzing the themes that surfaced during interviews with participants in her qualitative work. This process helped inform the items that could be used to quantitatively measure RI. For example, one theme was Internal Facilitators. Jorgensen and Duncan (2015a) reported that, “participants explained the code of internal facilitators as self-motivation, time management, research self-efficacy, innate traits and thinking styles, interest, curiosity, enjoyment in the research process, willingness to take risks, being open-minded, and future goals” (p. 24). Examples of scale items operationalized from the theme Internal Facilitators include: 1) I am internally motivated to be involved with research on some level; 2) I am willing to take risks around research; 3) Research will help me meet future goals; and 4) I am a reflective thinker. The first author used that same process when operationalizing each of the qualitative themes into items on the RIS. There were eight themes of RI development (Jorgensen & Duncan, 2015a). Overall, the number of items per theme was proportionate to the strength of the theme, as determined by how often it was coded in the qualitative data. After the scale was developed, the second author reviewed the scale items and cross-checked items with the themes and subthemes from the qualitative studies to evaluate face validity (Nunnally & Bernstein, 1994).
The items on the RIS are short with easily understandable terms in order to avoid misunderstanding and reduce perceived cost of responding (Dillman, Smyth, & Christian, 2009). According to the Flesch Reading Ease calculator, the reading level of the scale is 7th grade (Readability Test Tool, n.d.). The format of answers to each item is forced choice. According to Dillman et al. (2009), a forced-choice format “lets the respondent focus memory and cognitive processing efforts on one option at a time” (p. 130). Individuals completing the scale are asked to read each question or phrase and respond either yes or no. To score the scale, a yes would be scored as one and a no would be scored as zero. Eighteen items are reverse-scored (item numbers 11, 23, 28, 32, 39, 41, 42, 43, 45, 48, 51, 53, 54, 58, 59, 60, 61, 62), meaning that with those 18 questions an answer of no would be scored as a one and an answer of yes would be scored as a zero. Using a classical scoring method (Heppner et al., 2016), scores for the RIS are determined by adding up the number of positive responses. Higher scores indicate a stronger RI overall.
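The scoring rule described above (yes = 1, no = 0, with the 18 reverse-scored items flipped before tallying) can be sketched as follows. This is a hypothetical illustration of the rule, not code distributed with the RIS:

```python
# Sketch of the RIS scoring rule: "yes" = 1, "no" = 0,
# with reverse-scored items flipped before summing.

# Reverse-scored item numbers as listed in the text.
REVERSE_SCORED = {11, 23, 28, 32, 39, 41, 42, 43, 45, 48,
                  51, 53, 54, 58, 59, 60, 61, 62}

def score_ris(responses):
    """responses: dict mapping item number (1-68) to 'yes' or 'no'.
    Returns the total score; higher scores indicate a stronger RI."""
    total = 0
    for item, answer in responses.items():
        value = 1 if answer == "yes" else 0
        if item in REVERSE_SCORED:
            value = 1 - value  # reverse-scored: "no" earns the point
        total += value
    return total
```

For example, answering yes to a positively worded item and no to reverse-scored item 11 would contribute two points, whereas the opposite pattern would contribute none.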
Procedure
Upon Institutional Review Board approval, the study instruments were uploaded onto the primary investigator’s Qualtrics account. At that time, information about the study was uploaded onto the university psychology department’s human subject research system (SONA Systems). Once registered on the SONA system, participants were linked to the instruments used for this study through Qualtrics. All participants were asked to read an informational page that briefly described the nature and purpose of the study, and were told that by continuing they were agreeing to participate in the study and could discontinue at any time. Participants consented by selecting “continue” and completed the questionnaire and instruments. After completion, participants were directed to a post-study information page on which they were thanked and provided contact information about the study and the opportunity to schedule a meeting to discuss research findings at the conclusion of the study. No identifying information was gathered from participants. All information was stored on the Qualtrics database.
Results
All analyses were conducted in SAS 9.4 (SAS Institute, 2012). The researchers first used classical methods (e.g., KR20 and principal factor analysis) to examine the psychometric properties of the RIS. Based on the results of the factor analysis, the researchers used results from a one-parameter Rasch analysis to reduce the number of items on the RIS.
Classical Testing
Homogeneity was explored by computing Kuder-Richardson 20 (KR20) alphas. Across all 68 items the internal consistency was strong (.92). Concurrent validity (i.e., construct validity) was examined by looking at correlations between the RIS and the RSES. The overall correlation between the RIS and the RSES was .66 (p < .001).
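For dichotomous items, KR20 can be computed directly from the response matrix. The sketch below uses a small synthetic data set, not the study’s data:

```python
# KR-20 (Kuder-Richardson Formula 20) for dichotomous (0/1) items.
# Synthetic illustration only; not the study's response data.

def kr20(responses):
    """responses: list of rows, one per respondent;
    each row is a list of 0/1 item scores."""
    n = len(responses)
    k = len(responses[0])
    totals = [sum(row) for row in responses]
    mean_total = sum(totals) / n
    # Population variance of total scores (the p*q item variances below
    # use the same convention, so the ratio is unaffected by the choice).
    var_total = sum((t - mean_total) ** 2 for t in totals) / n
    # Sum of item variances p*(1-p), where p = proportion scoring 1.
    sum_pq = sum(
        (p := sum(row[j] for row in responses) / n) * (1 - p)
        for j in range(k)
    )
    return (k / (k - 1)) * (1 - sum_pq / var_total)

data = [
    [1, 1, 1, 1],
    [1, 1, 1, 0],
    [1, 0, 0, 0],
    [0, 0, 0, 0],
    [1, 1, 0, 0],
]
print(round(kr20(data), 3))  # 0.8
```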
Item Response Analysis
Item response theory brought about a new perspective on scale development (Embretson & Reise, 2000) in that it promoted scale refinement even at the initial stages of testing. Item response theory allows for shorter tests that can actually be more reliable when items are well-composed (Embretson & Reise, 2000). The RIS initially included 68 items. Through Rasch analyses, the scale was reduced to 21 items (items numbered 3, 4, 9, 10, 12, 13, 16, 18, 19, 24, 26, 34, 39, 41, 42, 43, 44, 46, 47, 49, 61).
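In the one-parameter Rasch model underlying this reduction, the probability of endorsing an item depends only on the distance between the person’s location on the latent trait (theta) and the item’s location. A minimal sketch of that model (an illustration, not the SAS analysis used in the study):

```python
import math

def rasch_probability(theta, b):
    """One-parameter Rasch model: P(X = 1) = 1 / (1 + exp(-(theta - b))).
    theta: person location on the latent trait; b: item location."""
    return 1.0 / (1.0 + math.exp(-(theta - b)))

# When person and item locations coincide, endorsement probability is 0.5;
# items with strongly negative locations are endorsed by most respondents.
print(rasch_probability(0.0, 0.0))          # 0.5
print(rasch_probability(0.0, -2.41) > 0.9)  # True
```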
The final 21 items were selected for their dispersion across location on theta in order to widely capture the constructs. The polychoric correlation matrix for the 21 items was then subjected to a principal components analysis yielding an initial eigenvalue of 11.72. The next eigenvalue was 1.97, which clearly identified the crook of the elbow. Further, Cronbach’s alpha for these 21 items was .90. Taken together, these results suggest that the 21-item RIS measures a single factor.
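The eigenvalue pattern described above (one dominant eigenvalue followed by a sharp drop) is the usual evidence for a single factor. The sketch below illustrates the idea on a small artificial correlation matrix, not the study’s polychoric matrix:

```python
import numpy as np

# Artificial 3x3 correlation matrix with uniform inter-item correlation r = 0.5.
# For such a matrix the eigenvalues are 1 + (k-1)*r = 2.0 and 1 - r = 0.5 (twice),
# so a single dominant eigenvalue signals one underlying factor.
r = 0.5
corr = np.full((3, 3), r)
np.fill_diagonal(corr, 1.0)

# eigvalsh is appropriate for symmetric matrices; sort descending.
eigenvalues = np.sort(np.linalg.eigvalsh(corr))[::-1]
# largest eigenvalue is 2.0; the remaining two are 0.5
```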
This conclusion was further tested by fitting the items to a two-parameter Rasch model (AIC = 3183.1). Slopes were constrained to unity (1.95), and item location estimates are presented in Table 1. Bayesian a posteriori scores also were estimated and strongly correlated with classical scores (i.e., tallies of the number of positive responses [r = .95, p < .0001]).
Discussion
This scale represents a move from subjective to a more objective assessment of RI. In the future, the scale may be used with other student and non-student populations to better establish its psychometric properties, generalizability, and refinement. Although this study sampled undergraduate students, this scale may be well-suited to use with counseling graduate students and practitioners because items were developed based on a qualitative study with master’s-level counseling students and practicing counselors (Jorgensen & Duncan, 2015a).
Additionally, this scale offers another method for assessing student learning and changes that take place for both students and professionals. As indicated by Holden et al. (1999), it is important to assess learning in multiple ways. Traditional methods may have focused on measuring outcomes that reflect a performance-based, rather than a mastery-based, learning orientation. Performance-based learning has been defined as wanting to learn in order to receive external validation such as a grade (Bruning, Schraw, Norby, & Ronning, 2004). Mastery learning has been defined as wanting to learn for personal benefit and with the goal of applying information to reach a more developed personal and professional identity (Bruning et al., 2004).
Based on what is known about mastery learning (Bruning et al., 2004), students with this type of learning orientation experience identity changes that may be best captured through assessing changes in thoughts, attitudes, and beliefs. The RIS was designed to measure constructs that capture internal changes that may be reflective of a mastery learning orientation. A learner who is performance-oriented may earn an A in a research course but show a lower score on the RIS. The opposite also may be true in that a learner may earn a C in a research course but show higher scores on the RIS. Through the process of combining traditional assessment methods such as grades with the RIS, programs may get a more comprehensive understanding of the effectiveness and impact of their research-related curriculum.
Table 1. Item Location Estimates

| RIS Item | Location | RIS Item | Location | RIS Item | Location |
|----------|----------|----------|----------|----------|----------|
| Item 3   | -2.41    | Item 9   | -1.10    | Item 34  | -1.27    |
| Item 4   | -1.80    | Item 12  | .42      | Item 41  | -.76     |
| Item 10  | -3.16    | Item 18  | -2.24    | Item 43  | -1.47    |
| Item 13  | -.86     | Item 26  | -2.20    | Item 46  | -2.03    |
| Item 16  | -.94     | Item 39  | .20      | Item 47  | -2.84    |
| Item 19  | -3.08    | Item 42  | -1.28    | Item 49  | 1.22     |
| Item 24  | -2.86    | Item 44  | -.76     | Item 61  | -.44     |
Limitations and Areas for Future Research
The sample size and composition were sufficient for the purposes of initial development, classical testing, and item response analysis (Heppner, Wampold, Owen, Thompson, & Wang, 2015); however, we still suggest caution when applying the results of this study to other populations. Participants’ endorsements may not reflect the answers of populations in other areas of the country or at different academic levels. Future research should sample other student and professional groups. Doing so will help to further establish the psychometric properties and item response analysis conclusions and make the RIS more appropriate for use in other fields. Additionally, future research may examine how scores on the RIS correlate with traditional measures of learning (e.g., grades in individual research courses, collapsed grades across all research courses, the research portion of counselor licensure exams).
Conclusion
As counselors-in-training and professional counselors are increasingly required to demonstrate that they are using evidence-based practices and measuring the effectiveness of their services, they may benefit from assessments of their RI (American Counseling Association, 2014; Gladding & Newsome, 2010). CACREP (2016) has responded to increased accountability by enhancing its research and evaluation standards for both master’s- and doctoral-level counseling students. The American Counseling Association is further supporting discussions about RI through a recent blog post titled “Research Identity Crisis” (Hennigan Paone, 2017). In the post, Hennigan Paone described a hope that master’s-level clinicians would begin acknowledging and appreciating that research helps them work with clients in ways informed by “science rather than intuition” (para. 5). As the call for counselors to become more connected to research grows stronger, it seems imperative that counseling programs assess their effectiveness in bridging the gap between research and practice. The RIS gives counseling programs an option to do exactly that by evaluating how students are learning and growing in relation to research. Further, using this type of outcome measure could provide good modeling at the program level, with the hope of encouraging counselors-in-training to develop both the curiosity and the motivation to infuse research practices (e.g., needs assessments, outcome measures, data analysis) into their clinical work.
Conflict of Interest and Funding Disclosure
The authors reported no conflict of interest or funding contributions for the development of this manuscript.
References
American Counseling Association. (2014). 2014 ACA code of ethics. Alexandria, VA: Author.
Bruning, R. H., Schraw, G. J., Norby, M. M., & Ronning, R. R. (2004). Cognitive psychology and instruction (4th ed.). Upper Saddle River, NY: Pearson Merrill/Prentice Hall.
Council for Accreditation of Counseling & Related Educational Programs. (2016). 2016 CACREP standards. Retrieved from http://www.cacrep.org/wp-content/uploads/2017/07/2016-Standards-with-Glossary-7.2017.pdf
Dillman, D. A., Smyth, J. D., & Christian, L. M. (2009). Internet, mail, and mixed-mode surveys: The tailored design method (3rd ed.). Hoboken, NJ: John Wiley & Sons, Inc.
Embretson, S. E., & Reise, S. P. (2000). Item response theory for psychologists. Mahwah, NJ: Lawrence Erlbaum.
Gladding, S. T., & Newsome, D. W. (2010). Clinical mental health counseling in community and agency settings (3rd ed.). Upper Saddle River, NJ: Prentice Hall.
Hennigan Paone, C. (2017, December 15). Research identity crisis? [Blog post]. Retrieved from https://www.counseling.org/news/aca-blogs/aca-member-blogs/aca-member-blogs/2017/12/15/research-identity-crisis
Heppner, P. P., Wampold, B. E., Owen, J., Thompson, M. N., & Wang, K. T. (2015). Research design in counseling (4th ed.). Boston, MA: Cengage Learning.
Hernon, P., & Dugan, R. E. (2004). Four perspectives on assessment and evaluation. In P. Hernon & R. E. Dugan (Eds.), Outcome assessment in higher education: Views and perspectives (pp. 219–233). Westport, CT: Libraries Unlimited.
Holden, G., Barker, K., Meenaghan, T., & Rosenberg, G. (1999). Research self-efficacy: A new possibility for educational outcomes assessment. Journal of Social Work Education, 35, 463–476.
Jorgensen, M. F., & Duncan, K. (2015a). A grounded theory of master’s-level counselor research identity. Counselor Education and Supervision, 54, 17–31. doi:10.1002/j.1556-6978.2015.00067
Jorgensen, M. F., & Duncan, K. (2015b). A phenomenological investigation of master’s-level counselor research identity development stages. The Professional Counselor, 5, 327–340. doi:10.15241/mfj.5.3.327
Kahn, J. H., & Scott, N. A. (1997). Predictors of research productivity and science-related career goals among counseling psychology doctoral students. The Counseling Psychologist, 25, 38–67. doi:10.1177/0011000097251005
Nunnally, J. C., & Bernstein, I. H. (1994). Psychometric theory (3rd ed.). New York, NY: McGraw-Hill.
Phillips, J. C., & Russell, R. K. (1994). Research self-efficacy, the research training environment, and research productivity among graduate students in counseling psychology. The Counseling Psychologist, 22, 628–641. doi:10.1177/0011000094224008
Readability Test Tool. (n.d.). Retrieved from https://www.webpagefx.com/tools/read-able/
Reisetter, M., Korcuska, J. S., Yexley, M., Bonds, D., Nikels, H., & McHenry, W. (2004). Counselor educators and qualitative research: Affirming a research identity. Counselor Education and Supervision, 44, 2–16. doi:10.1002/j.1556-6978.2004.tb01856.x
Rowan, N., & Wulff, D. (2007). Using qualitative methods to inform scale development. The Qualitative Report, 12, 450–466.
SAS Institute. (2012). SAS [Statistical software]. Retrieved from https://www.sas.com/en_us/home.html
Unrau, Y. A., & Beck, A. R. (2004). Increasing research self-efficacy among students in professional academic programs. Innovative Higher Education, 28(3), 187–204.
Unrau, Y. A., & Grinnell, R. M., Jr. (2005). The impact of social work research courses on research self-efficacy for social work students. Social Work Education, 24, 639–651. doi:10.1080/02615470500185069
Warden, S., & Benshoff, J. M. (2012). Testing the engagement theory of program quality in CACREP-accredited counselor education programs. Counselor Education & Supervision, 51, 127–140. doi:10.1002/j.1556-6978.2012.00009.x
Maribeth F. Jorgensen, NCC, is an assistant professor at the University of South Dakota. William E. Schweinle is an associate professor at the University of South Dakota. Correspondence can be addressed to Maribeth Jorgensen, 414 East Clark Street, Vermillion, SD 57069, maribeth.jorgensen@usd.edu.